Programming
docker build development-environment docker-compose
Updated Sat, 21 May 2022 05:06:03 GMT

Docker development workflow for compiled components in a docker-compose setup


I'm working on a service in a 'system' orchestrated using docker-compose. The service is written in a compiled language and I need to rebuild it when I make a change. I'm trying to find the best way to quickly iterate on changes.

I've tried two workflows. Both rely on mounting the source directory into the container via a volumes: entry, so the container always sees the latest source.

A.
  • Bring up all the supporting containers with docker-compose up -d
  • Stop the container for the service under development
  • Start a replacement container from the same image: docker-compose run --name SERVICE --rm SERVICE /bin/bash
  • Within that container, compile and run the application on the exposed port.
  • To restart, stop the running process, recompile, and run again.
B.
  • (requires the Dockerfile CMD to build and then run the service)
  • Stop the service: docker-compose kill SERVICE
  • Restart the service: docker-compose up -d --no-deps SERVICE

The problem is that both take far longer to restart than running the service locally on my laptop, outside Docker. This setup seems fine for interpreted languages that can hot-reload changed files, but I've yet to find a suitably fast workflow for compiled-language services.
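For reference, both workflows assume the source is mounted with a volumes: entry along these lines in docker-compose.yml (the service name, paths, and port here are hypothetical):

```yaml
services:
  SERVICE:
    build: .
    volumes:
      - ./src:/app/src    # host source tree, always current inside the container
    ports:
      - "8080:8080"       # the service's exposed port
```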




Solution

I would do this:

Run docker-compose up but:

  • use a host volume for the directory of the compiled binary instead of the source
  • use an entrypoint that does something like

entrypoint.sh:

#!/bin/bash
# On SIGHUP, kill the old binary; the loop then starts the new one.
trap 'pkill -f the_binary_name' SIGHUP
trap 'exit' SIGTERM
while true; do
  # Background + wait so the traps fire immediately (bash defers
  # traps while a foreground command is running).
  ./the_binary_name &
  wait $!
done
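One subtlety worth knowing: bash delivers a trapped signal immediately only while the shell is blocked in wait, not while a foreground command runs. That is likely why docker kill -s SIGHUP can appear not to work if the binary runs in the foreground of the entrypoint loop. This self-contained sketch (no Docker needed; sleep stands in for the compiled binary) shows the trap firing promptly during wait:

```shell
#!/bin/bash
# Demonstrates a SIGHUP trap firing while the shell is blocked in
# `wait`. `sleep` stands in for the compiled binary.
restarts=0
trap 'restarts=$((restarts + 1))' SIGHUP

( sleep 1; kill -HUP $$ ) &   # simulates `docker kill -s SIGHUP ...`
sleep 30 &                    # stand-in for the long-running binary
child=$!
wait "$child"                 # interrupted as soon as SIGHUP arrives
kill "$child" 2>/dev/null     # clean up the stand-in

echo "restarts=$restarts"     # prints restarts=1
```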

Write a script to rebuild the binary, and copy it into the volume used by the service in docker-compose.yml:

# Run a container to compile the binary
docker run -ti -v "$SOURCE":/path -v "$DEST":/target some_image build_the_binary
# Copy it to the host volume directory
cp "$DEST"/... /volume/shared/with/running/container
# Signal the container to restart the process
docker kill -s SIGHUP container_name

To compile the binary you run this script, which mounts the source and a destination directory as volumes. You can skip the copy step if $DEST is the same directory as the volume shared with the "run" container. Finally, the script signals the running container so it kills the old process (which was running the old binary) and starts the new one.

If the shared volume makes compiling inside a container too slow, you can also compile on the host and do only the copy-and-signal steps, so the binary still runs in a container.
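A sketch of that host-side variant, under stated assumptions: the build step, file names, and directories below are stand-ins, and the docker kill line is left as a comment since it needs a running container. Building to a temporary name and renaming avoids "text file busy" errors if the old binary is still executing:

```shell
#!/bin/bash
# Hypothetical host-side rebuild: compile locally, swap the binary
# into the shared volume directory, then signal the container.
SHARED=$(mktemp -d)              # stands in for the host volume dir

build_the_binary() {             # stand-in for your real compiler invocation
  printf 'new-build' > "$1"
}

# Build to a temp name, then rename: the rename is atomic and avoids
# "text file busy" while the old binary is still running.
build_the_binary "$SHARED/the_binary_name.tmp"
mv "$SHARED/the_binary_name.tmp" "$SHARED/the_binary_name"

# In the real workflow, now signal the container:
#   docker kill -s SIGHUP container_name
echo "deployed: $(cat "$SHARED/the_binary_name")"
```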

This solution has the added benefit that your "runtime" image doesn't need all the dev dependencies. It could be a very lean image with just a bare OS base.
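A minimal sketch of such a lean runtime image, assuming the entrypoint script above and a /app mount point for the binary (the base image and paths are assumptions):

```dockerfile
# Lean runtime image: no compiler toolchain at all.
FROM debian:stable-slim
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
WORKDIR /app                      # the binary is volume-mounted here
ENTRYPOINT ["/entrypoint.sh"]
```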





Comments (1)

  • +1 – Hi, thanks for this in depth answer. This has explained a lot. I've been able to get it working much as you outline here. One difference, I wasn't able to get docker kill -s SIGHUP working, I'm using docker exec web pkill -f container_name instead. This might not be as fast but switching to this method has cut the time for a single 'iteration' down significantly. Thanks. — Jan 19, 2016 at 12:05