Production is like development but with more stress and frustration
Just when I had my Docker Cloud deployment all figured out, Docker went ahead and ruined everything by announcing they were dropping support for it in May. So here I am, trying to figure out how to use Docker Compose in production. Luckily, I already had a working Docker Compose setup for development. Unfortunately, Docker didn't want to make things too easy for me.
I wish it were as easy as provisioning a new host with Docker Machine, targeting the remote host by setting the DOCKER_HOST environment variable, and then firing off my docker-compose up command. Sadly, that is not the case.
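For reference, here's roughly the flow I was hoping for; the machine name and driver are placeholders, not my actual setup:

# Provision a remote host running a Docker daemon (driver and name are hypothetical)
docker-machine create --driver digitalocean prod-host
# Point the local docker and docker-compose clients at the remote daemon
eval "$(docker-machine env prod-host)"
# Bring the stack up on the remote host
docker-compose up -d

While I can get everything running perfectly on my local host with docker-compose up, when I target the remote machine, my containers complain that their run scripts don't exist: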
ERROR: for cts_worker_1 Cannot start service worker: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"./run_celery.sh\": stat ./run_celery.sh: no such file or directory": unknown
ERROR: for cts_web_1 Cannot start service web: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"./run_web.sh\": stat ./run_web.sh: no such file or directory": unknown
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"./run_web.sh\": stat ./run_web.sh: no such file or directory": unknown
ERROR: for worker Cannot start service worker: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"./run_celery.sh\": stat ./run_celery.sh: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
It seems this is because the run scripts are added as relative volume mounts:
web:
  volumes:
    - .:/code
Since the notation for this mount is [host_path]:[container_path], and my DOCKER_HOST environment variable points at the remote machine, Docker is actually looking for the code in the /root folder of the remote machine I provisioned. I'd prefer it didn't work this way, but it does, so I'll have to make do.
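A quick sanity check, assuming a machine named prod-host, is to look at the remote filesystem directly; my run scripts are nowhere to be found there:

docker-machine ssh prod-host ls /root
# no run_web.sh or run_celery.sh in sight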
One potential solution is to use the relatively new docker-machine mount command to sync my code to the remote machine. I, however, couldn't get this to work in a way that made sense to me. For one, the docs don't provide much help. They also make it seem like the command is only designed to mount files from the machine onto localhost rather than the other way around. Anyway, I tried executing the following in the root directory of the source code I wanted synced to my remote machine:
docker-machine mount [machine-name]:[machine-mount-point] .
FUSE, presumably the underlying tool handling the mount, complained that the directory wasn't empty and invited me to use the "nonempty" option if I was feeling brave. My frustration was close enough to bravery, so I tried it; oddly, docker-machine doesn't even seem to support passing that option through. Out of curiosity, I also created a new empty folder and mounted that, which worked, but that doesn't actually solve my issue. Maybe I could copy my code into that new folder, but then I'd have a duplicate local copy of everything, which would be silly.
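For the record, here's the variant that did mount cleanly; the machine name and remote path are hypothetical:

mkdir code-mount                                 # brand-new, empty local directory
docker-machine mount prod-host:/root/code code-mount
ls code-mount                                    # shows the remote machine's files

Again, this exposes the remote directory on my laptop, which is the opposite direction of what I actually need.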
I considered docker-machine scp but quickly realized how inefficient it was for dealing with code changes.
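For completeness, a one-off copy looks something like this (the remote path is illustrative), but every code change means re-copying the whole tree:

docker-machine scp -r . prod-host:/root/code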
Then, after reading what seemed like hundreds of GitHub issues and SO questions, I realized that the solution was actually much simpler. I had been misled from the very beginning by following the Docker docs for setting up a Django project. I didn't need to mount the code as a volume at all, since my Dockerfile already copied the code into the container:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y \
    binutils \
    libproj-dev \
    gdal-bin \
    libgeoip1 \
    python-gdal
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
After deleting that relative bind-mount volume, everything worked hunky-dory.
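For reference, the web service in my compose file now looks something like this; the command and port are placeholders rather than my exact settings:

web:
  build: .
  command: ./run_web.sh
  ports:
    - "8000:8000"
  # no ".:/code" bind mount here; the image already contains the code via COPY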