Getting Started
Like so many of my projects, I'm coming back to this one after several months of not touching it, so I need to reorient myself a bit and then continue stumbling forward until I end up with something that works. The end goal is to be able to deploy the set of Docker containers that make up a web application using Docker Cloud.
So far I have three Dockerfiles: one for Postgres, another that builds on the Postgres one to install PostGIS, and a third that incorporates these and all the other services my Django web application requires. I've also already created a private repository on Docker Cloud.
Step one is to deploy a node in Docker Cloud.
Deploy a Node
The needs of my web application are light at the moment, so I've disabled swarm mode in Docker Cloud. For the sake of cost, I'll be sticking with a single server instance.
I'll be launching on Digital Ocean, which I've linked to my Docker Cloud account. Digital Ocean recently increased the sizes of their droplets, and those changes don't yet seem to be reflected in the node creation options on Docker Cloud, so I'm uncertain what size droplet I'll actually end up with. I'm aiming for the $10/month size, which should be 2 GB of RAM, 50 GB of disk space, and a single vCPU. On Docker Cloud I'm choosing what used to be the $10/month option, listed as 1 GB of RAM, 30 GB of disk space, and a single vCPU. Hopefully when I create this, it will either create the upgraded version of the droplet or allow me to resize it afterwards.
As it turns out, the created droplet reflects the options in Docker Cloud rather than the recent upgrade to droplet sizes on Digital Ocean. That's bad: the node created by Docker Cloud still costs $10/month but comes with 1 GB less RAM and 20 GB less disk space. So the first thing I'm going to do is power off the newly created droplet and resize it through the Digital Ocean web interface.
Powering off the droplet involves its own problem solving, of course, since the Docker Cloud automated deployment doesn't copy over any ssh keys you may have added to your Digital Ocean account. We have to do that ourselves, and the process isn't documented very clearly.
Connecting to the Node
I could just hard reset the droplet through the Digital Ocean web interface, but I'll want to ssh into the node at some point, so I may as well tackle this problem first. The Docker Cloud docs don't go into much detail on this topic, but you apparently need to create an application stack that copies your ssh key to all of your nodes.
It should look like this:
authorizedkeys:
  autodestroy: always
  deployment_strategy: every_node
  environment:
    - AUTHORIZED_KEYS=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDY....
  image: 'dockercloud/authorizedkeys:latest'
  volumes:
    - '/root:/user'
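If you use the docker-cloud CLI rather than the web interface, something like this should create and deploy the stack (I went through the web UI myself, so treat the command and the authorizedkeys.yml filename as an untested aside):

docker-cloud stack up -f authorizedkeys.yml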
You can get the value for AUTHORIZED_KEYS by running the more command on whichever public key you want to use to connect to the nodes. Make sure you include all of the gobbledygook except for the comment at the end, the bit with your email or .local address.
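For example, assuming your key pair lives in the default location:

more ~/.ssh/id_rsa.pub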
Interestingly, this granted me access to the node droplet, but after shutting it down with the command shutdown now, I could no longer connect. Digital Ocean reports that the droplet is still running, so I'm not sure if the droplet never shut down or if Docker Cloud immediately restarted it after it went down. Attempting to restart the authorizedkeys stack has no effect.
So I'll just go ahead and attempt a hard reset through Digital Ocean to see what happens...
...and it worked! I was able to resize the droplet to get an extra gig of RAM and 20 gigs of disk space, so that's pretty swell. After turning the droplet back on, I can also ssh into it as the root user, so my public key is still there and functioning as it should.
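For reference, since the stack above mounts /root on each node, the key lands in root's authorized_keys, and connecting is just a plain ssh session (the address below is a placeholder for the droplet's public IP):

ssh root@<droplet-ip>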
Deploying the Web App
Now that we have our node configured properly, we need to add services to it. I'll be using images from a private repository on Docker Hub. The repository is already set up, but is currently empty, so I'll need to add my images to it first.
First, I log in on the command line with my Docker Hub credentials:
docker login --username=dockerhubusername
I'm prompted for my password, after which I'm told I've successfully logged in. Now, I tag the image I want to push to the repository:
docker tag cc0988358705 dockerhubusername/repository-name:django
And push it to the repository:
docker push dockerhubusername/repository-name
Since I already have a docker-compose.yml file which I've been using for testing, I can copy that file and adapt it for use as my docker-cloud.yml file. The first change is to point the image keys in the file at the remote repository I pushed my images to. For my app, I need two images from my private repository (django and PostGIS) and two from their own public repositories (Redis and RabbitMQ).
I can remove the reference to the volume containing my code, since the code will already exist in the image the service pulls from. I also need to remove the build and depends_on config values, since Docker Cloud doesn't support them.
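As a rough sketch of the shape this takes (the service names and image tags mirror my setup and are placeholders, and the links entries are my assumption about how the containers find each other):

django:
  # image now points at the private Docker Hub repo instead of a local build: entry
  image: 'dockerhubusername/repository-name:django'
  links:
    - postgis
    - redis
    - rabbitmq
postgis:
  image: 'dockerhubusername/repository-name:postgis'
redis:
  image: 'redis:latest'
rabbitmq:
  image: 'rabbitmq:latest'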
One thing I don't like is that I have to hardcode my passwords into the Docker Cloud config file. Unfortunately, I couldn't figure out a reliable workaround, since the most obvious solutions are blocked by the fact that, unlike Docker Compose, Docker Cloud doesn't support the env_file option.
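So the PostGIS service, for example, ends up with its credentials inline, along these lines (POSTGRES_PASSWORD is the standard Postgres image variable; the value here is obviously a placeholder):

postgis:
  image: 'dockerhubusername/repository-name:postgis'
  environment:
    # hardcoded because env_file isn't supported
    - POSTGRES_PASSWORD=not-my-real-password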
Other than that, the config file stays the same as the Compose version.