March 22, 2015

Docker Setups

Setting up environments is a pain. Sure, you can write shell scripts to automate the process, but what happens when they break? Is there a rollback plan? Sometimes you want to do something from scratch, over and over, so you know the process is repeatable. That’s where Docker comes into play.

Docker was confusing for me to start looking into because it has so many different uses: deployments, testing, development, cloud management. All the tools that integrate with Docker, like Puppet, Chef, and GitHub, left me even more confused about why I needed them. But after banging my head against the wall for a good few weeks, I have found my way through the DevOps craziness.

At its core, Docker is an environment image creator. It builds an environment from a script (a Dockerfile) and saves each step as a layer, which you can think of like a commit in Git. This lets you remove steps from an environment, or add new ones, without having to rebuild the whole image from scratch. Docker does this with Linux containers: it looks like you created a full VM, but you are running directly on the host’s hardware and sharing its resources. It is not as isolated as a VM, but it is most likely isolated enough for what you need. This is useful for development, testing, and deployments because you are guaranteed that only the dependencies YOU want will be in the environment.
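If you want to see this layering for yourself, docker history lists one row per step in an image. A rough sketch (the image name is just a placeholder for anything you have built or pulled locally):

    # Each row is one layer, roughly one per Dockerfile instruction;
    # unchanged steps are reused from cache on the next build.
    sudo docker history <image name>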

Developers like Docker because the old excuse of “it worked on my box” goes away: if the environment is the EXACT same from the developer box to prod (*some restrictions may apply, see Docker for details), then you don’t have to worry about environmental issues. Testing is great because I get the isolated data and dependencies I want, so I can test with confidence that my code works. Deployments are awesome because an image is a package I can use with Puppet and Chef to spin up new Docker instances quickly and manage thousands of containers easily. And all of this can be shared with everyone through Docker Hub.
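Sharing an image on Docker Hub is basically a tag and a push. A rough sketch, assuming you already have an account and a local image (the repository name here just matches the Docker Hub link later in this post):

    # Log in to Docker Hub, tag the local image with your repository name, then push it
    sudo docker login
    sudo docker tag <image id> supermitsuba/testimages
    sudo docker push supermitsuba/testimages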

I have an example Dockerfile on GitHub: https://raw.githubusercontent.com/supermitsuba/RestApiDiscovery/master/Dockerfile. The best part is that I can spin up as many environments as I need. I have a VM in the cloud and a server at home, and I can use Docker to deploy my code to both the same way. Here is how the Dockerfile looks:

    FROM ubuntu:latest
    MAINTAINER Jorden Lowe <supermitsuba@gmail.com>
    RUN apt-get -qq update
    RUN apt-get -qq install golang
    RUN apt-get -qq install git
    RUN mkdir /tmp/GO
    RUN mkdir /tmp/GO/src
    ENV GOPATH /tmp/GO
    RUN git clone https://github.com/supermitsuba/RestApiDiscovery.git /tmp/GO/src/RestApiDiscovery
    RUN apt-get -qq install mercurial
    WORKDIR /tmp/GO/src/RestApiDiscovery
    RUN go get
    RUN go test
    RUN go build
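Before walking through the file, here is roughly how you turn it into an image: run docker build from the directory containing the Dockerfile. The tag I use below just matches the Docker Hub repository linked later in this post; any name works:

    # -t names (tags) the resulting image; the trailing dot is the build context
    sudo docker build -t supermitsuba/testimages .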

What you will notice is that most of the Dockerfile is actually readable. I run a few apt-get commands (no need for sudo or su; you are already root inside a container). There is an environment variable being set for GOPATH. I even have a git clone and a WORKDIR command where I build the code. Nothing too difficult, and nothing you wouldn’t see in a shell script anyway. So how do you kick this thing off? Let me throw some commands at you and explain each:

    sudo docker pull <image name>
    sudo docker run -d -p 8088:8080 <image name> <command>

While I said you don’t need sudo inside the container, you do need it on the host machine. The first command pulls down the image. The image name is something you get from Docker Hub, here: https://registry.hub.docker.com/u/supermitsuba/testimages/tags/manage/. The second command runs the image detached (-d) and uses -p to map port 8088 on the host to port 8080 inside the container, followed by the command to run. This, I found out, is how you expose ports in Docker. That is probably a good idea: since everything runs as root, you should expose as little as possible.
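Once the container is up, here are a couple of commands I use to check that it actually started (the container ID comes from the first column of docker ps, and the URL path is just a placeholder for whatever route the API serves):

    # List running containers to grab the container ID
    sudo docker ps

    # Tail the container's output to make sure the Go service started cleanly
    sudo docker logs <container id>

    # Hit the mapped host port; 8088 on the host forwards to 8080 in the container
    curl http://localhost:8088/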

For more information on Docker commands, check out this tutorial: https://www.digitalocean.com/community/tutorials/docker-explained-using-dockerfiles-to-automate-building-of-images, or look at https://www.docker.com/ for documentation and tutorials.