Docker Workflow from Development to Production - Part 2

Now that we know what Docker is and have it installed, let's look at how we can use it to deploy an app.

We are currently working on a project as part of HackReactor's curriculum. The project involves a MySQL database and an ExpressJS API on the back end handling basic CRUD operations, and an AngularJS front end.

In our case, the app will be hosted on a DigitalOcean droplet, but this same procedure can be used to deploy to any Linux server.

Start by creating a new DigitalOcean droplet using the Docker application image.

The directory structure of our app looks like this:

- app
  - client
  - server
  - tests


First, create a Dockerfile for the database image, in a file named mysql_Dockerfile:

FROM mysql
COPY server/config/schema.sql schema.sql

This Dockerfile will be built with the command

docker build -t yourusername/mysql:v1 -f mysql_Dockerfile .

This will create a docker image. You can use the docker run command to spin up containers of this image. You can also share this image with others and push this image to the DockerHub Registry.
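
To share the image on the Docker Hub registry, log in and push it by its tag (a sketch, assuming yourusername is your Docker Hub account name):

```shell
# Log in to Docker Hub (prompts for your credentials)
docker login

# Push the tagged image so others can pull it by name
docker push yourusername/mysql:v1
```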

The -t flag lets you name your image so you can share it or reference it later by name. The -f flag lets you point to a specific Dockerfile. The period at the end of the command sets the build context to the current directory. If you omit the -f flag, docker build defaults to the file named 'Dockerfile' in the build context.

In the case of the MySQL Dockerfile above, we are initializing a new MySQL database using the schema defined at server/config/schema.sql. With this approach, we still need to import the schema after the container starts, using the command docker exec myapp-db /bin/bash -c "mysql -uroot -ppassword < schema.sql". If instead we wanted to use an already established MySQL database, we could replace the COPY server/config/schema.sql schema.sql line with COPY my/data/dir /var/lib/mysql. The /var/lib/mysql directory is where the official mysql image stores its data.
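
As an alternative sketch (assuming your existing data lives at /my/data/dir on the host), you can skip the COPY line entirely and bind-mount the data directory when starting the container:

```shell
# Mount the host's MySQL data directory into the container;
# the mysql image will use whatever databases it finds there
docker run -d -p 3306:3306 --name myapp-db \
    -v /my/data/dir:/var/lib/mysql \
    yourusername/mysql:v1
```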


Next, create the application image's Dockerfile in a file named node_Dockerfile:

FROM node
COPY . /app
WORKDIR /app
RUN /bin/bash -c "npm install nodemon bower -g && npm install && bower install --allow-root"
CMD ["nodemon", "server/server.js"]

The WORKDIR /app line matters: without it, RUN and CMD would execute in the image's root directory, where server/server.js does not exist.

This node Dockerfile will be built using the command:

docker build -t yourusername/node:v1 -f node_Dockerfile .

It will copy the current directory into the /app folder of the container and install nodemon, bower, and any necessary node and bower dependencies. Note: bower install must be run with the --allow-root option, since commands inside the container run as root.

Once you get these two images built, you should be able to spin up and link the containers together using the following commands:

docker run -d -p 3306:3306 --name myapp-db yourusername/mysql:v1
docker exec myapp-db /bin/bash -c "mysql -uroot -ppassword < schema.sql"
docker run -d -p 3000:3000 --name myapp-web --link myapp-db:myapp-db yourusername/node:v1

The -d flag tells docker to run these containers as daemons in the background. The -p flag maps ports from the docker container to the host machine so that you can access the container from the outside world. We use --name to name our containers so that we can use --link to link them. Linking a container causes Docker to automatically create environment variables the containers can use to communicate with each other. For example:

docker exec myapp-web printenv

results in output that includes link environment variables such as MYAPP_DB_PORT, MYAPP_DB_PORT_3306_TCP_ADDR, and MYAPP_DB_PORT_3306_TCP_PORT (the names are derived from the link alias).

It also adds an entry to the /etc/hosts file:

docker exec myapp-web cat /etc/hosts

results in entries like this (IP addresses omitted here):

...      dc32fe6ee719      localhost
...
...      myapp-db          f9b760bd9226

These environment variables and host entries can be used inside our app to communicate between containers. For example:


var knex = require('knex')({
  client: 'mysql',
  connection: {
    // 'myapp-db' resolves via the /etc/hosts entry created by --link
    host: process.env.MYAPP_MYSQL_SERVER || 'myapp-db',
    user: 'root',
    password: process.env.MYAPP_MYSQL_PASSWORD || 'password',
    database: process.env.MYAPP_MYSQL_DB || 'default',
    charset: 'utf8'
  }
});

If you don't see any errors after running the above docker run commands, run docker ps to see if the containers are up and running.
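
If a container is missing from the docker ps output, check its logs to see why it exited, for example:

```shell
# List all containers, including ones that have exited
docker ps -a

# Print a container's stdout/stderr to diagnose startup failures
docker logs myapp-web
docker logs myapp-db
```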

Stay tuned for part 3 where we explore docker-compose.
