Learning Docker - Part 4 - Creating a LEMP stack using docker

In this tutorial, I am demonstrating how easy it is to create a stack (a development/production environment) of your own using docker-compose. I am going to create a LEMP stack (Linux - nginx - MariaDB - PHP 5.6). I’m not eager to write my own image yet because, well, there are already images available for nginx, MariaDB, and PHP 5.6. What I want is to link them together to create my own service.

  1. Go to docker hub and search for nginx, mariadb, and php, and choose the images that I want to use to compose my stack
    • nginx:1.11.8-alpine
    • mariadb:5.5
    • php:5.6-fpm-alpine

    The reason I choose the alpine images over the others is that alpine images are usually much smaller.

  2. Create a project folder for the LEMP stack

     mkdir LEMP
     cd LEMP
    
  3. Create a www folder to contain PHP source code, and make a phpinfo() test script

     mkdir www
     echo "<?php phpinfo(); ?>" > ./www/index.php
    
  4. Create a folder for containing log files

     mkdir logs
    
  5. Create a ./nginx/site.conf file containing the server’s configuration

     server {
         listen 80;
    
         root /www;
    
         index index.html index.htm index.php;
    
         location / {
             try_files $uri $uri/ /index.php?$args;
         }
    
         location ~ \.php$ {
             try_files $uri =404;
             include fastcgi_params;
             fastcgi_pass phpfpm:9000;
             fastcgi_index index.php;
             fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
         }
     }
    

    This is pretty much a standard configuration for an nginx + php-fpm site, except for this line:

     fastcgi_pass phpfpm:9000;
    

    We are going to link the nginx container with the phpfpm container, so the hostname phpfpm will resolve from inside the nginx container (docker adds an entry to the container’s /etc/hosts).
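    Once the stack is up (step 8 below), you can see the effect of the link for yourself. This assumes docker-compose’s default container naming for a project folder called LEMP:

```shell
# Show the hosts entry that docker generated for the linked container.
# The container name lemp_nginx_1 is docker-compose's default for a
# project folder named LEMP.
docker exec lemp_nginx_1 cat /etc/hosts | grep phpfpm
```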

  6. Create a docker-compose.yml file with the following content

     version: '2'
     services:
         mariadb:
             image: mariadb:5.5
             user: $UID
             environment:
             - MYSQL_ROOT_PASSWORD=password123
             - MYSQL_DATABASE=yii
             - MYSQL_USER=yii
             - MYSQL_PASSWORD=abcd1234
         phpfpm:
             image: php:5.6-fpm-alpine
             user: $UID
             volumes:
             - ./www:/www
             links:
             - mariadb
         nginx:
             image: nginx:1.11.8-alpine
             user: $UID
             volumes:
             - ./nginx:/etc/nginx/conf.d
             - ./logs:/var/log/nginx
             - ./www:/www
             ports:
             - 80:80
             links:
             - phpfpm
             command: /bin/sh -c "nginx -g 'daemon off;'"
    

    Things here are pretty straightforward:

    • We have 3 services in our stack: mariadb, phpfpm, and nginx
    • environment sets the variables that initialize MariaDB: the root password, a default database, and a user with its password
    • ./nginx is mounted as nginx’s conf.d folder, so site.conf gets picked up as a site configuration
    • ./logs is mounted as nginx’s logs folder
    • ./www is mounted on both the nginx and phpfpm containers, so the php process has access to the same file paths as nginx
    • mariadb is linked to phpfpm, and phpfpm is linked to nginx. There’s no need to link nginx with mariadb, since static files don’t make database connections
  7. Check your folder’s structure

     $ tree ./
     ./
     ├── docker-compose.yml
     ├── logs
     ├── nginx
     │   └── site.conf
     └── www
         └── index.php
    
  8. Run the stack

     export UID
     docker-compose up -d
    
  9. Go to localhost and check if your phpinfo() script has been successfully executed
  10. Test the database connection by replacing the content of ./www/index.php with the following

    <?php
    $servername = "mariadb";
    $username = "yii";
    $password = "abcd1234";
    
    // Create connection
    $conn = new mysqli($servername, $username, $password);
    
    // Check connection
    if ($conn->connect_error) {
        die("Connection failed: " . $conn->connect_error);
    }
    echo "Connected successfully";
    ?>
    

    There should be an error message that says something like this

    Fatal error: Class 'mysqli' not found in /www/index.php on line 7
    

    We’ll just have to install mysqli. Luckily, it isn’t too complicated; the instructions are already on the php docker hub repo.

    docker exec lemp_phpfpm_1 docker-php-ext-install mysqli
    

    Restart the service

    docker-compose restart
    
  11. Now, again, connect to localhost. It should say Connected successfully
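    As a final sanity check from the command line (assuming curl is installed and the stack is up):

```shell
# Fetch the page through the full nginx -> php-fpm -> mariadb chain.
# With the test script from step 10 in place, this should print
# "Connected successfully".
curl -s http://localhost/
```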

Learning Docker - Part 3 - Using docker-compose

A problem with creating docker containers is that the docker run command can get a little bit complicated, especially as the configuration grows. Similarly, when your application needs multiple docker containers in order to run, starting the containers individually and linking them together can be a mess for newbies. Docker-compose solves that problem: with a docker-compose.yml file, you define the services that make up your app, and run them all with one single, simple command.

Installation

Follow the instructions on Docker’s website

Deploying a wordpress website with docker-compose

  1. Create a folder containing your project

     mkdir wordpress_mysql
     cd wordpress_mysql
    
  2. Create a docker-compose.yml file with the following content

     version: '2'
     services:
         mysql:
             image: mysql:latest
             environment:
                 MYSQL_ROOT_PASSWORD: somepassword
         wordpress:
             image: wordpress:latest
             ports:
                 - 8080:80
             depends_on:
                 - "mysql"
             environment:
                 WORDPRESS_DB_PASSWORD: somepassword
    
  3. Initialize the service with docker-compose

     docker-compose up -d
    

    It’s basically done. However, since the mysql container takes some time to initialize, the wordpress container may fail on its first attempt to connect to the mysql database, so you have to restart the stack

     docker-compose down
     docker-compose up -d
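    A gentler alternative to the blind down/up cycle is to start mysql first and poll it until it answers, then bring up wordpress. This is a sketch, assuming the service names and password from the compose file above, and that mysqladmin is available in the mysql image (it is in the official one):

```shell
# Start only mysql, wait (up to ~30s) until it accepts connections,
# then start wordpress.
docker-compose up -d mysql
for i in $(seq 1 30); do
    if docker-compose exec -T mysql mysqladmin ping -psomepassword --silent; then
        break
    fi
    sleep 1
done
docker-compose up -d wordpress
```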
    
  4. Visit localhost:8080. It works!!!

Learning Docker - Part 2 - Docker cheatsheet

Config to run docker without sudo

By default, docker needs to be run with root privilege, but we don’t want that for various reasons. Follow these steps to allow running docker without sudo:

  1. Add a group named docker:

     sudo groupadd docker
    
  2. Add the user that you want to run docker without sudo to the docker group

     sudo usermod -aG docker ${USER}
    
  3. Restart docker daemon:

     sudo service docker restart
    
  4. Re-login, or run the following command:

     newgrp docker
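You can confirm the group change took effect with a quick check:

```shell
# "docker" should appear in your group list; if it does not, re-login
# (or run newgrp docker) and check again.
id -nG | grep -w docker || echo "not in the docker group yet; re-login first"
```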
    

Note: This allows running docker without using the sudo command. However, files created by docker are, by default, still owned by root. To truly avoid giving root privilege to docker, specify the option -u <user id> when running docker run. For example:

docker run -u `id -u $USER` <image>

Or

docker run -u $UID <image>

If you are using docker-compose, specify the user option in your docker-compose.yml file. For example:

version: '3'
services:
  jekyll:
    user: $UID
    image: btquanto/docker-jekyll
    ports:
      - 4000:4000
    volumes:
      - ./:/src
    command: jekyll serve -H 0.0.0.0 --draft
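One caveat with user: $UID — bash sets the $UID shell variable but does not export it, so docker-compose would substitute an empty string. Export it before bringing the service up:

```shell
# Export the shell's UID so docker-compose can substitute it into the
# user: option, then bring the service up.
export UID
docker-compose up -d
```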

Basic docker commands

  1. List all images:

     docker images
    
  2. List all running containers:

     docker ps
    

    List all containers:

     docker ps -a
    
  3. Create and run a new container:

     docker run <image name>
    

    Create a new container, and give the container a name

     docker run --name <container name> <image name>
    

    Create a new container in interactive mode, and run bash

     docker run -ti <image name> /bin/bash
    

    Create a new container in detached mode

     docker run -d <image name>
    

    Run a docker container as a different user (for security)

     docker run -u `id -u <username>` <image name>
    

    Run a docker container as the current user

     docker run -u $UID <image name>
    

    Well, you can combine these together, for example:

     docker run -u $UID -dti --name ubuntu1604 ubuntu:16.04 /bin/bash
    
  4. Attach to a container:

     docker attach <container id/ name>
    
  5. Detach from a container:

     <Ctrl> + P,Q
    
  6. Start a container:

     docker start <container id/ name>
    

    Start and attach to a container

     docker start -a <container id/ name>
    
  7. Stop a container:

     docker stop <container id/ name>
    
  8. Remove a container:

     docker rm <container id/ name>
    
  9. Remove an image:

     docker rmi <image name>
    
  10. Create an image from a container:

    docker commit <container id/ name> <new image name>
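    Tying several of these commands together, here is a sketch of the typical container-to-image workflow (the container and image names are just examples):

```shell
# 1. Create a named container and do some work inside it (then exit).
docker run -ti --name mycontainer ubuntu:16.04 /bin/bash
# 2. Freeze the container's current state as a new image.
docker commit mycontainer my_custom_image
# 3. New containers started from the image include those changes.
docker run -ti my_custom_image /bin/bash
```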
    

Learning Docker - Part 1 - Getting started

The problems I had

I have been writing a bunch of web application skeletons for later usage, and have been using vagrant + chef for automatic local environment deployment.

At first, things were cool. Every time I needed to deploy a separate environment for development, I just needed to clone the project and the cookbook submodules, then run vagrant up. It took a while to get the virtual machine up and running, and obviously running multiple VMs was not a really healthy choice for my RAM, but it did its job.

Then, a few months later, I started another project, and some chef recipes failed, so I had to fix the recipes to get everything up and running again. Some cookbooks were open source and had not been updated for quite some time, so I had to fork the cookbook repos to my github in order to save the changes.

Then, a few weeks later, I trained an undergraduate junior on working with one of the frameworks that I had written a skeleton for. I didn’t want to stress system administration stuff too much, and wanted to demonstrate a good example of a separated development environment, so I just gave her the skeleton repo. Again, it needed some fixes in order to work.

And so on. The more I worked with vagrant + chef, the more I realized that the problem with chef recipes is that they will fail at some point, and fixing recipes is not always straightforward.

Thus, I have been looking for an alternative.

Why Docker

Docker has been around for a while, but I have only truly started learning it recently, and it has exceeded my expectations.

There are many videos and blogs introducing what Docker is and what the differences between containerization and virtualization are, so I won’t be repeating them. I’ll just list the reasons why I find Docker fascinating.

  1. The problem with recipes is (almost) gone. Although docker images are usually built with a Dockerfile, and building an image from the same Dockerfile may still fail at some point in the future, the image itself is sharable and reusable.
  2. Thus, when developing an application, I just have to create the isolated development environment once, package it into a docker image, and use the image to create a docker container that has everything ready.
  3. The same image can be reused for different application skeletons. Well, the same can be said for vagrant + chef. However, it can be done in a much simpler manner with docker.
  4. I can add more changes, and create a new image from a base image, extending my application.
  5. It’s lightweight, much more lightweight than VMs, and I have no problems running multiple containers at the same time.

I’ll add more reasons later when I can think of something else.

Installing docker

You should check out Docker’s website

First contact

First contact with Docker will be deploying a Jekyll page.

  1. Clone a jekyll template. I’m using Jekyll Mon Cahier in this example. It’s a pretty cool template, so make sure to check it out.

     $ git clone https://github.com/btquanto/jekyll_mon_cahier.git ./jekyll_site
     $ cd ./jekyll_site
    
  2. I don’t want to get involved in building a new jekyll image yet, so I am using btquanto/docker-jekyll from docker hub.

     $ sudo docker pull btquanto/docker-jekyll
    
  3. Create and run a jekyll docker container

     $ sudo docker run -u $UID -d --name jekyll \
         -v `pwd`:/src \
         -p 4000:4000 \
         btquanto/docker-jekyll jekyll serve -H 0.0.0.0
    
  4. Check your localhost:4000. In most cases, it should work. Yey!
  5. It didn’t work for me, though, when I was using the theme Lanyon. So, let’s see what the error is, by starting the container again.

     $ sudo docker start -a jekyll
    

    And the error shown is that I don’t have the gem redcarpet

     Dependency Error: Yikes! It looks like you don't have redcarpet or one of its dependencies installed.
    
  6. Let’s install redcarpet by extending the btquanto/docker-jekyll image
    1. Remove the failed container

       $ sudo docker rm jekyll
      
    2. Make a Dockerfile with the following content

       FROM btquanto/docker-jekyll
       MAINTAINER [email protected]
       RUN gem install redcarpet
      
    3. Build the docker image

       $ sudo docker build -t my_jekyll_image .
      
  7. Recreate the jekyll container with the new image

     $ sudo docker run -u `id -u $USER` -d \
         --name jekyll \
         -v `pwd`:/src \
         -p 4000:4000 \
         my_jekyll_image jekyll serve -H 0.0.0.0
    
  8. My site is now available at localhost:4000. Yey!

Implementing a self-signed SSL certificate in nginx

  1. Generate myapp.key and myapp.crt. The location is not really important, but make sure that the user that runs nginx has access to them.

     sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /path/to/myapp.key -out /path/to/myapp.crt
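    To sanity-check the generated pair, you can inspect the certificate with openssl. Here is a self-contained run against throwaway files in /tmp (-subj skips the interactive prompts; the CN is just a placeholder):

```shell
# Generate a throwaway key/cert pair non-interactively, then print the
# certificate's subject and validity window to verify it looks right.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /tmp/myapp.key -out /tmp/myapp.crt \
    -subj "/CN=domain.dot.com"
openssl x509 -in /tmp/myapp.crt -noout -subject -dates
```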
    
  2. Then, add ssl configuration to your nginx site config

     listen 443 ssl;
     ssl_certificate       /path/to/myapp.crt;
     ssl_certificate_key   /path/to/myapp.key;
    

    For example:

     server {
         listen 443 ssl;
    
         # Edit these as fit
         server_name           domain.dot.com;
         ssl_certificate       /path/to/myapp.crt;
         ssl_certificate_key   /path/to/myapp.key;
    
         root                  /path/to/my/app/root/folder;
         access_log            /path/to/log/folder/access.log;
         error_log             /path/to/log/folder/error.log;
    
         location / {
             try_files         $uri @flask;
         }
    
         # My app is a flask app, served at localhost:8000
         location @flask {
             proxy_set_header   Host $http_host;
             proxy_set_header   X-Real-IP $remote_addr;
             proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    
             proxy_redirect     off;
             proxy_pass         http://127.0.0.1:8000;
         }
     }
    
  3. However, your site is no longer serving port 80. To redirect requests from port 80 to port 443 automatically, add another server block

     server {
         listen 80;
         server_name domain.dot.com;
         return 301 https://$host$request_uri;
     }
    
     server {
         listen 443 ssl;
    
         # Edit these as fit
         server_name           domain.dot.com;
         ssl_certificate       /path/to/myapp.crt;
         ssl_certificate_key   /path/to/myapp.key;
    
         root                  /path/to/my/app/root/folder;
         access_log            /path/to/log/folder/access.log;
         error_log             /path/to/log/folder/error.log;
    
         location / {
             try_files         $uri @flask;
         }
    
         # My app is a flask app, served at localhost:8000
         location @flask {
             proxy_set_header   Host $http_host;
             proxy_set_header   X-Real-IP $remote_addr;
             proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    
             proxy_redirect     off;
             proxy_pass         http://127.0.0.1:8000;
         }
     }
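    Once nginx reloads with both server blocks, you can check the redirect and the TLS endpoint from the command line (replace domain.dot.com with your actual server name):

```shell
# Plain http should answer with a 301 redirect to https...
curl -sI http://domain.dot.com/ | head -n 1
# ...and https should serve the app (-k skips CA verification,
# since the certificate is self-signed).
curl -skI https://domain.dot.com/ | head -n 1
```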