Learning Docker - Part 3 - Using docker-compose

A problem with creating docker containers is that the docker run command can get quite complicated, especially as configurations pile up. Similarly, when your application needs multiple docker containers in order to run, running the containers individually and linking them together can be a mess for newcomers. Docker-compose solves that problem: with a docker-compose.yml file, you define the services that make up your app, and run them all with one single simple command.


Installing docker-compose

Follow the instructions on Docker’s website.

Deploying a wordpress website with docker-compose

  1. Create a folder containing your project

     mkdir wordpress_mysql
     cd wordpress_mysql
  2. Create a docker-compose.yml file with the following content

     version: '2'
     services:
         mysql:
             image: mysql:latest
             environment:
                 MYSQL_ROOT_PASSWORD: somepassword
         wordpress:
             image: wordpress:latest
             ports:
                 - "8080:80"
             links:
                 - "mysql"
             environment:
                 WORDPRESS_DB_PASSWORD: somepassword
  3. Initialize the service with docker-compose

     docker-compose up -d

    It’s basically done. However, since the mysql container takes some time to initialize, the wordpress container may fail on its first attempt to connect to the mysql database. If that happens, just restart the services:

     docker-compose down
     docker-compose up -d
  4. Visit localhost:8080 in your browser. It works!!!
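As a sketch of how the manual down/up cycle can be avoided: Compose can restart the failing container on its own. Note that depends_on only controls start order, not readiness, so it is the restart: on-failure policy that covers the window while mysql initializes. The service definitions below otherwise mirror the file above:

```yaml
version: '2'
services:
    mysql:
        image: mysql:latest
        environment:
            MYSQL_ROOT_PASSWORD: somepassword
    wordpress:
        image: wordpress:latest
        # start after mysql, and retry automatically if the first
        # database connection fails while mysql is still initializing
        depends_on:
            - mysql
        restart: on-failure
        ports:
            - "8080:80"
        environment:
            WORDPRESS_DB_PASSWORD: somepassword
```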

Learning Docker - Part 2 - Docker cheatsheet

Config to run docker without sudo

By default, docker needs to be run with root privileges, but we don’t want that for various reasons. Follow these steps to allow running docker without sudo:

  1. Add a group named docker:

     sudo groupadd docker
  2. Add the user you want to run docker without sudo to the docker group:

     sudo usermod -aG docker ${USER}
  3. Restart docker daemon:

     sudo service docker restart
  4. Re-login, or run the following command:

     newgrp docker

Note: This allows running docker without using the sudo command. However, files created by docker are by default still owned by root. To truly avoid giving docker root privileges, specify the option -u <user id> when running docker run. For example:

docker run -u `id -u $USER` <image>


docker run -u $UID <image>
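To see which uid the container will run as, id -u can be inspected directly (a quick sanity check, not specific to docker):

```shell
# `id -u` prints the current user's numeric id; this is the value that
# docker run -u expects, and files the container creates in mounted
# volumes will be owned by it
uid=$(id -u)
echo "docker run -u $uid <image>"
```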

If you are using docker-compose, then you should specify the option user in your docker-compose.yml file. For example:

version: '3'
services:
    jekyll:
        user: $UID
        image: btquanto/docker-jekyll
        ports:
            - "4000:4000"
        volumes:
            - ./:/src
        command: jekyll serve -H 0.0.0.0 --draft

Basic docker commands

  1. List all images:

     docker images
  2. List all running containers:

     docker ps

    List all containers (including stopped ones):

     docker ps -a
  3. Create and run a new container:

     docker run <image name>

    Create a new container, and give the container a name

     docker run --name <container name> <image name>

    Create a new container under interactive mode, and run bash

     docker run -ti <image name> /bin/bash

    Create a new container in detached mode

     docker run -d <image name>

    Run docker container under a different user (for security)

     docker run -u `id -u <username>` <image name>

    Run docker container under current user

     docker run -u $UID <image name>

    Well, you can combine these together, for example:

     docker run -u $UID -dti --name ubuntu1604 ubuntu:16.04 /bin/bash
  4. Attach to a container:

     docker attach <container id/ name>
  5. Detach from a container:

     <Ctrl> + P,Q
  6. Start a container:

     docker start <container id/ name>

    Start and attach to a container

     docker start -a <container id/ name>
  7. Stop a container:

     docker stop <container id/ name>
  8. Remove a container:

     docker rm <container id/ name>
  9. Remove an image:

     docker rmi <image name>
  10. Create an image from a container:

    docker commit <container id/ name> <new image name>
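The listing and removal commands above compose together nicely. As a sketch, here is a one-shot cleanup of all stopped containers (guarded so it is a no-op on machines without docker):

```shell
# `docker ps -aq --filter status=exited` prints only the ids of exited
# containers; `docker rm` then removes them all in one call
if command -v docker >/dev/null 2>&1; then
    ids=$(docker ps -aq --filter status=exited)
    if [ -n "$ids" ]; then
        docker rm $ids
    fi
else
    echo "docker not installed; nothing to clean up"
fi
```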

Learning Docker - Part 1 - Getting started

The problems I had

I have been writing a bunch of web application skeletons for later usage, and have been using vagrant + chef for automatic local environment deployment.

At first, things were cool. Every time I needed to deploy a separate environment for development, I just had to clone the project and the cookbook submodules, then run vagrant up. It took a while to get the virtual machine up and running, and obviously running multiple VMs was not a really healthy choice for my RAM, but it did its job.

Then, a few months later, I started another project, and some of the chef recipes failed, so I had to fix the recipes to get everything up and running again. Some cookbooks were open sourced, and had not been updated for quite some time, so I had to fork the cookbook repos to my github in order to save the changes.

Then, a few weeks later, I trained an undergraduate junior to work with one of the frameworks that I had written a skeleton for. I didn’t want to stress system administration stuff too much, and wanted to demonstrate a good example of a separated development environment, so I just gave her the skeleton repo. Again, it needed some fixes in order to work.

And so on. The more I worked with vagrant + chef, the more I realized that chef recipes will at some point fail, and fixing them is not always straightforward.

Thus, I have been looking for an alternative.

Why Docker

Docker has been around for a while, but I have only truly started learning it recently, and it has exceeded my expectations.

There are many videos and blog posts introducing what Docker is and what the differences between containerization and virtualization are, so I won’t repeat them here. I’ll just list the reasons why I find Docker fascinating.

  1. The problem with recipes is (almost) gone. Although docker images are usually built with a Dockerfile, and building an image from the same Dockerfile may still fail at some point in the future, the image itself is sharable and reusable.
  2. Thus, when developing an application, I just have to create the isolated development environment once, package it into a docker image, and use the image to create a docker container that has everything ready.
  3. The same image can be reused for different application skeletons. Well, the same can be said for vagrant + chef. However, it can be done in a much simpler manner with docker.
  4. I can add more changes, and create a new image from a base image, extending my application.
  5. It’s lightweight, much more lightweight than VMs, and I have no problems running multiple containers at the same time.

I’ll add more reasons later when I think of something else.

Installing docker

You should check out Docker’s website.

First contact

First contact with Docker will be deploying a Jekyll page.

  1. Clone a jekyll template. I’m using Jekyll Mon Cahier in this example. It’s a pretty cool template, so make sure to check it out.

     $ git clone https://github.com/btquanto/jekyll_mon_cahier.git ./jekyll_site
     $ cd ./jekyll_site
  2. I don’t want to get involved in building a new jekyll image yet, so I am using btquanto/docker-jekyll from Docker Hub.

     $ sudo docker pull btquanto/docker-jekyll
  3. Create and run a jekyll docker container

     $ sudo docker run -u $UID -d --name jekyll \
         -v `pwd`:/src \
         -p 4000:4000 \
         btquanto/docker-jekyll jekyll serve -H 0.0.0.0
  4. Check your localhost:4000. In most cases, it should work. Yey!
  5. However, when I was using the theme Lanyon, the site failed to start. So, let’s see what the error is, by starting the container again.

     $ sudo docker start -a jekyll

    And the error shown is that I don’t have the gem redcarpet:

     Dependency Error: Yikes! It looks like you don't have redcarpet or one of its dependencies installed.
  6. Let’s install redcarpet by upgrading the btquanto/docker-jekyll image
    1. Remove the failed container

       $ sudo docker rm jekyll
    2. Make a Dockerfile with the following content

       FROM btquanto/docker-jekyll
       MAINTAINER [email protected]
       RUN gem install redcarpet
    3. Build the docker image

       $ sudo docker build -t my_jekyll_image .
  7. Recreate the jekyll container with the new image

     $ sudo docker run -u `id -u $USER` -d \
         --name jekyll \
         -v `pwd`:/src \
         -p 4000:4000 \
         my_jekyll_image jekyll serve -H
  8. My site is now available at localhost:4000. Yey!

Implement a self-signed SSL certificate in nginx

  1. Generate myapp.key and myapp.crt. The location is not really important, but make sure that the user that runs nginx has access to them.

     sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /path/to/myapp.key -out /path/to/myapp.crt
  2. Then, add ssl configuration to your nginx site config

     listen 443 ssl;
     ssl_certificate       /path/to/myapp.crt;
     ssl_certificate_key   /path/to/myapp.key;

    For example:

     server {
         listen 443 ssl;
         ssl on;
         # Edit these as fit
         server_name           domain.dot.com;
         ssl_certificate       /path/to/myapp.crt;
         ssl_certificate_key   /path/to/myapp.key;
         root                  /path/to/my/app/root/folder;
         access_log            /path/to/log/folder/access.log;
         error_log             /path/to/log/folder/error.log;
         location / {
             try_files         $uri @flask;
         }
         # My app is a flask app, served at localhost:8000
         location @flask {
             proxy_pass         http://localhost:8000;
             proxy_set_header   Host $http_host;
             proxy_set_header   X-Real-IP $remote_addr;
             proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_redirect     off;
         }
     }
  3. However, your site is still not serving port 80. To redirect requests from port 80 to port 443 automatically, add another server block:

     server {
         listen 80;
         server_name domain.dot.com;
         return 301 https://$host$request_uri;
     }

     server {
         listen 443 ssl;
         ssl on;
         # Edit these as fit
         server_name           domain.dot.com;
         ssl_certificate       /path/to/myapp.crt;
         ssl_certificate_key   /path/to/myapp.key;
         root                  /path/to/my/app/root/folder;
         access_log            /path/to/log/folder/access.log;
         error_log             /path/to/log/folder/error.log;
         location / {
             try_files         $uri @flask;
         }
         # My app is a flask app, served at localhost:8000
         location @flask {
             proxy_pass         http://localhost:8000;
             proxy_set_header   Host $http_host;
             proxy_set_header   X-Real-IP $remote_addr;
             proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_redirect     off;
         }
     }
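The openssl command in step 1 prompts for the certificate subject interactively. For scripting, the prompts can be skipped with -subj; the field values below are placeholders, so adjust them as fit:

```shell
# Generate a private key and a self-signed certificate in one shot;
# -nodes skips passphrase protection on the key, and -subj answers the
# subject prompts that openssl would otherwise ask interactively
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout myapp.key -out myapp.crt \
    -subj "/C=US/ST=State/L=City/O=Org/CN=domain.dot.com"
```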

Toggle touch pad in Ubuntu

In short, the hot key Fn + F9 (or F8, whatever) that toggles the touch pad (track pad) does not always work, due to Linux’s poor driver support (it’s the manufacturer’s fault, not Canonical’s).

I kinda came up with the following workaround.

The basics

Basically, an unsupported touch pad is emulated as a psmouse device. The following command disables it:

sudo modprobe -r psmouse

To enable it, run

sudo modprobe psmouse

To check whether the touch pad is enabled, check if psmouse is listed in the output of the following command:

lsmod | grep psmouse

A little more advance

So I decided to write a bash script to toggle the touch pad, instead of typing different commands each time:

#!/bin/bash
if [ "`lsmod | grep -o psmouse`" ]; then
    sudo modprobe -r psmouse
else
    sudo modprobe psmouse
fi

I put the script in ~/Programs/scripts/toggle-trackpad.sh
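The branch the script would take can be checked without sudo by reading /proc/modules (the file that lsmod itself parses). The sketch below echoes the modprobe command instead of running it:

```shell
# psmouse listed in /proc/modules means the touch pad driver is loaded;
# echo the command the toggle script would run, rather than running it
if grep -q '^psmouse' /proc/modules 2>/dev/null; then
    echo "would run: sudo modprobe -r psmouse"
else
    echo "would run: sudo modprobe psmouse"
fi
```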

Now, all I have to do is to run that script whenever I want to do the toggle. That doesn’t really make my life easier. Then I came up with these ideas.

  1. Using alias

    So, I put this in the alias section of my ~/.bashrc file (I don’t use a separate ~/.bash_aliases file, since it’s not really necessary)

     alias toggle-tp="if [ \"\`lsmod | grep -o psmouse\`\" ]; then sudo modprobe -r psmouse; else sudo modprobe psmouse; fi"

    Ok, that looks nicer. Hummm, but then I will have to open a terminal every time I want to toggle the touch pad. Not catastrophically annoying, but I want things better.

  2. Using Alt + F2 and gksudo

    Firstly, I installed gksu. modprobe requires sudo access to work, and without a shell, gksudo is required.

     sudo apt-get install gksu -y

    Then, I change the alias to

     alias toggle-tp="if [ \"\`lsmod | grep -o psmouse\`\" ]; then gksudo 'modprobe -r psmouse'; else gksudo 'modprobe psmouse'; fi"

    After re-login, I can just press Alt + F2, then type toggle-tp to toggle. That’s a bit better, but still, I just want a hot key.

  3. Custom shortcut

    So finally, I decided to edit the script to use gksudo:

     if [ "`lsmod | grep -o psmouse`" ]; then
         gksudo 'modprobe -r psmouse'
     else
         gksudo 'modprobe psmouse'
     fi

    Then configure a new keyboard shortcut (I’m using Ctrl + Super + T):

     Name: Toggle Trackpad
     Command: /path/to/toggle-trackpad.sh


Now I can use a hot key to toggle the touch pad. It’s still a little annoying that every time I want to toggle it, I have to enter my password. Fortunately, I don’t have to do that too often, since I use a mouse. It’s also a little disappointing that I can’t override the default Fn + F9 on my machine, but that’s minor. I’m happy enough.