Docker has become the preferred and easiest way to run continuous integration, thanks to its reproducible nature and fast learning curve.
There are multiple ways to approach CI processes using Docker.
- Using docker build, which is the easiest way:
FROM node:12
ARG NODE_ENV=production
WORKDIR /src
COPY . /src
RUN npm install
CMD [ "node", "app.js" ]
- Using a Docker multi-stage build, which usually helps make the final image smaller and safer by leaving all the build binaries out of it:
FROM maven AS builder
WORKDIR /src
COPY . /src
RUN [ "mvn", "clean", "install" ]

FROM openjdk:12-alpine
WORKDIR /srv
COPY --from=builder /src/target/app.jar ./app.jar
CMD [ "java", "-jar", "app.jar" ]
- Using docker run with a pre-built build image and volume mounts. This allows having cache directories and some more functionality during the build, and allows running multiple commands, as sketched below:
docker run -v `pwd`:/src -v ~/.m2:/root/.m2 -w /src maven mvn clean install
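For example, a minimal sketch of chaining several Maven goals in a single container run, assuming the project sits in the current directory:

# Run multiple commands in one container, reusing the host's ~/.m2 cache
docker run -v `pwd`:/src -v ~/.m2:/root/.m2 -w /src maven \
  sh -c "mvn clean test && mvn package"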
- Using Docker in Docker, like GitLab, Bitbucket, etc. This method is more complex than the rest and can be performed in two ways:
There is an official Docker-in-Docker image by Docker Inc. that is tagged as
docker:dind
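You can pull it like any other image:

docker pull docker:dind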
The first way is running the containers on the host machine from within the Docker-in-Docker container; this requires mounting the host's Docker socket as a volume inside the Docker-in-Docker container:
docker run --rm \
  -v `pwd`:/mnt \
  -v /mnt/.m2:/root/.m2 \
  -v /mnt/.gradle:/root/.gradle \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -w /mnt \
  docker:dind sh ./ci.sh
And the ci.sh script can look like this:

#!/bin/bash
docker run -p 6379:6379 \
  -d --name redis redis:alpine
docker run -v /root/.m2 \
  -v /root/.gradle:/root/.gradle:ro \
  -v /mnt:/mnt \
  --link redis \
  --rm -it -d --name builder gradle /bin/bash
docker exec builder gradle build
docker exec builder gradle test
docker exec builder gradle upload
This script will launch two containers on the host, which will be able to resolve each other by name because of the --link flag. On the host and inside the Docker-in-Docker container, the command docker ps will show the same number of running containers (that is 3 containers: docker-in-docker, redis, and gradle). Killing the Docker-in-Docker container will not kill the containers on the host machine.
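Because of that, it is worth removing the leftover containers explicitly when the build finishes, for example:

# Remove the CI containers that were started on the host
docker rm -f redis builder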
The second way is running the Docker daemon inside the container and running the rest of the CI containers within it, so that only one container is visible on the host and multiple ones inside the Docker-in-Docker container. This does not require mapping the Docker socket inside the container, but does require running the Docker-in-Docker container with the --privileged flag. The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker.
To start the Docker-in-Docker container:
docker run --rm \
  -v `pwd`:/mnt \
  -v /mnt/.m2:/root/.m2 \
  -v /mnt/.gradle:/root/.gradle \
  --privileged \
  -w /mnt \
  docker:dind sh ./ci.sh
And now the ci.sh script will run as follows:

#!/bin/bash
set -x
# Starting docker daemon
/usr/local/bin/dockerd --log-level fatal >> /dev/null &
# Wait for the daemon
while true; do
  docker version
  if [ $? -eq 0 ]; then
    break
  else
    sleep 2
  fi
done
docker run -p 6379:6379 \
  -d --name redis redis:alpine
docker run -v /root/.m2 \
  -v /root/.gradle:/root/.gradle:ro \
  -v /mnt:/mnt \
  --link redis \
  --rm -it -d --name builder gradle /bin/bash
docker exec builder gradle build
docker exec builder gradle test
docker exec builder gradle upload
The script will do almost the same as before, but now docker ps on the host machine will show only a single container running: the Docker-in-Docker daemon. Inside the Docker-in-Docker container, docker ps will show 2 running containers, redis and builder. Once you kill the Docker-in-Docker container, the 2 additional CI containers are terminated with it, as they run inside the Docker-in-Docker container. This way, you will not have any zombie containers on the host machine, and you will not have naming/IP conflicts on the host machine.
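You can verify this from the host by exec-ing into the Docker-in-Docker container (the container ID here is a placeholder):

# List the containers managed by the inner Docker daemon
docker exec <dind-container-id> docker ps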
Which method is preferable?
Well, the answer is: it depends.
I think that keeping it simple is the most important thing when building a CI pipeline, so if plain docker build as described in the first bullet does the job, go for it.
Usually I end up combining the docker run approach and docker build, one after the other: I enjoy the docker run on a pre-built compiling image that uses the host cache for libraries and dependencies, and once the run has finished building the artifact, I use a plain docker build on the host machine that takes the artifact and COPYs it into an Alpine-based image.
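A minimal sketch of that combination for a Maven project, with illustrative names (my-app, app.jar):

# Step 1: build the artifact with a pre-built compile image,
# reusing the host's dependency cache
docker run --rm -v `pwd`:/src -v ~/.m2:/root/.m2 -w /src maven mvn clean install

The Dockerfile for the second step then only needs to COPY the ready-made artifact:

FROM openjdk:12-alpine
WORKDIR /srv
COPY target/app.jar ./app.jar
CMD [ "java", "-jar", "app.jar" ]

# Step 2: package the artifact into a slim image on the host
docker build -t my-app .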
For simple applications with a couple of dependencies, I prefer not to add any complexity, and I run the entire process in a multi-stage docker build as mentioned in the second bullet.
Common pitfalls
As always, there are common pitfalls: one is file permissions and the other is disk space.
As I prefer adding my Jenkins user to the docker group and not to the sudoers group, Docker sometimes creates files with root permissions (libraries, artifacts, etc.), so at the end of a Jenkins build, as a post process, I run a simple Docker container that does the cleanup with root permissions:
docker run --rm -v `pwd`:/mnt/ alpine:3.7 rm -rf /mnt/build
Cleaning up disk space on Jenkins hosts/slaves running Docker as a CI tool is mandatory. I hate running remote scripts in cron jobs, so instead of executing those hateful scripts I've made a small docker-image-cleaner container that runs in the background and cleans up images. To run it, just use:
sudo docker run \
  -e TIME_INTERVAL=1h \
  -e FILTER=alpine:latest \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -d omerha/docker-image-cleaner:latest