This page has been moved to docs.fsfe.org with the rest of the sysadmin documentation.
Description
The goal of this document is to describe the recommended way of deploying a service to one of our container servers with Docker. With this procedure, the service is automatically deployed and updated on it. For this documentation we assume that the service is simple enough to fit in one Docker container and that the service is accessible through HTTP.
NOTE: Please make sure to follow the process for new services. Setting up a Docker container or VM is just the technical part but in order to make the FSFE's technical infrastructure clear and maintainable, we need proper communication and documentation.
General Idea
We have multiple servers that host our containers (`cont[1-9].(noris|plutex).fsfeurope.org`) with a rootless Docker daemon. They are managed via the Ansible playbook container-server.
We have a Continuous Integration system, Drone, that builds and runs containers locally on each container server (see Drone's deployment code). Drone receives events from the Git server; each commit triggers the creation of a "build" container, which in turn builds and runs the service's containers with docker-compose. A reverse proxy (Caddy, managed by https://git.fsfe.org/fsfe-system-hackers/docker2caddy) running on each host watches container creation and dynamically creates virtual hosts to route the HTTP traffic to the containers. Here is how it works:
Drone gets a message from Gitea (the Git server) via a webhook. That can be a commit, a tag, a push, etc. Drone reads the .drone.yml file of the repository the event is coming from, and executes the instructions in it.
The instructions in .drone.yml spin up an isolated Docker container in which Drone executes the given steps. Typically, the last step runs docker-compose up --build -d, building the Docker image and spinning up the service's container.
docker-compose builds the images and runs the containers. As the build container is "Docker in Docker", it talks to the Docker daemon of the container server directly, outside of the container.
With the ports and labels provided in the docker-compose file, docker2caddy detects the new Docker container and generates a Caddy virtual host configuration. Caddy acts as a reverse proxy and also obtains a TLS certificate.
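For illustration, a virtual host block that docker2caddy could generate for a container labelled with proxy.host "minimal-docker.fsfe.org" and proxy.port "8880" might look like this (Caddy v2 syntax; the hostname and port are taken from the example below, and the exact generated output may differ):

```
# Hypothetical Caddyfile block generated from the container's labels
minimal-docker.fsfe.org {
	# Forward incoming HTTPS traffic to the port published on the host
	reverse_proxy localhost:8880
}
```

Caddy then serves this virtual host and handles the TLS certificate for it automatically.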
Below is the configuration needed to deploy the application container on a test server.
Note: Example Repo
Find the full example repo here.
Dockerfile
The Dockerfile that builds the service should have three main items:
- Selection of the base image with a FROM statement.
- The actual build of the service: install the service's dependencies and the service itself.
- A CMD statement so the service runs automatically when the container starts.
An example Dockerfile is provided below:
```dockerfile
# Define base image
FROM bitnami/python:3.10

# Put webserver script in place
COPY webserver.py /app/

# Run webserver, open for all outside requests
CMD ["python3", "/app/webserver.py", "0.0.0.0:12345"]
```
Ports Question
The webserver inside the Docker container will listen on port 12345, as defined in the last line. This can be different in your setup.
Here is the Dockerfile reference.
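The Dockerfile above copies a webserver.py into the image. A minimal sketch of what such a script could look like is below; the real script lives in the example repo, and this version is illustrative only (the handler and response body are made up):

```python
# Hypothetical minimal webserver.py matching the CMD above: it serves
# plain text on the host:port given as the first command-line argument.
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET request with a fixed plain-text body
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from the container\n")


def parse_address(arg: str) -> tuple:
    # Split an argument like "0.0.0.0:12345" into ("0.0.0.0", 12345)
    host, _, port = arg.rpartition(":")
    return host, int(port)


if __name__ == "__main__" and len(sys.argv) > 1:
    HTTPServer(parse_address(sys.argv[1]), Handler).serve_forever()
```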
Docker compose file
Next, we need to create a docker-compose file. The docker-compose.yml file is responsible for building the images (with the Dockerfile) and running the containers. To run a container, there are a number of options, most commonly:
- The Docker volumes. These specify the parts of the container that are shared as mount points with the Docker host, usually the container's data directories.
- The environment variables.
- The published and mapped ports. This is used by the reverse proxy container to route the traffic to the correct ports inside the service's container.
An example is provided below:
```yaml
version: "3"

services:
  webpreview:
    container_name: minimaldocker
    build: .
    image: minimaldocker
    restart: always
    # Reverse Proxy
    ports:
      - "8880:12345"
    labels:
      proxy.host: "minimal-docker.fsfe.org"
      proxy.port: "8880"
```
Ports Question
Why do we use port 8880? To avoid conflicts of ports exposed to the host, we use a scheme to define the port numbers. First, we get the unique ID of the repo via the API, in this case 888. Then, we number the required ports starting with 0. So if you start two containers opened to the outside, your ports would be "8880" and "8881".
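The scheme can be expressed as a small helper: the repo's unique ID concatenated with a 0-based index per published port. This is just a sketch of the numbering convention, not a tool we actually run:

```python
def host_port(repo_id: int, index: int) -> int:
    """Host port for the index-th published port of a repo,
    following the scheme: repo ID followed by a 0-based index."""
    if not 0 <= index <= 9:
        raise ValueError("only single-digit port indexes fit this scheme")
    # Concatenate, e.g. repo ID 888 and index 0 -> 8880
    return int(f"{repo_id}{index}")
```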
Here is the docker-compose file reference for version 3.
Drone configuration
Drone is the Continuous Integration and Continuous Delivery software we use. We need to tell Drone to build the image and run the container.
Add this file to your project and call it .drone.yml:
```yaml
---
kind: pipeline
name: default
type: docker

steps:
  - name: reuse
    image: fsfe/reuse:latest

  - name: deploy
    image: docker/compose:1.29.2
    environment:
      # Environment variables necessary for rootless Docker
      XDG_RUNTIME_DIR: "/run/user/1001"
      DOCKER_HOST: "unix:///run/user/1001/docker.sock"
    volumes:
      # Mounting Docker socket of rootless docker user
      - name: dockersock
        path: /run/user/1001/docker.sock
    commands:
      - docker-compose -p minimaldocker up --build -d
    when:
      branch:
        - main
      event:
        - push
        - tag
        - deployment

# Define the docker host ("drone runner node") on which this will be executed
node:
  cont: test

volumes:
  # Define Docker socket of rootless docker user
  - name: dockersock
    host:
      path: /run/user/1001/docker.sock
```
This uses the docker/compose image to run the docker-compose file inside a container. As the image is basically "Docker in Docker" and has the Docker socket of the user running the rootless Docker daemon (/run/user/1001/docker.sock) shared as a mountpoint, it controls the docker daemon directly on the container server, outside of the container. The command docker-compose up -d --build means that docker-compose will build the images, run the containers in detached mode, and exit.
Also note the node section. Here, we define on which Drone runner node the whole process will run, and consequently on which container server the container will be deployed.
Activate Drone job
On https://drone.fsfe.org, sync the repositories, search for your new repo, and activate it. Internally, this sets a webhook in Gitea that notifies Drone when something has changed. Obviously, this only works if the repo already exists!
Security Note
You have to mark the repository as Trusted in the repository's Drone settings to be allowed to mount the Docker socket. However, please enable at least one of these additional protection settings to avoid security flaws:
Disable forks: Do not run any CI job started from a fork of the trusted repo. This implies that authorised System Hackers should not contribute to repos via forks but create branches directly on the upstream repo.
Protected: Only run CI jobs when the signature of the .drone.yml file is verified. Authorised Drone users can sign the file via the drone CLI locally, e.g. drone sign fsfe-system-hackers/minimal-docker --save. If the signature does not match (so the file has been tampered with), the build has to be approved manually.
Commit
You can do a test commit to see if it deploys the container correctly.
```shell
git commit --allow-empty -m 'Trigger build'
```
You can also restart Drone builds, so no empty commits are necessary.
How to use Drone secrets
Drone secrets allow you to securely insert secrets in your Docker images.
Add the secret in the Drone configuration
Make sure that Drone is activated for the repository. Then, you can add secrets in the repo's Drone settings. In this example, we add the secret my_secret_key.
Add the secret in .drone.yml to the respective build step. This provides the environment variable MY_SECRET_KEY, which is picked up by the commands running in this step.
```yaml
steps:
  - name: Deploy container
    [...]
    environment:
      MY_SECRET_KEY:
        from_secret: my_secret_key
```
Add the secret in docker-compose as a build argument
Instead of providing only the build context, provide the arguments as well.
Change the line:
```yaml
build: .
```
to:
```yaml
build:
  context: .
  args:
    - MY_SECRET_KEY=${MY_SECRET_KEY:?err}
```
This will add the --build-arg flag to the docker build command. The :?err makes the build fail if the environment variable is missing as a safety net.
Use the secret in the Dockerfile
Then you can add the argument in the Dockerfile by adding this line:
```dockerfile
ARG MY_SECRET_KEY=insecuredefault
```
MY_SECRET_KEY is treated as an environment variable during the image build and can therefore be used as $MY_SECRET_KEY.
Note: This environment variable will not exist in the final image; it is only available during the build. If the secret should also be available as an environment variable inside the running container, add this to docker-compose.yml:
```yaml
environment:
  - MY_SECRET_KEY=${MY_SECRET_KEY:?err}
```
How to use LDAP
If you want your container to authenticate users via the FSFE's LDAP, e.g. against an LDAP group, we usually use the groupsreader account in LDAP, which has the permission to also fetch a user's group memberships.
To connect to the LDAP server from within a Docker container, simply set the IP 10.200.64.9 as the LDAP server, without configuring any encryption on the LDAP side. All traffic is transparently encrypted via innernet.
Examples for Docker containers and VMs using LDAP can be found with the ldap keyword in Git.
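To illustrate the idea, a group-membership lookup typically boils down to a search filter like the one built below. The attribute names, base DN and group subtree are assumptions for illustration only; take the real values from the existing examples or ask the System Hackers:

```python
def member_filter(uid: str, group_cn: str) -> str:
    """Build a hypothetical LDAP search filter matching a user that
    belongs to a given group. The base DN and the ou=groups subtree
    are assumed, not confirmed values."""
    base = "dc=fsfe,dc=org"  # assumed base DN
    return f"(&(uid={uid})(memberOf=cn={group_cn},ou=groups,{base}))"
```

A service would bind as groupsreader and run such a filter against the server at 10.200.64.9.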
Monitoring
In order to monitor whether the address is available, add the virtual host in the Icinga2 configuration.
Dependency Management
We offer a bot, renovate, that keeps track of the dependencies in your Docker, docker-compose and Drone files. Using it is highly recommended!
To activate it for your repo, just add the topic renovate to your repo. Within a day, you will get a pull request asking you to configure it, which you can simply merge. Then, once a day, your repo will be checked for updates.
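The onboarding pull request proposes a renovate.json; a typical minimal configuration looks roughly like this (the exact preset names come from the PR itself and may differ):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:base"]
}
```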