There are two components to builds.sr.ht: the job runner and the master server. Typical installations have one master and many runners distributed across many servers, but both can be installed on the same server for small installations (though not without risk). We'll start by setting up the master server.

Web service

The master server is a standard web service and can be installed as such. However, it is important that you configure two Redis servers: one that the runners should have access to, and one that they should not. Insert connection details for the former into builds.sr.ht's configuration file under the redis key. Each build runner will also need a local Redis instance running. In an insecure deployment (all services on the same server) you can get away with a single Redis instance.
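
For example, the relevant excerpt of the configuration might look like this (a sketch, assuming the redis key lives in the [builds.sr.ht] section and takes a standard redis:// connection string; the host shown is hypothetical):

# hypothetical excerpt from builds.sr.ht's config.ini
[builds.sr.ht]
redis = redis://master.example.org:6379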

We suggest using an SSH tunnel to share the slave Redis instance between job runners and the master server, but you can use any method you prefer. If you use an SSH tunnel, you will likely want to use a reverse tunnel initiated from the master server, so the slaves are unable to SSH into the master server.
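
Such a reverse tunnel might be set up like this, run on the master server (the user and host names are hypothetical; port 6380 is used on the runner's side to avoid colliding with the runner's own local Redis on 6379):

# forward the runner's localhost:6380 to the master's slave Redis on localhost:6379
$ ssh -fNT -R 6380:localhost:6379 build@runner.example.org

The runner's Redis connection details would then point at localhost:6380.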

Security model

Let's start with a brief overview of the security model of builds.sr.ht. Because builds.sr.ht runs arbitrary user code (and allows users to utilize root), it's important to carefully secure the build environments. To this end, builds run in a sandbox which consists of:

We suggest you take similar precautions if your servers could be running untrusted builds. Remember that even if you build only your own software, integration with other services could end up running untrusted builds (for example, automatic testing of patches submitted via lists.sr.ht).

Package installation

On each runner, install the builds.sr.ht and builds.sr.ht-images packages.

Database configuration

Create two users, one for the master and one for the runners (or one for each runner if you prefer). They need the following permissions:

If you are running the master and runners on the same server, you will only be able to use one user - the master user. Configure both the web service and the build runner with this account. Otherwise, two separate accounts are recommended.

Note: in the future runners will not have database access.
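
A sketch of the shape this takes on PostgreSQL (the role names, password, and database name below are hypothetical; apply whatever specific permissions your installation requires):

$ psql -c "CREATE ROLE buildsrht_master LOGIN PASSWORD 'secret';"
$ psql -c "CREATE ROLE buildsrht_runner LOGIN PASSWORD 'secret';"
# grant each role its required permissions on the builds database, for example:
$ psql -c "GRANT CONNECT ON DATABASE \"builds.sr.ht\" TO buildsrht_runner;"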

Install images

On the runner, install the builds.sr.ht-images package (if building from source, this package is simply the images directory copied to /var/lib/images), as well as docker. Build the docker image like so:

$ cd /var/lib/images
$ docker build -t qemu -f qemu/Dockerfile .

This will build a docker image named qemu which contains a statically linked build of qemu and nothing else.
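
To sanity-check the image, you can try invoking qemu inside the container (this assumes the static qemu binary ends up on the image's PATH, which may not hold for your build):

$ docker run --rm qemu qemu-system-x86_64 -version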

Bootstrapping our images

A genimg script is provided for each image which can be run from a working image of that guest to produce a new image. You need to manually prepare a working guest of each image type (that is, to build the Arch Linux image you need a working Arch Linux installation to bootstrap from). Then you can run the provided genimg to produce the disk image. You should read the genimg script to determine what dependencies need to be installed before it can be run to completion.
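
In other words, the bootstrap looks roughly like this (the paths are hypothetical, and genimg's dependencies must already be installed in the guest):

# inside a working Arch Linux guest, with the image scripts available
$ cd images/archlinux
$ ./genimg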

The directory structure for bootable images should have the format images/$distro/$release/$arch/ with the root.img.qcow2 file within the $arch directory.
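
For example, an Arch Linux image could be laid out as follows (the distro and release names are illustrative):

images/
  archlinux/
    latest/
      x86_64/
        root.img.qcow2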

A build.yml file is also provided for each image, which lets the image rebuild itself on your build infrastructure once it's set up; customize it as necessary. It's recommended that you set up cron jobs to build fresh images frequently - a script at contrib/submit_image_build is provided for this purpose.
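
A crontab entry for this could look something like the following (the script's arguments here are an assumption - read contrib/submit_image_build for its actual usage):

# rebuild the Arch Linux image every Sunday at 02:00
0 2 * * 0 /path/to/contrib/submit_image_build archlinux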

Note: You will need nested virtualization enabled in order to build images from within a pre-existing build image (i.e. via the build.yml file). If you run into issues with modprobe kvm_intel within the genimg script, you can fix this by removing the module and then re-inserting it with insmod kvm_intel.ko nested=1 in the directory containing the kernel module.
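
Concretely, that fix looks like this (the module path below is the usual location, but verify it for your kernel; use kvm_amd instead on AMD hardware):

$ cd /lib/modules/$(uname -r)/kernel/arch/x86/kvm
$ sudo rmmod kvm_intel
$ sudo insmod kvm_intel.ko nested=1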

Image-specific notes

Creating new images

If you require additional images, study the control script to understand how the top-level boot process works. You should then prepare a disk image for your new system (name it root.img.qcow2) and write a functions file. The only required function is boot, which should call _boot with any additional arguments you want to pass to qemu. If your image boots with no additional qemu arguments, this function will likely just call _boot (see the sketch below). You can optionally provide a number of other functions in your functions file to enable various features:
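
A minimal functions file might look like the following sketch (only boot and _boot are described above; everything else here is an assumption):

# functions file for a hypothetical image that needs no extra qemu arguments
boot() {
    # pass any additional qemu arguments as arguments to _boot
    _boot
}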

In order to run builds, we require the following:

Not strictly necessary, but recommended:

You will likely find it useful to read the scripts for existing build images as a reference. Once you have a new image, email the scripts to the ~sircmpwn/sr.ht-dev mailing list so we can integrate them upstream!

Additional configuration

Write an /etc/sr.ht/config.ini configuration file similar to the one you wrote on the master server. Only the [sr.ht] and [builds.sr.ht::worker] sections are required for the runners. images should be set to the installation path of your images (/var/lib/images) and buildlogs should be set to the path where the runner should write its build logs (the runner user should be able to create files and directories here). Set runner to the hostname of the build runner. You will need to configure nginx to serve the build logs directory at http://RUNNER-HOSTNAME/logs/ in order for build logs to appear correctly on the website.
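
Putting that together, a runner's configuration and nginx setup might look something like this sketch (the host name and log path are hypothetical):

# /etc/sr.ht/config.ini on the runner
# (copy the [sr.ht] section from the master server's configuration)
[builds.sr.ht::worker]
runner = runner.example.org
images = /var/lib/images
buildlogs = /var/log/builds

# nginx: serve the build logs at http://runner.example.org/logs/
server {
    listen 80;
    server_name runner.example.org;

    location /logs/ {
        alias /var/log/builds/;
    }
}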

Once all of this is done, make sure the worker is compiled (with Go 1.11 or later) by running go build in the worker/ directory, then start the service - and it's off to the races. Submit builds on the master server; at this point they should run correctly.
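
For example (go build in the worker/ directory produces a binary named worker; run it directly or via your init system):

$ cd worker/
$ go build
$ ./worker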
