There are two components to builds.sr.ht: the job runner and the master server. Typically installations will have one master and many runners distributed on many servers, but both can be installed on the same server for small installations (though not without risk). We'll start by setting up the master server.
The master server is a standard sr.ht web service and can be installed as such. However, it is important that you configure two Redis servers: one that the runners should have access to, and one that they should not. Insert connection details for the former into builds.sr.ht's configuration file under the redis key. Each build runner will also need a local Redis instance running. In an insecure deployment (all services on the same server) you can get away with a single Redis instance.
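For illustration, the relevant entry on the master might look like the following sketch. The file path and section name match the configuration file discussed later in this guide; the connection URL is an assumption for a typical deployment.

```
# If you are creating the file from scratch on the master; otherwise merge
# the redis key into your existing [builds.sr.ht] section.
cat > /etc/sr.ht/builds.ini <<'EOF'
[builds.sr.ht]
redis=redis://master.example.org:6379
EOF
```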
We suggest using an SSH tunnel to share the slave Redis instance between job runners and the master server, but you can use any method you prefer. If you use an SSH tunnel, you will likely want to use a reverse tunnel initiated from the master server, so the slaves are unable to SSH into the master server.
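A hedged example of such a reverse tunnel, run from the master server (the hostname, user account, and ports are placeholders):

```
# Run on the master server: expose the master's shared Redis (localhost:6379)
# on the runner at localhost:16379, without giving the runner SSH access
# back to the master.
ssh -fN -R 16379:localhost:6379 builduser@runner1.example.org
```

The runner would then be configured to reach the shared Redis instance at localhost:16379.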
Let's start with a brief overview of the security model of builds.sr.ht. Because builds.sr.ht runs arbitrary user code (and allows users to utilize root), it's important to carefully secure the build environments. To this end, builds run in a sandbox with several layers of isolation.
We suggest you take similar precautions if your servers could be running untrusted builds. Remember that even if you only build your own software, integration with other services could still end up running untrusted builds (for example, automatic testing of patches via lists.sr.ht).
On each runner, install the builds.sr.ht-images and builds.sr.ht-worker packages.
Create two users, one for the master and one for the runners (or one for each runner if you prefer). They need the following permissions:
If you are running the master and runners on the same server, you will only be able to use one user: the master user. Configure both the web service and the build runner with this account. Otherwise, two separate accounts are recommended.
Note: in the future runners will not have database access.
On the runner, install the
builds.sr.ht-images package (if building from
source, this package is simply the
images directory copied to
/var/lib/images), as well as docker. Build the docker image like so:
```
$ cd /var/lib/images
$ docker build -t qemu -f qemu/Dockerfile .
```
This will build a docker image named
qemu which contains a statically linked
build of qemu and nothing else.
A genimg script is provided for each image, which can be run from a working image of that guest to produce a new image. You need to manually prepare a working guest of each image type (that is, to build the Arch Linux image you need a working Arch Linux installation to bootstrap from). Then you can run genimg to produce the disk image. You should read the genimg script to determine what dependencies need to be installed before it can be run to completion.
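As a rough sketch of that workflow (the paths, the chosen distribution, and whether genimg needs root are assumptions; read the script before running it):

```
# From inside a working Arch Linux guest, with the image scripts copied in:
cd images/archlinux
./genimg            # may need root depending on the image; read the script first
ls root.img.qcow2   # genimg is expected to leave the bootable image here
```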
The directory structure for bootable images should have the format images/$distro/$release/$arch/ with the root.img.qcow2 file within the $arch directory.
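For example, installing a freshly built image into that layout might look like this; the distro and release names are illustrative:

```
# "alpine", "latest", and "x86_64" are examples of $distro/$release/$arch;
# the file name root.img.qcow2 is required.
mkdir -p /var/lib/images/alpine/latest/x86_64
mv root.img.qcow2 /var/lib/images/alpine/latest/x86_64/root.img.qcow2
```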
A build.yml file is also provided for each image, so that it can build itself on your build infrastructure once you have it set up; customize it as necessary. It's recommended that you set up cron jobs to build fresh images frequently. A script at contrib/submit_image_build is provided for this purpose.
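A hypothetical crontab entry for this might look like the following; the path and any arguments are assumptions, so check the script's actual usage first:

```
# Rebuild images weekly; consult contrib/submit_image_build for the
# arguments it actually expects.
0 4 * * 0  /path/to/builds.sr.ht/contrib/submit_image_build
```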
Note: You will need nested virtualization enabled in order to build images
from within a pre-existing build image (i.e. via the
build.yml file). If you
run into issues with
modprobe kvm_intel within the genimg script, you can
fix this by removing the module and then re-inserting it with insmod kvm_intel.ko nested=1 from the directory containing the kernel module.
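An equivalent fix uses modprobe, which passes the parameter without needing the module's on-disk path (shown for Intel hosts; AMD hosts use kvm_amd and its nested parameter):

```
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1
# Should print Y (or 1) once nested virtualization is enabled:
cat /sys/module/kvm_intel/parameters/nested
```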
If you require additional images, study the
control script to understand how
the top-level boot process works. You should then prepare a disk image for your
new system (name it
root.img.qcow2) and write a
functions file. The only
required function is
boot, which should call
_boot with any additional
arguments you want to pass to qemu. If your image will boot up with no
additional qemu arguments, this function will likely just call
_boot. You can
optionally provide a number of other functions in your
functions file to
enable various features:
- An install function with the following usage: install [ssh port] [packages...]
- An add_repository function with the following usage: add_repository [ssh port] [name] [source]. The source is usually vendor-specific; you can use any format you want to encode repository URLs, package signing keys, and so on.

A sketch of a complete functions file follows the lists below.
In order to run builds, we require the following:
- Networking configured in the guest; with qemu's user-mode networking the default gateway (and the host) is 10.0.2.2. Don't forget to configure DNS, too.
- A user named build to log into SSH with, preferably with uid 1000
Not strictly necessary, but recommended:
- Set poweroff_cmd to a command we can SSH into the box and run to shut off the machine. If you don't, we'll just kill the qemu process.
- A sanity_check function which takes no arguments, boots up the image, runs any tests necessary to verify everything is working, and returns a nonzero status code if not.
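To make the shape of a functions file concrete, here is a minimal sketch for a hypothetical Alpine-style guest. It is not taken from any real image: the package manager (apk), the use of build@localhost with an SSH port chosen by the control script, and the poweroff_cmd and sanity_check bodies are all assumptions. Read the existing images and the control script for the real conventions.

```
# Sketch of a functions file for a hypothetical Alpine-style guest.
# boot is the only required function; the rest are the optional hooks
# described above. How $port is provided is an assumption of this sketch.

boot() {
    # This guest needs no extra qemu arguments, so just call the helper
    _boot
}

install() {
    port="$1"
    shift
    # Install the requested packages over SSH as the build user
    ssh -p "$port" build@localhost sudo apk add "$@"
}

add_repository() {
    port="$1"
    name="$2"
    source="$3"
    # The source format is whatever you decide; a plain URL is assumed here,
    # and this sketch ignores $name
    echo "$source" | ssh -p "$port" build@localhost sudo tee -a /etc/apk/repositories
}

poweroff_cmd="sudo poweroff"

sanity_check() {
    # Boot the image and verify we can run a trivial command inside it;
    # a nonzero exit status signals that the image is broken
    boot
    ssh -p "$port" build@localhost true
}
```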
You will likely find it useful to read the scripts for existing build images as
a reference. Once you have a new image, email the scripts to us so we can integrate them upstream!
Each runner needs an /etc/sr.ht/builds.ini configuration file similar to the one you wrote on the master server. Only the [builds.sr.ht] section is required for the runners.
Set images to the installation path of your images (typically /var/lib/images). Set buildlogs to the path where the runner should write its build logs (the runner user should be able to create files and directories here). Set runner to the hostname of the build runner.
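Putting those keys together, a minimal runner-side configuration might look like the following sketch; the paths and hostname are examples, and a real file will contain additional keys (such as the Redis connection discussed earlier):

```
# Example values only; merge into your existing configuration as needed.
cat > /etc/sr.ht/builds.ini <<'EOF'
[builds.sr.ht]
images=/var/lib/images
buildlogs=/var/log/builds.sr.ht
runner=runner1.example.org
EOF
```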
You will need to configure nginx to serve the build logs directory at http://RUNNER-HOSTNAME/logs/ in order for build logs to appear correctly on the website.
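A minimal nginx sketch for this, assuming the example paths and hostname used above (where nginx includes extra configuration from varies by distribution):

```
# server_name must match the runner hostname, and the alias path must match
# the buildlogs directory configured above.
sudo tee /etc/nginx/conf.d/buildlogs.conf > /dev/null <<'EOF'
server {
    listen 80;
    server_name runner1.example.org;

    location /logs/ {
        alias /var/log/builds.sr.ht/;
    }
}
EOF
sudo nginx -s reload
```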
Once all of this is done, make sure the worker is compiled (with Go 1.11 or later) by running go build in the worker/ directory, then start the builds.sr.ht-worker service and it's off to the races. Submit builds on the master server and they should run correctly at this point.
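For example (the systemd unit name mirrors the service named above and is otherwise an assumption; use your init system's equivalent if it differs):

```
# From a builds.sr.ht source checkout on the runner
cd worker/
go build
sudo systemctl enable --now builds.sr.ht-worker
```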