This document covers the configuration process for builds.sr.ht.

Security Model

Let's start with a brief overview of the security model of builds.sr.ht. Since builds.sr.ht runs arbitrary user code (and allows users to utilize root), it's important to carefully secure the build environments.

To that end, our build jobs run in a sandbox that consists of:

  • A KVM virtual machine (via QEMU), which
  • runs inside of an otherwise empty Docker image, which
  • runs as an unprivileged user on
  • a server that is physically separate from anything important, uses its own isolated Redis instance, and has minimal database access.

We suggest you take similar precautions if your servers may run untrusted builds.

Warning: Even if you only build your own software, integration with other services may cause you to run untrusted builds (e.g. automatic testing of patches via lists.sr.ht).

Master Server

Web Service

The master server requires two Redis servers – one that the runners should have access to, and one that they should not. For the former, insert connection details into builds.sr.ht's configuration file under the redis key.
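For example, in /etc/sr.ht/config.ini (a sketch — the section placement and connection URL are assumptions; consult builds.sr.ht's config.example.ini for the authoritative layout):

[builds.sr.ht]
redis=redis://master.example.org:6379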

Each runner also requires a locally running Redis instance.

Note: In a deployment where all services are on the same server, running only trusted builds, you can get away with a single Redis instance.

Database

Create two users, one for the master and one for the runners (or one for each runner if you prefer). They need the following permissions:

  • master should have ownership over the database and full read/write/alter access to all tables
  • runner should have read/write access to the job, artifact, and task tables, and read access to the user and secrets tables.

If you are running the master and runners on the same server, you will only be able to use one user — the master user. Configure both the web service and the build runner with this account. Otherwise, two separate accounts are recommended.
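On PostgreSQL, for example, the runner's grants might look like this (a sketch — the database name is hypothetical, the table names are taken from the list above, and user must be quoted because it is a reserved word; inserts may additionally require USAGE on the relevant sequences):

$ psql -d buildssrht -c 'GRANT SELECT, INSERT, UPDATE ON job, artifact, task TO runner;'
$ psql -d buildssrht -c 'GRANT SELECT ON "user", secrets TO runner;'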

Note: in the future runners will not have database access.

Install images

On the runner, install the builds.sr.ht-images package (if building from source, this package is simply the images directory copied to /var/lib/images), as well as docker. Build the docker image like so:

$ cd /var/lib/images
$ docker build -t qemu -f qemu/Dockerfile .

This will build a docker image named qemu which contains a statically linked build of qemu and nothing else.
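You can confirm that the image was built:

$ docker image ls qemu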

Bootstrapping our images

A genimg script is provided for each image, which can be run from a working guest of that image type to produce a new disk image. You need to manually prepare a working guest of each image type (that is, to build the Arch Linux image, you need a working Arch Linux installation to bootstrap from). Then you can run the provided genimg to produce the disk image. Read the genimg script first to determine which dependencies need to be installed before it can be run to completion.
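For example, to bootstrap the Arch Linux image from within a working Arch Linux guest (a sketch — the working directory and any arguments are assumptions; read the script for its actual requirements):

$ cd /var/lib/images/archlinux
$ ./genimg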

The directory structure for bootable images should have the format images/$distro/$release/$arch/ with the root.img.qcow2 file within the $arch directory.
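For example, an Alpine edge image for x86_64 would live at:

images/alpine/edge/x86_64/root.img.qcow2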

A build.yml file is also provided for each image to build itself on your build infrastructure once you have it set up, which you should customize as necessary. It's recommended that you set up cron jobs to build fresh images frequently — a script at contrib/submit_image_build is provided for this purpose.
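A weekly cron job might look like this (a sketch — the path and arguments are assumptions; read contrib/submit_image_build for its actual usage):

0 0 * * 0 /path/to/builds.sr.ht/contrib/submit_image_build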

Note: it is recommended that you modify our build.yml files to suit your instance's needs, then run it on our hosted builds.sr.ht instance to bootstrap your images. This is the fastest and most convenient way to bootstrap the images you need.

Note: You will need nested virtualization enabled in order to build images from within a pre-existing build image (i.e. via the build.yml file). If you run into issues with modprobe kvm_intel within the genimg script, you can fix this by removing the module and then re-inserting it with insmod kvm_intel.ko nested=1 in the directory containing the kernel module.
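For example, as root on an Intel CPU (equivalent to the insmod approach above):

# modprobe -r kvm_intel
# modprobe kvm_intel nested=1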

Creating new images

If you require additional images, study the control script to understand how the top-level boot process works. You should then prepare a disk image for your new system (name it root.img.qcow2) and write a functions file; a sketch is shown after the following list. The only required function is boot, which should call _boot with any additional arguments you want to pass to qemu. If your image boots with no additional qemu arguments, this function will likely just call _boot. You can optionally provide a number of other functions in your functions file to enable various features:

  • To enable installing packages specified in the build manifest, write an install function with the following usage: install [ssh port] [packages...]
  • To enable adding third-party package repositories, write an add_repository function: add_repository [ssh port] [name] [source]. The source is usually vendor-specific; you can use any format you want to encode repository URLs, package signing keys, etc.
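A minimal functions file for a hypothetical apt-based guest might look like this (a sketch — the ssh invocation and package manager commands are assumptions; use the stock images' functions files as the authoritative reference):

# Boot the guest with no extra qemu arguments
boot() {
    _boot
}

# install [ssh port] [packages...]
install() {
    port=$1
    shift
    ssh -p "$port" build@localhost sudo apt-get install -y "$@"
}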

In order to run builds, we require the following:

  • The disk image should be able to boot on its own; install a bootloader and set up partitions however you like.
  • Networking configured with the IPv4 address 10.0.2.15/25 and gateway 10.0.2.2. Don't forget to configure DNS, too (with QEMU's default user-mode networking, the DNS server is at 10.0.2.3).
  • SSH listening on port 22 (the standard port) with passwordless login enabled
  • A user named build to log into SSH with, preferably with uid 1000
  • git config setting user.name to builds.sr.ht and user.email to builds@sr.ht (see the example after this list)
  • Bash (temporary — we'll make this more generic at some point)
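For the git configuration, for example, run the following as the build user:

$ git config --global user.name 'builds.sr.ht'
$ git config --global user.email 'builds@sr.ht'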

Not strictly necessary, but recommended:

  • Set the hostname to build
  • Configure NTP and set the timezone to UTC
  • Add the build user to the sudoers file with NOPASSWD: ALL (see the example after this list)
  • In your functions file, set poweroff_cmd to a command that we can run over SSH to shut the machine off. If you don't, we'll just kill the qemu process.
  • It is also recommended to write a sanity_check function which takes no arguments, boots the image, runs any tests necessary to verify that everything works, and returns a nonzero status code if not.
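The sudoers entry, for example, would be:

build ALL=(ALL) NOPASSWD: ALL

and the functions file might set (a sketch, assuming sudo is installed in the guest):

poweroff_cmd="sudo poweroff"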

You will likely find it useful to read the scripts for existing build images as a reference. Once you have a new image, email the scripts to ~sircmpwn/sr.ht-dev@lists.sr.ht so we can integrate them upstream!

Additional configuration

Write an /etc/sr.ht/config.ini configuration file similar to the one you wrote on the master server. Only the [sr.ht] and [builds.sr.ht] sections are required for the runners. images should be set to the installation path of your images (/var/lib/images) and buildlogs should be set to the path where the runner should write its build logs (the runner user should be able to create files and directories here). Set runner to the hostname of the build runner. You will need to configure nginx to serve the build logs directory at http://RUNNER-HOSTNAME/logs/ in order for build logs to appear correctly on the website.
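For example (a sketch — the paths and hostname are placeholders; consult config.example.ini for the full set of options):

[sr.ht]
; ... same global settings as on the master ...

[builds.sr.ht]
images=/var/lib/images
buildlogs=/var/log/builds
runner=runner1.example.org

And a matching nginx server block to serve the logs (again a sketch):

server {
    listen 80;
    server_name runner1.example.org;

    location /logs/ {
        alias /var/log/builds/;
    }
}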

Once all of this is done, make sure the worker is compiled (with Go 1.11 or later) by running go build in the worker/ directory, then start the builds.sr.ht-worker service, and it's off to the races. Submit builds on the master server; they should run correctly at this point.
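For example (assuming a systemd unit is installed for the worker; adjust for your init system):

$ cd worker/
$ go build
# systemctl enable --now builds.sr.ht-worker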

For SSH access to (failed) builds you will need to install git.sr.ht and configure [git.sr.ht::dispatch] for buildsrht-keys.
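That section might look like this (a sketch — the binary path and user:group are assumptions; see git.sr.ht's config.example.ini):

[git.sr.ht::dispatch]
/usr/bin/buildsrht-keys=builds:builds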
