Chapter 8. A Gentoo build server

Table of Contents

Building a build server
Setup build host
Enabling the web server


To support multiple Gentoo Linux hosts, we will be creating a build server. But just having one server build packages isn't sufficient - we need to be able to

  • have separate build environments based on a number of different "templates" that will be used in our architecture.

    Because of Gentoo's flexibility, we will have systems with different USE flags, CFLAGS or other settings. All of these settings, however, need to be constant for the build service to work well. If the deviations are small, a client can take most packages from the build server and compile the remaining ones locally, but in our architecture we will assume that the build server offers the binaries for all systems and that the systems themselves usually do not need to build any code.

  • support phased builds across the architecture

    Our architecture supports both production and non-production systems. The non-production systems are used to stage updates and validate changes before pushing them to production (both for quality assurance and for better service availability). This does mean that we need to be able to "fix" (freeze) a repository once we deem it stable enough, and then work on a new one.

  • provide access to stable tree

    Any client that wishes to use the build server will need access to that build server's Portage tree and overlays (if applicable). As the tree is only needed during updates, there is little point in keeping it available on the clients at all times.

  • limit host access to authorized systems

    We want to limit access to the built packages to authorized systems only. This is both to prevent "production" systems from accidentally taking in non-production data, and to ensure that only systems we manage ourselves access the service. By using host-based authorization, we can also follow up on systems more easily (for instance through logging).

and this is just a subset of the requirements involved in the build server setup.

When you make a simple build server design, you immediately see that a multitude of other components are needed. These include

  • a DNS server to offer round-robin access to the build servers

  • a certificate server to support management of certificates in the organization

  • a highly available NFS server to host the built packages on

  • a scheduler to execute builds

Some of these have been discussed already, others can be integrated nicely afterwards.

Building a build server

Setting up a build server requires building the server first. It comes as no surprise that we will not benefit from a build server initially, but once the build infrastructure is set up, we might be able to use the existing build results (which will be stored on the NFS server) to recreate the build infrastructure.

Setup build host

The build host is most likely a dedicated (i.e. non-virtualized) system because it will be very CPU and memory hungry and can use every resource it can get. As a consequence, our build host(s) should be set up in a multi-active configuration. Each build host then runs a web server (such as lighttpd) which exposes the built package locations as the document roots of its virtual hosts. To support multiple build profiles, we will use different IP-based virtual hosts and control access to them through certificate-based access lists, so that only authorized hosts can download packages from the server. I opt for web-based disclosure of the packages instead of network mounts as it offers more control, easier logging, and so on.

The build server itself will use a simple configuration scheme and a few scripts that are either triggered manually or scheduled centrally. These scripts will pull in the necessary changes (be it updates in overlays or in the main portage tree) and do full system builds. The resulting packages will be stored in NFS-mounted locations (where the NFS server is set up to be highly available).

Build profile configuration

Let's first start with the build profile configuration. We will use catalyst, Gentoo's release management tool that is used to generate the weekly releases. With some tweaks, it can be easily used to support our build server.

Our build setup will do something akin to the following:

Daily:
+- Fetch new portage tree snapshot
+- For each build profile:
   +- Put snapshot in <storedir>/snapshots
   +- Build stage3 (using catalyst)
   `- Build grp (using catalyst)

Weekly (run before the daily build):
`- For each build profile
   `- Update spec file to now point to most recent stage3

The setup assumes that a working stage3 tarball already exists and is available for further use. If that is not the case, you will need to get a fresh stage3 first and put it in <storedir>/builds. From that point onwards, this stage3 is used as the seed for each new stage3 build. The binary packages are kept, as they will be used by the binary host later. The grp build (grp stands for Gentoo Reference Platform) will then build the remainder of the binary packages.

So let's first install catalyst.

# emerge catalyst

Next, we create an initial configuration for catalyst. One set of configuration files matches one build profile, so you'll need to repeat the next steps for each build profile you want to support. We'll start with the generic catalyst configuration, using /srv/build as the base directory for all related activities, under which each build profile gets its own top-level directory. In the next examples, "basic_bp" is the name of this build profile; substitute your own profile names accordingly.

# mkdir -p /srv/build/basic_bp/{portage,portage_settings,snapshot_cache,catalyst_store}
# cd /etc/catalyst
# cat catalyst-basic_bp.conf
options="autoresume metadata_overlay pkgcache seedcache snapcache"
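
Besides the options line, catalyst needs to know where to store its files. The remaining storage-related settings for this build profile could look as follows (a sketch; the exact variable names depend on your catalyst version, and the paths follow the /srv/build/basic_bp layout created above):

storedir="/srv/build/basic_bp/catalyst_store"
portdir="/srv/build/basic_bp/portage"
snapshot_cache="/srv/build/basic_bp/snapshot_cache"
distdir="/usr/portage/distfiles"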

Next, create the stage3 and grp specification files (named stage3-basic_bp.conf and grp-basic_bp.conf here). Templates for these files can be found in /usr/share/doc/catalyst-<version>/examples.

# cat stage3-basic_bp.conf
## Static settings
subarch: amd64
target: stage3
rel_type: default
profile: hardened/linux/amd64/no-multilib/selinux
cflags: -O2 -pipe -march=native
portage_confdir: /srv/build/basic_bp/portage_settings
## Dynamic settings (might be passed on as parameters instead)
# Name of the "seed" stage3 file to use
source_subpath: stage3-amd64-20120401
# Name of the portage snapshot to use
snapshot: 20120406
# Timestamp to use on your resulting stage3 file
version_stamp: 20120406

# cat grp-basic_bp.conf
(... similar to the stage3 spec ...)
grp/packages: lighttpd nginx cvechecker bind apache ...

The portage_confdir variable tells catalyst where to find the portage configuration specific files (all those you usually put in /etc/portage).
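
For instance, for the basic_bp profile this directory could contain the usual /etc/portage files (the listing below is just an illustration):

# ls /srv/build/basic_bp/portage_settings
package.accept_keywords  package.mask  package.use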

The source_subpath variable tells catalyst which stage3 file to use as the seed. You can opt to point this to the stage3 built the day before or to a fixed value, whatever suits you best. This file needs to be stored inside /srv/build/basic_bp/catalyst_store/builds.

Copy the portage snapshot (here, portage-20120406.tar.bz2) and related files into /srv/build/basic_bp/catalyst_store/snapshots. If you would rather create your own snapshot, populate /srv/build/basic_bp/portage with the tree you want to use, and then call catalyst to generate a snapshot from it:

# catalyst -c catalyst-basic_bp.conf -s 20120406

Note that the grsecurity chroot restrictions, when enabled in the kernel, will prohibit catalyst from functioning properly. It is advisable to disable the chroot restrictions while building through catalyst. You can toggle them through sysctl commands, as shown below.
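
For example, the chroot-related restrictions could be switched off before a build (and re-enabled afterwards) along these lines. The exact sysctl keys depend on your grsecurity kernel configuration, and kernel.grsecurity.grsec_lock must not be set yet:

# sysctl -w kernel.grsecurity.chroot_deny_mount=0
# sysctl -w kernel.grsecurity.chroot_deny_chmod=0
# sysctl -w kernel.grsecurity.chroot_deny_mknod=0
# sysctl -w kernel.grsecurity.chroot_deny_chroot=0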

Next, execute the two catalyst runs: one to generate the new stage3 (which will be used by subsequent runs as the new "seed" stage3) and one to do the grp build (which generates the binary packages).

# catalyst -c catalyst-basic_bp.conf -f stage3-basic_bp.conf
# catalyst -c catalyst-basic_bp.conf -f grp-basic_bp.conf

When the builds have finished, you will find the new stage3 in /srv/build/basic_bp/catalyst_store/builds and the binary packages under /srv/build/basic_bp/catalyst_store/packages. These packages can then be moved to the NFS mount later.

Automating the builds

To automate the build process, remove the date-specific settings from the spec files and pass them on the command line instead, like so:

# catalyst -c catalyst-basic_bp.conf -f stage3-basic_bp.conf \
    -C version_stamp=20120406
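
If the snapshot is refreshed daily as well, its identifier can be passed the same way; catalyst's -C option accepts multiple key=value overrides (shown here under the assumption that the snapshot carries the same date stamp):

# catalyst -c catalyst-basic_bp.conf -f stage3-basic_bp.conf \
    -C version_stamp=20120406 snapshot=20120406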

Next, if the grp build finishes successfully as well, we can:

  1. synchronize the binary packages from /srv/build/basic_bp/catalyst_store/packages/default/stage3-amd64-<version_stamp> to the NFS mount for this particular profile, as well as rsync the portage tree used (as these two need to be synchronized).

  2. copy the portage tree snapshot to the NFS mount for this particular profile, while signing it to support GPG-validated web-rsyncs.

  3. rsync the content of the portage_confdir location to the NFS mount

The combination of these three allows us to easily update the clients:

  1. Update /etc/portage (locally) using the stored portage_confdir settings

  2. Update the local portage tree using emerge-webrsync (with GPG validation)

  3. Update the system, using the binaries created by the build server (see the make.conf sketch after this list)

  4. Update the eix catalogue
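
For the third step, the clients need to know where to fetch the binary packages and snapshots from. A minimal /etc/portage/make.conf sketch for a client of the basic production edition could look like this; the host alias matches the virtual hosts set up later, the URL paths assume the packages and snapshots are published relative to the document root, and fetching through the certificate-protected virtual hosts may require adapting FETCHCOMMAND:

# Binary packages offered by the build server (used by emerge --getbinpkg / -g)
PORTAGE_BINHOST="https://basic-production/packages"
# Snapshots offered by the build server (used by emerge-webrsync)
GENTOO_MIRRORS="https://basic-production"
# Validate the snapshot signature during emerge-webrsync
# (requires PORTAGE_GPG_DIR pointing at a keyring with the signing key)
FEATURES="webrsync-gpg"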

The following two scripts support this in one go (they are fairly small scripts - modify them to your liking), although we will be using a different method for updating the systems later.

# cat autobuild
#!/bin/sh
# Build Profile (e.g. basic_bp), passed in as the first argument
BP="${1}"
# Time Stamp
TS="$(date +%Y%m%d)"
# Build Directory of this profile
BD="/srv/build/${BP}"
# Packages Directory (catalyst's package cache for this stage/version)
PD="${BD}/catalyst_store/packages/default/stage3-amd64-${TS}"

catalyst -c /etc/catalyst/catalyst-${BP}.conf \
         -f /etc/catalyst/stage3-${BP}.conf \
         -C version_stamp=${TS} || exit 1
catalyst -c /etc/catalyst/catalyst-${BP}.conf \
         -f /etc/catalyst/grp-${BP}.conf \
         -C version_stamp=${TS} || exit 2
rsync -avug ${PD}/ /nfs/${BP}/packages || exit 3
cp ${BD}/catalyst_store/snapshots/portage-${TS}.tar.bz2 /nfs/${BP}/snapshots || exit 4
# Detached signature, as used for GPG-validated emerge-webrsync
gpg --armor --detach-sign \
    --output /nfs/${BP}/snapshots/portage-${TS}.tar.bz2.gpgsig \
    /nfs/${BP}/snapshots/portage-${TS}.tar.bz2 || exit 5
tar cvjf /nfs/${BP}/portage_settings-${TS}.tar.bz2 -C ${BD}/portage_settings . || exit 6
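
The autobuild script takes the build profile as its only argument, so it can be run manually or scheduled (for instance through cron) on the build host; the location and schedule below are purely illustrative:

# crontab -l
# Daily build for the basic_bp profile
30 1 * * * /usr/local/bin/autobuild basic_bp
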
# cat autoupdate
#!/bin/sh
# Build Profile (e.g. basic), passed in as the first argument
BP="${1}"
# Edition (production or testing), passed in as the second argument
ED="${2}"
# Time Stamp to use, passed in as the third argument
TS="${3}"

wget https://${BP}-${ED}/portage_settings-${TS}.tar.bz2
tar xvjf portage_settings-${TS}.tar.bz2 -C /etc/portage
restorecon -R /etc/portage
emerge-webrsync --revert=${TS}
emerge -uDNg world
# Update the eix catalogue
eix-update
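
On a client, an update then boils down to calling the script with the build profile, edition and snapshot date; this assumes the <profile>-<edition> host aliases that are set up in the next section:

# ./autoupdate basic production 20120406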

Promoting editions

The automated builds do daily packaging for the "testing" edition. Once in a while though, you will want to take a snapshot of it and promote it to production. Most of these activities are done on the NFS server (we will be using LVM snapshots). The clients still connect to the build servers for their binaries (although this is not mandatory - you can host other web servers with the same mount points elsewhere to take the stress off the build servers), but pull from the snapshot location (by using a different edition).
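
On the NFS server, such a promotion could amount to taking an LVM snapshot of the volume that holds the testing edition and exporting that snapshot as the production edition. The volume group, logical volume names and mount point below are purely illustrative:

# lvcreate --snapshot --size 10G --name basic-prod-20120406 /dev/vg_build/basic-testing
# mount /dev/vg_build/basic-prod-20120406 /nfs/build/production/basic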

Enabling the web server

We will be using lighttpd as the web server of choice for the build server. The purpose of the web server is to disclose the end results of the builds to the clients.

Installing and configuring lighttpd

First install lighttpd:

# emerge lighttpd

Next, configure the web server with the sublocations you want to use. We will be using IP-based virtual hosts because we will be using SSL-based authentication and access control. Since the SSL handshake occurs before the HTTP headers are sent, we cannot use name-based virtual hosts. The downside is that you will need an additional IP address for each build target:

basic-production    (IP1)
basic-testing       (IP2)
ldap-production     (IP3)
ldap-testing        (IP4)

These aliases are used for the packages (and tree) of the build profiles "basic" and "ldap" (you can create as many as you want). Each build profile has two "editions": one is called testing (and contains the daily generated packages), the other production (which contains the validated set). You might create a "snapshot" edition as well, to prepare for a new production build.

# cat lighttpd.conf
var.log_root            = "/var/log/lighttpd"
var.server_root         = "/var/www/localhost"
var.state_dir           = "/var/run/lighttpd"

server.use-ipv6         = "enable"
server.username         = "lighttpd"
server.groupname        = "lighttpd"
server.document-root    = server_root + "/htdocs"
server.pid-file         = state_dir + "/lighttpd.pid"

server.errorlog         = log_root + "/error.log"

$SERVER["socket"] == "IP1:443" {
ssl.engine             = "enable"
ssl.pemfile            = "/etc/ssl/private/basic-production-server.pem"            = "/etc/ssl/certs/production-crtchain.crt"
ssl.verifyclient.activate      = "enable"
ssl.verifyclient.enforce       = "enable"
ssl.verifyclient.depth         = 2
ssl.verifyclient.exportcert    = "enable"
ssl.verifyclient.username      = "SSL_CLIENT_S_DN_CN"
server.document-root    = "/nfs/build/production/basic"
server.error-log        = log_root + "/basic/error.log"
accesslog.filename      = log_root + "/basic/access.log"

$SERVER["socket"] == "1IP2:443" {
ssl.engine              = "enable"
ssl.pemfile             = "/etc/ssl/private/basic-testing-server.pem"             = "/etc/ssl/certs/testing-crtchain.crt"
ssl.verifyclient.activate      = "enable"
ssl.verifyclient.enforce       = "enable"
ssl.verifyclient.depth         = 2
ssl.verifyclient.exportcert    = "enable"
ssl.verifyclient.username      = "SSL_CLIENT_S_DN_CN"
server.document-root    = "/nfs/build/testing/basic"
server.error-log        = log_root + "/basic.testing/error.log"
accesslog.filename      = log_root + "/basic.testing/access.log"

The configuration file can be extended with other virtual hosts (it currently only includes the definitions for the "basic" build profile). As you can see, different certificates are used for each virtual host. More importantly, the ssl.ca-file points to a chain of certificates that is only applicable to either production or testing (and not both). This ensures that production systems (with a certificate issued by the production CA) cannot successfully authenticate against a testing address and vice versa.
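
The chain files themselves can be created by concatenating the PEM-encoded CA certificates that are allowed to issue client certificates for that edition (the CA file names below are illustrative; use the ones from your own certificate authority setup):

# cat /etc/ssl/certs/rootca.crt /etc/ssl/certs/productionca.crt > /etc/ssl/certs/production-crtchain.crt
# cat /etc/ssl/certs/rootca.crt /etc/ssl/certs/testingca.crt > /etc/ssl/certs/testing-crtchain.crt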

Managing lighttpd

The log files for the lighttpd server are stored in /var/log/lighttpd. It is wise to configure logrotate to rotate the log files. In the near future, we will reconfigure this towards a central logging configuration.
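
A logrotate definition for these logs could look as follows; this is a sketch to be placed in /etc/logrotate.d/lighttpd, with retention adjusted to your needs:

/var/log/lighttpd/*.log /var/log/lighttpd/*/*.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        /etc/init.d/lighttpd reload > /dev/null 2>&1 || true
    endscript
}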
