Using Docker as a Build Environment

Docker is a double-edged sword. As a program, it practically oozes usefulness, marrying the best aspects of virtual machines with the modularity and portability of community repositories like NPM or Crates. That latter aspect brings with it all of the bloat and danger that comes from blindly injecting third-party dependency chains into a project, but carelessness isn't a requirement of the technology.

Web site developers adopted Docker early and have remained its most vocal proponents, but Docker can be used for much more than running LAMP stacks. The trick is thinking bigger.


Like a proper virtual machine, a Docker container encapsulates the sum total of a virtual environment, and that environment is kept more or less separate from the host machine. With dependencies nice and isolated, Docker makes it possible to run an Ubuntu-based image under macOS, a Void-based image under Debian, or even an Arch-based image under Arch. This is why, for example, so many web developers talk about Docker LAMP stacks; it is pretty much the only way to ensure that what you're seeing on your MacBook Pro will behave similarly once the site is moved to its production server running something vastly different like CentOS.
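Seeing that isolation in action takes a single command (this assumes Docker is installed and can pull images from Docker Hub):

```shell
# Launch a throwaway Ubuntu container on whatever host OS you have
# and ask it to identify itself; --rm cleans up the container when
# the command exits.
docker run --rm ubuntu:18.04 cat /etc/os-release
```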

But there are a few key differences between Docker containers and virtual machines.

First and foremost, size. Container images are usually incredibly slim, stripped of anything not needed for the particular containerized context, while virtual machines are seeded from standard installation media, requiring 10-15GB of free space. Containers can also be shared as small Dockerfile build scripts which are more like recipes than the actual meal.

Another big difference is persistence, or lack thereof. Docker images are set in stone at the time they are built. Kind of like a LiveCD, each time you connect, you get a fresh instance. You can noodle all you like, but once the connection closes, everything you did will vanish. This means no matter how many times you open the door, the Clean Room will always be clean! Similarly, you can run multiple instances of the same Docker container at the same time and they won't mess with each other.
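You can see the statelessness for yourself (assuming Docker is installed and the debian:stretch image is available):

```shell
# Scribble a file inside a throwaway container...
docker run --rm debian:stretch sh -c 'echo noodling > /tmp/scratch.txt'

# ...then start a fresh instance of the same image: the file is gone.
# (This second command fails, because nothing persisted.)
docker run --rm debian:stretch ls /tmp/scratch.txt
```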

Take the ability to cobble together arbitrary environments, isolated from the host machine and stateless, in a compact and portable way, and you've got yourself the makings of a Clean Room.


In a previous article, we broke down the process for repackaging the Debian Backports edition of Nginx (frozen in the 1.14 branch) to enable TLS 1.3 (which isn't present in Stretch's old copy of OpenSSL). To summarize, there were basically four steps:

  1. Install a shitload of build dependencies;
  2. Download the package sources;
  3. Tweak the package sources;
  4. Build and install the modified package.

To keep things simple, that article assumed the package was being built on the host machine. That's fine for a beginner's tutorial, but in general you don't want to do non-web-things on a web server, especially things like installing hundreds of packages that won't be needed ever again (at least until the next rebuild) and tying up the CPU with heavy computational tasks for who knows how long.

So let's take another stab at the task, this time using a Docker container as our build environment, running everything from some unrelated machine.

Building the Build Image.

The first thing we need to do is write a Dockerfile to get us to a workable Clean Room state. We'll start by importing debian:stretch — i.e. the OS used by the server — and make sure as many "expected" tools are installed as possible. Things like git or vim can always be installed later, but if they aren't part of the Docker image, they'll have to be reinstalled with every connection.

# FILE: Dockerfile

FROM debian:stretch

# Install a bunch of build shit.
RUN	apt-get update \
	&& apt-get install -y \
		apt-transport-https \
		aria2 \
		bison \
		build-essential \
		cmake \
		curl \
		flex \
		git \
		libfl-dev \
		libgmp-dev \
		libmpc-dev \
		libmpfr-dev \
		libz3-dev \
		subversion \
		sudo \
		unzip \
		vim

Before we continue, we need to update the APT sources to:

# FILE: sources.list

deb stretch main
deb-src stretch main
deb stretch/updates main
deb-src stretch/updates main
deb stretch-updates main
deb-src stretch-updates main
deb stretch-backports main
deb-src stretch-backports main
deb stretch main

Dockerfiles get cluttered fast, so save the above to a file called "sources.list", and then continue the Dockerfile with:

# FILE (cont): Dockerfile
# Update APT sources from our local copy.
COPY	sources.list /etc/apt/sources.list

# Run updates again.
RUN	aria2c -o /etc/apt/trusted.gpg.d/php.gpg -x1 \
	&& apt-get update \
	&& apt-get dist-upgrade -y \
	&& apt-get build-dep -y gcc \
	&& apt-get build-dep -y clang

We had to wait until this stage to update the sources.list because Stretch's APT cannot talk to https repositories until apt-transport-https is installed. But now that we've done that, we want to re-run updates (just in case), and now seems like as good a time as any to grab the build dependencies for gcc and clang, which the build-dep subcommand fetches for us.

Oh, did I forget to mention? Rather than stick with Debian's elderly versions, we're going to build gcc and clang from source so we can benefit from the latest and greatest! Compiling compilers from source takes fucking forever, so we want to handle that within the image itself so that our Clean Room has it ready-to-use when we boot to it. Speaking of, let's build some builders!

# FILE (cont): Dockerfile
# Install Ninja.
RUN	aria2c -o -x2 \
	&& unzip \
	&& rm \
	&& mv ninja /usr/bin/ninja

# Build gcc 8.2.0.
RUN	cd /tmp \
	&& aria2c -o gcc.tar.gz -x2 \
	&& tar xf gcc.tar.gz \
	&& cd /tmp/gcc-gcc-8_2_0-release \
	&& contrib/download_prerequisites \
	&& mkdir build \
	&& cd build \
	&& ../configure -v --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu --prefix=/usr/local/gcc-8.2 --enable-checking=release --enable-languages=c,c++,fortran,go --disable-multilib --program-suffix=-8.2 \
	&& make -j4 \
	&& make install \
	&& cd /tmp \
	&& rm -rf /tmp/gcc-gcc-8_2_0-release /tmp/gcc.tar.gz

# Note: the -j4 above refers to the number of CPU threads to
# use. Adjust the 4 to the highest value your system can handle.

# Use our gcc as THE gcc.
RUN	update-alternatives --install /usr/bin/gcc gcc /usr/local/gcc-8.2/bin/gcc-8.2 100 --slave /usr/bin/g++ g++ /usr/local/gcc-8.2/bin/g++-8.2

# Build clang/llvm 7.0.1 (against our gcc 8.2.0).
RUN	cd /tmp \
	&& aria2c -o llvm.tar.xz -x2 \
	&& tar xf llvm.tar.xz \
	&& mv llvm-7.0.1.src llvm \
	&& cd /tmp/llvm/tools \
	&& aria2c -o clang.tar.xz -x2 \
	&& tar xf clang.tar.xz \
	&& mv cfe-7.0.1.src clang \
	&& cd /tmp/llvm/tools/clang/tools \
	&& aria2c -o tools.tar.xz -x2 \
	&& tar xf tools.tar.xz \
	&& mv clang-tools-extra-7.0.1.src clang-tools-extra \
	&& cd /tmp/llvm/projects \
	&& aria2c -o rt.tar.xz -x2 \
	&& tar xf rt.tar.xz \
	&& mv compiler-rt-7.0.1.src compiler-rt \
	&& mkdir /tmp/llvm/build \
	&& cd /tmp/llvm/build \
	&& cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DGCC_INSTALL_PREFIX=/usr/local/gcc-8.2/lib/../lib64 -DCMAKE_INSTALL_PREFIX=/usr/local -G "Ninja" .. \
	&& ninja \
	&& ninja install \
	&& cd /tmp \
	&& rm -rf /tmp/llvm /tmp/llvm.tar.xz
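Incidentally, rather than hard-coding -j4 as in the note above, the thread count can be detected at build time; a small sketch, using nproc (part of coreutils, which the Debian base image includes):

```shell
# nproc reports the number of CPU threads available to the process.
THREADS="$(nproc)"
echo "parallel build jobs: ${THREADS}"

# In the Dockerfile, the gcc step could then use: make -j"$(nproc)"
```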

We're almost done! All we need to do is tie off a few loose ends and make any final tweaks to the image environment before we build it. Docker caches each instruction as a layer, and changing one step invalidates every layer after it, so because gcc and clang are such big build tasks, anything light and prone to change belongs after them; that way a tweak won't trigger a rebuild of our builders too.

For our purposes, we just want to copy over a .bashrc and .profile to make CLI connections a little prettier, and set the working directory to a volume we'll use for easy host↔Docker file sharing.

# FILE (cont): Dockerfile
# If you have a preferred BASH setup, uncomment the following:
# COPY	.profile /root/.profile
# COPY	.bashrc /root/.bashrc

# Enable syntax highlighting in vim.
RUN	echo "syntax on" > /root/.vimrc

# Set the working directory to /mnt.
WORKDIR	/mnt

Building and Running the Docker Clean Room.

At this point, you should have a file called Dockerfile, a sources.list file, and maybe a .bashrc and/or .profile. In your terminal, cd to the directory containing those files and run:

# Build it with:
docker build -t clean/stretch:latest -f Dockerfile .

# Run it with:
docker run --rm -v /full/path/to/host/share/folder:/mnt -it --name clean_stretch clean/stretch /bin/bash

You only need to build the image once. After that, you can just execute the docker run… bit to open a BASH session in your clean environment!

Note: The above run example bind-mounts /full/path/to/host/share/folder on the host machine to /mnt inside the container. You can use this to easily pass files between the host and the clean room, such as compiled binaries. If you're doing something else, like pushing to git or using scp from within the Docker session, you can skip that option.
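As a quick illustration (the paths here are hypothetical), the host side of that mapping is just an ordinary folder:

```shell
# On the host: create the folder that will back /mnt and seed it.
mkdir -p "$HOME/docker-share"
echo "hello from the host" > "$HOME/docker-share/hello.txt"

# A container started with -v "$HOME/docker-share":/mnt sees this
# same file at /mnt/hello.txt, and anything it writes to /mnt
# survives after the session closes.
cat "$HOME/docker-share/hello.txt"
```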

Back to Nginx

With our Docker clean room built, we can return to our Nginx example. That process would now look like:

# Connect to Docker.
docker run --rm -v /here/folder:/mnt -it --name clean_stretch clean/stretch /bin/bash

# Remember, Nginx isn't installed here, so we might have more
# dependencies to fetch:
apt-get update && apt-get build-dep -t stretch-backports nginx-full

# Fetch the sources:
mkdir /tmp/nginx && cd /tmp/nginx && apt-get source -t stretch-backports nginx-full

# The source unpacks into a versioned directory; hop inside and
# edit the build rules:
cd nginx-*/
vim debian/rules

# Add "--with-openssl-opt=enable-tls1_3 \" to the common rules
# right after "--with-ld-opt". If you have other tweaks, tweak
# them now, then save and exit vim.

# Build Nginx:
dpkg-buildpackage -b

# When it's done, copy or move the relevant .deb files to /mnt,
# then look for them on your host machine. Copy them to somewhere
# more permanent before shutting down the Clean Room.
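That copy could look something like this, assuming you're still inside the unpacked source directory (dpkg-buildpackage drops the .debs one level up):

```shell
# Move the freshly built packages into the shared volume so they
# appear in the mapped folder on the host.
cp ../*.deb /mnt/
```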

# Disconnect from Docker like any other terminal connection:
exit
Josh Stoik
27 December 2018