One of the principles of a continuous delivery pipeline is to use the same compiled artefact that is going to be deployed throughout the pipeline.
The reason behind this is that if an artefact is tested and then rebuilt or changed, the new artefact will be different to the one that's been proven to work and thus could fail or behave in unverified ways.
An example of this would be to take a NodeJS app that’s tested on one CD worker and then deployed on a separate CD worker using the following steps:
- CD worker one runs `npm install` to install libraries; let's say `usefulLib@2.0.0` is installed in the process
- CD worker one runs `npm run test` and the tests all pass. Yay, the app is working!
- CD worker one passes its stage and CD worker two picks up the next stage
- CD worker two doesn't have the `node_modules` CD worker one had installed, so it runs `npm install` again, but now pulls in `usefulLib@2.0.1` as an update was pushed and the dependency wasn't pinned
The fun happens when it turns out `usefulLib@2.0.1` introduces a new bug and this causes a failure in the app when it runs in production.
Lots of time is lost tracing what went wrong: the test stage results are all green, but on re-running the pipeline they start to fail, so the tests' reliability is questioned and the team loses faith in the value they bring.
These types of failures become particularly hard to track down when the unpinned dependency is a transitive one rather than a direct one.
In my current project we use a Nexus instance to mirror NPM, and there have been a number of times when it seemed like one of the dependencies Jest used was having a patch release pushed every hour. As they didn't use explicit versions, npm would try to bring in the new version of that library, only for it to not exist because our Nexus repo hadn't mirrored that version yet.
Why configuration testing?
One of the most common means of creating an artefact is to create a Docker container.
Docker containers not only allow you to have your code as an artefact but they allow you to have the entire environment packaged into it, greatly increasing the reliability of the testing carried out against the application.
A standard method of deploying an application with Docker is to find a base image that can handle the language being used, copy the files over into the container and then use an entry point that runs the application.
While the majority of the time it’s easy enough to catch issues with the container during the build phase by looking at the logs, it’s always better to have checks in place that can halt the pipeline if the container isn’t created correctly.
There are a few tools out there to help with this. They provide additional functionality to test frameworks to verify:
- Packages are installed and the correct version is installed
- Files exist, contain the correct values and have the correct permissions
- Services exist, are in the correct state (running, stopped) and can only be run by the correct user groups
- Ports are opened and applications are listening on them
These frameworks aren’t limited to just Docker and will generally support testing the host machine, virtual machines and Docker images.
In my previous team we used ServerSpec, an RSpec-based configuration testing tool which helped us verify that the different levels (OS, application and client configuration) of our Dockerised app were set up correctly.
However, as the project I'm currently working on in my spare time is using Python, I decided to use TestInfra, which adds configuration testing functionality to PyTest.
Using TestInfra with Docker
TestInfra supports a number of host types and platforms including Docker but the only documentation of this is an example test case in the Examples section of the project’s docs.
In order to test Docker containers with TestInfra, a PyTest fixture needs to be set up. This fixture will run a Docker container and set up the protocol for interacting with that running instance.
The example in the TestInfra documentation is pretty basic, but you can use it with Python's `unittest` library in order to access better assertions and test lifecycle functionality.
Once you’ve got access to the Docker container in your test you can then use a number of TestInfra’s modules to check files, packages, sockets and more in your tests.
An example test suite — Checking Python runtime
In my current hobby project I am building a Discord bot that requires Python 3.7 to run, as well as two libraries: Discord.py and Pokedex.py.
In order to access the PyTest fixture used for setting up TestInfra from within a
`unittest.TestCase` subclass, you need to create a fixture that attaches the TestInfra host to the requesting test class.
You then access the fixture by using the
`pytest.mark.usefixtures` decorator on your
`TestCase` subclass, which will then add
`host` as a class variable (accessible via `self.host`).
For this, my test suite checks that:
- The `requirements.txt` file was copied into the image correctly
- The `requirements.txt` file has both libraries needed to run the bot in it
- The `pip` instance used in the container is returning the correct versions of both libraries, to verify they were installed correctly
- The correct version of Python is run, as the discord.py version I am using requires Python 3.7 and my Docker entry point uses the `python` command
A side note on the Python version check: be careful when using the
`package` module TestInfra provides, as this will use the underlying OS's package manager to report the version installed.
I'm using the
`python:3.7` Docker image as my base, which uses
`dpkg` for package management, and the result I got from running
`self.host.package('python')` was 3.5, although the
`python` command was a symlink to Python 3.7.
Integrating configuration testing into your CD pipeline
Configuration testing checks that the Docker container (or, if not using containerisation, the environment the application will be deployed to) is configured correctly, and as such should be used as exit criteria for that phase of your CD pipeline.
I’m using Travis to build and deploy my bot’s Docker container to ECS. In order to make things easier for myself I’ve created a Makefile that has four stages:
- Build the Docker image
- Test the built Docker image
- Publish the Docker image to DockerHub
- Update my ECS task-definition with the new Docker image version and deploy
I then use two combined stages: build-and-test (run as part of my CI) and publish-and-deploy (run on a version release).
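As a sketch (the image name, deploy script and stage names are placeholders rather than my actual Makefile), the four stages plus the two combined ones might look like:

```makefile
# Placeholder image name
IMAGE = my-bot-image

build:
	docker build -t $(IMAGE) .

test:
	# Run the TestInfra suite against the freshly built image
	pytest tests/

publish:
	docker push $(IMAGE)

deploy:
	# Placeholder: updates the ECS task definition with the new image tag
	./deploy.sh $(IMAGE)

# Combined stages: CI runs build+test, a version release runs publish+deploy
ci: build test
release: publish deploy
```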
The end result is a really clean `.travis.yml` file that is easy to understand.
If you’re building Docker images for your deployments and you’re not running checks against the structure of the compiled Docker image then TestInfra (or ServerSpec) can help save you hours of debugging configuration issues.
Adding the checks into your CI/CD pipeline isn't hard, especially when you use something like a Makefile to abstract the building and testing phases into one stage.