An approach to unit testing C code
Here at Katalix we're big fans of unit testing, both as an aid to development and as a way to reduce bugs.
Unlike many more modern languages, C doesn't bundle any support for testing. There are plenty of options out there for testing your C code, but there's not one clear and obvious approach to follow.
Because there's not an obvious "idiomatic C way" to approach unit testing, we've had to evolve our own methods.
In this article we take a look at how we go about unit testing a C project.
Design for test
Without doubt the largest impediment to unit testing any code, regardless of language, is architecture.
Projects with lots of global data, high degrees of coupling between components, and poor delineation of responsibilities are harder to test!
If you're in the unenviable position of having an existing codebase which isn't amenable to testing, the first challenge is to refactor the code to make testing possible.
Doing so is outside the scope of this article, but we recommend Michael Feathers' excellent Working Effectively with Legacy Code as a good resource to get started.
We'll assume for the rest of this article that you have either a new project where you can design for testability from the start, or an existing project which is already well structured.
When considering structure in a C program, here's what we're looking for:
- Code is divided up into logical modules which group related functionality together.
- Modules provide a header file containing an API (type definitions, data structures, and function prototypes) which is the primary interface for the rest of the program.
- Modules avoid the use of global or file-scoped data, preferring instead to have a state structure which is explicitly initialised and cleaned up by the caller of the module API.
- Modules hide internal implementation as much as possible.
- Module dependencies are injected during initialisation.
Code structured in this way is generally easy to test.
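For example, a header for a hypothetical module_x designed along these lines might look like the following sketch (the names and the dependency fields are illustrative, not from a real project):

/* module_x.h: the public API for a hypothetical module. */
#ifndef MODULE_X_H
#define MODULE_X_H

#include <stdio.h>

/* Opaque state structure: the internals stay hidden in module_x.c. */
struct module_x;

/* Dependencies are injected at initialisation rather than reached
 * for via globals, so tests can substitute stubs.
 */
struct module_x_deps {
    FILE *log;              /* where to send diagnostics */
    long (*now)(void);      /* timestamp source */
};

/* State is explicitly created and destroyed by the caller. */
struct module_x *module_x_new(const struct module_x_deps *deps);
int module_x_handle_event(struct module_x *x, int event);
void module_x_free(struct module_x *x);

#endif /* MODULE_X_H */

A test can stub the log stream and the clock simply by passing different values in module_x_deps, with no link-time tricks required.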
Testing a module's internals
It's better to test code behaviour rather than implementation: implementation is subject to change, while behaviour should remain the same.
This said, it is sometimes useful to test inside a module in order to exercise code in a way that would be difficult using the module API alone.
In these situations we make use of a minimal test framework inspired by MinUnit.
The entire framework, including a macro to define a main function, is defined in a single header file.
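For flavour, the original MinUnit is small enough to quote in full. The MU_MAIN macro below is not part of MinUnit: it's a hypothetical sketch of the kind of main-defining macro just described.

/* The original MinUnit, in its entirety. */
#define mu_assert(message, test) do { if (!(test)) return message; } while (0)
#define mu_run_test(test) do { char *message = test(); tests_run++; \
                               if (message) return message; } while (0)
extern int tests_run;

/* A hypothetical main-defining macro in the same spirit. */
#include <stdio.h>

#define MU_MAIN(all_tests)                              \
    int tests_run = 0;                                  \
    int main(void)                                      \
    {                                                   \
        char *result = all_tests();                     \
        if (result)                                     \
            printf("FAIL: %s\n", result);               \
        else                                            \
            printf("ALL %d TESTS PASSED\n", tests_run); \
        return result != 0;                             \
    }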
Tests can be embedded directly inside the module source file, excluded from the normal build using conditional compilation:
#ifdef ENABLE_XX_TESTS
#include "test/minunit.h"
/* tests are defined here */
#endif /* ENABLE_XX_TESTS */
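Filled out, the embedded tests might look something like this. It's a sketch: parse_header is a hypothetical static helper inside the module, and we assume the MinUnit-style macros shown above:

#ifdef ENABLE_XX_TESTS
#include "test/minunit.h"

/* Exercise a static helper which the module API doesn't expose. */
static char *test_parse_header_rejects_short_input(void)
{
    mu_assert("parse_header accepted a short buffer",
              parse_header(NULL, 0) < 0);
    return 0;
}

static char *all_tests(void)
{
    mu_run_test(test_parse_header_rejects_short_input);
    return 0;
}

MU_MAIN(all_tests)
#endif /* ENABLE_XX_TESTS */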
The test application is built by adding a make rule which builds the module source with ENABLE_XX_TESTS defined in the pre-processor flags.
This allows us to test code which isn't visible to the rest of the application.
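With Automake, for example, the extra rule is just another program entry whose per-target pre-processor flags define ENABLE_XX_TESTS (names are illustrative):

noinst_PROGRAMS += module_x_internal_test
module_x_internal_test_SOURCES = src/module_x.c
module_x_internal_test_CPPFLAGS = -DENABLE_XX_TESTS
TESTS += module_x_internal_test

Because the flags are per-target, Automake compiles a separate copy of module_x.c for this binary, leaving the normal build untouched.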
An alternative approach would be to break the internal code out as its own module, and then test at the newly created module API.
Testing a module's API
There are plenty of libraries implementing unit test frameworks for C.
We tend to favour CUnit since it's simple, minimal, and feature complete for what we need.
When testing a module's API we implement a CUnit suite in a dedicated C file for that module. That is then built into a test application using a shared C file of boilerplate code which defines the application main function.
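For illustration, a suite file for the hypothetical module_x from earlier might look like this. The sketch uses the standard CUnit registry calls; the module_x_register_tests() hand-off to the shared boilerplate is an assumed project convention, not part of CUnit:

#include <CUnit/CUnit.h>
#include "module_x.h"

static long fake_now(void) { return 0; }

static void test_new_rejects_null_deps(void)
{
    /* The API should refuse to initialise without its dependencies. */
    CU_ASSERT_PTR_NULL(module_x_new(NULL));
}

static void test_create_destroy(void)
{
    struct module_x_deps deps = { .log = stderr, .now = fake_now };
    struct module_x *x = module_x_new(&deps);

    CU_ASSERT_PTR_NOT_NULL_FATAL(x);
    module_x_free(x);
}

/* Called by the shared boilerplate main, which is assumed to have
 * initialised the CUnit registry before calling us and to run the
 * registered suites afterwards.
 */
int module_x_register_tests(void)
{
    CU_pSuite suite = CU_add_suite("module_x", NULL, NULL);

    if (!suite)
        return -1;
    if (!CU_add_test(suite, "new rejects NULL deps",
                     test_new_rejects_null_deps))
        return -1;
    if (!CU_add_test(suite, "create/destroy", test_create_destroy))
        return -1;
    return 0;
}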
This approach yields a test application per module, as opposed to a single application covering multiple modules.
Multiple test applications can be combined using GNU Automake's test suite support, which we'll talk about a bit more later on.
An alternative approach here would be to implement each module's API tests as a separate suite of tests in one single monolithic test application.
Testing an application
You may feel that application-level tests are somewhat out of scope for a post talking about unit testing.
We don't necessarily disagree!
However, depending on how you choose to structure your tests, application-level tests could usefully form part of a unit test suite. We think they're at least worth touching on here.
Using C to implement application-level tests is often error-prone and tedious. As such, the tooling used for testing at a module's API usually won't be a good fit for testing the application.
We much prefer using a higher-level language to implement application-level tests. At Katalix we've had a lot of success using Python for this.
Our approach is to implement a sandboxed environment for the application using Python classes which act as context managers. We then use the Python unittest module to implement tests against that sandboxed environment.
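To give a flavour of the pattern, here's a minimal sketch. The sandbox is deliberately simplified, and the myapp binary is hypothetical:

import os
import subprocess
import tempfile
import unittest

class Sandbox:
    """A scratch environment for running the application under test."""

    def __init__(self, app="./myapp"):
        # Path to the (hypothetical) application binary under test.
        self._app = os.path.abspath(app)

    def __enter__(self):
        self._dir = tempfile.TemporaryDirectory()
        return self

    def __exit__(self, *exc):
        self._dir.cleanup()

    def run_app(self, *args):
        # Run the application inside the sandbox directory.
        return subprocess.run([self._app, *args], cwd=self._dir.name,
                              capture_output=True, text=True)

class AppTests(unittest.TestCase):
    def test_version_flag(self):
        with Sandbox() as sb:
            result = sb.run_app("--version")
            self.assertEqual(result.returncode, 0)

if __name__ == "__main__":
    unittest.main()

A real sandbox would also set up configuration files, network namespaces, or whatever else the application needs, but the context-manager shape stays the same.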
Forming a test suite
As we mentioned earlier, our favoured approach for unit tests is to build lots of little test applications which we then glue together with GNU Automake's test suite feature.
Setting up Makefile.am for test support is trivial: just add the test application name to the TESTS variable, and Automake will do the rest for you! Tests can then be run using make check.
This approach works well as it provides a simple UI for a developer to run tests, and a consistent output indicating test success or failure.
It also generates a standard set of result indicators (the Automake .trs files) for each of the test applications. These can easily be analysed to generate test summaries for inclusion in reports, for example.
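Each .trs file is plain text with one :test-result: line per test result, so a quick pass/fail summary is a one-liner (a sketch, assuming the .trs files sit under the build tree):

find . -name '*.trs' | xargs grep -h ':test-result:' | sort | uniq -c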
Integrate memory analysis
We love the valgrind suite of tools.
For C and C++ coding in particular, where manual memory management is so important, valgrind is a fantastic aid to ensure common memory management bugs are avoided.
Valgrind is so useful that we integrate it into our test environment.
To do this, we use a pattern rule which matches test_app_name.vg and generates a script that runs the test application under valgrind with a canned set of arguments.
Here's an example:
noinst_PROGRAMS += module_x_test
module_x_test_SOURCES = src/module_x.c
module_x_test_SOURCES += test/module_x_test.c
module_x_test_SOURCES += test/test_app_boilerplate.c
TESTS += module_x_test module_x_test.vg
%.vg: %
	@echo "#!/bin/bash -x" > $@
	@echo -n "libtool e valgrind \
		-v \
		--leak-check=full \
		--show-leak-kinds=all \
		--track-origins=yes \
		--suppressions=project_valgrind_suppressions.supp \
		--error-exitcode=1 \
		--errors-for-leak-kinds=all \
		./$(patsubst %.vg,%,$@) " >> $@
	@echo '$$@' >> $@
	@chmod +x $@
The slightly strange-looking construct @echo '$$@' >> $@ injects the literal token $@ into the script, which passes any command-line arguments given to the script on to the test application. If your test app accepts command-line arguments, you can run the .vg script with the same arguments and it'll behave in the same way. This is often helpful for debugging!
By defining the TESTS variable like this, make check will build the C unit test application module_x_test alongside the script module_x_test.vg. Both will be run as part of the test suite.
Although running both module_x_test and module_x_test.vg might seem like a waste of time, it can help to capture more bugs. Running the test application directly is more likely to find timing bugs, while the valgrind run finds memory bugs.
Keep tabs on test coverage
Unit tests are most useful when you understand how well (or not!) they exercise your code.
Keeping tabs on code coverage as you work helps to decide what tests to write, and may even help to inform how you design your modules.
For C projects, we use gcc's gcov tool, which can be combined with LCOV in order to generate nice HTML reports for easy analysis of test coverage.
Because LCOV and the underlying gcov tool require special build options, it's helpful to write a short script which automates the following steps (a sketch follows the list):
- Running make clean to remove existing build outputs;
- Building the project using gcc, with -g -O0 --coverage added to the CFLAGS and -lgcov added to the LDFLAGS. This adds the instrumentation to the built binaries which the coverage analysis tools depend upon;
- Running the unit tests using make check;
- Finding all the instrumentation output files in the directory tree, and running lcov on them to generate coverage information;
- Running the genhtml tool on the coverage information to produce the final report.
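A minimal sketch of such a script, assuming an Automake-based build like the one above (the output file and directory names are illustrative):

#!/bin/bash -e
# Rebuild from clean with gcov instrumentation, run the test suite,
# then turn the results into an HTML report.
COVFLAGS='-g -O0 --coverage'
make clean
make CFLAGS="$COVFLAGS" LDFLAGS='-lgcov'
make check CFLAGS="$COVFLAGS" LDFLAGS='-lgcov'
# Gather the .gcda/.gcno instrumentation output scattered through the tree.
lcov --capture --directory . --output-file coverage.info
# Write the report; open coverage-report/index.html to browse it.
genhtml coverage.info --output-directory coverage-report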
Conclusions
This post has explored our approach to unit testing a C project.
We've used this methodology on small to medium-sized code bases (~60k lines of code) with great success.
We hope you've found this post helpful, or at least food for thought. And if you're looking for someone to help you get unit testing up and running for your C project, please do drop us a line and we'd be happy to help!