A firmware project's build environment is an integral part of the project itself. Many engineers only consider the source code when thinking about their project, but how that source code is built and tested is just as important.
Here's an example of a standard build environment for a Cortex-M project:
- arm-none-eabi-gcc - Compiler
- Make - Build tool for invoking the compiler
- CMake - Build manager for configuring the project's build settings
- Python - For running scripts (e.g. creating a version file that isn't tracked by the VCS, running git hooks)
- Ceedling - Unit testing framework
- protoc/nanopb - Compiler and plugin for managing serialized data structures on embedded systems
To begin developing on this example project, you would need everything listed above installed on your system. In theory that seems okay, but in practice it can be a huge burden.
The reason is that, even before considering different OSes, everyone's host system is slightly different, so there is no guarantee that everything will install and run properly on any given machine. For example, let's say you have {% c-line %}arm-none-eabi-gcc v9.3.1{% c-line-end %} already installed on your host machine, but our example project requires {% c-line %}arm-none-eabi-gcc v7.2{% c-line-end %}. Getting {% c-line %}v7.2{% c-line-end %} installed without breaking other things could be tricky.
Then there are the changes you might make to your host system that could unintentionally break your build environment. For example, maybe you just finished updating your OS, only to find out that it updated a bunch of dependencies and now your build system is totally borked. Suffice it to say, a build system that is non-portable AND fragile is not only bad for team efficiency, it's bad for morale.
Fortunately, there is an elegant solution to the problem of sharing build environments: "containerization". Containerization is the idea that instead of installing applications directly onto your host machine, you install them into self-contained "virtual containers" that include all the dependencies the application needs to run. This decouples your application from your host machine, making it robust to changes outside the container (like OS updates). This might sound a lot like a virtual machine, but unlike a virtual machine, containers rely on the underlying operating system and don't require a hypervisor, making them much more lightweight and portable.
One platform that enables this technology is Docker. Much as git has become synonymous with distributed version control, Docker has become synonymous with software containerization, or "Docker-ization". For a more thorough description of what Docker is and how it works, see https://docs.docker.com/get-started/overview/
So how does containerization via Docker make life easier for embedded engineers? The answer is that with Docker, everything you need to build and test your project can be stored in an easily shareable (and hostable) Docker Image.
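To make this concrete, here is a minimal sketch of what a Dockerfile for the example project above could look like. The base image, package names, and version choices are illustrative assumptions, not requirements; a real project would pin the exact toolchain release it depends on:

```dockerfile
# Sketch only: the base image and package versions below are assumptions.
# A real project would pin the exact arm-none-eabi-gcc release it needs,
# e.g. by fetching a specific ARM release tarball instead of the apt package.
FROM ubuntu:22.04

# Host-side tools from the list above
RUN apt-get update && apt-get install -y --no-install-recommends \
        gcc-arm-none-eabi \
        make \
        cmake \
        git \
        python3 \
        python3-pip \
        ruby \
        protobuf-compiler \
    && rm -rf /var/lib/apt/lists/*

# Ceedling ships as a Ruby gem; nanopb's generator is available on PyPI
RUN gem install ceedling \
    && pip3 install --no-cache-dir nanopb

# Expect the project source to be bind-mounted here at `docker run` time
WORKDIR /project
```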
Finally, here's what we get by moving our local build environment into a Docker Container:
- No more manual installation of software in order to build a project
- Eliminates the "But it builds on my computer" scenario once and for all
- Makes build and test automation easy because you only need to share a single Docker Image (which most modern build servers like GitHub Actions and BitBucket Pipelines will happily accept)
- You can say with >99% confidence that your build server, your co-worker's machine, and your other work computer are creating assets (aka .bin/.hex/.elf files) identical to the ones from your work machine, as long as everyone is using the same Docker Image (see the example run below)
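For instance, once an image like the one sketched above is built (or pulled from a shared registry), a teammate or a CI runner can reproduce the exact same build with nothing installed locally but Docker itself. The image name and build target below are hypothetical:

```sh
# Build the image from the Dockerfile sketch above (or `docker pull` a shared one)
docker build -t example/fw-build-env .

# Run the project's normal build inside the container; mounting the
# current directory at /project puts the output .bin/.hex/.elf on the host
docker run --rm -v "$PWD":/project example/fw-build-env make
```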
If this sounds amazing (and why wouldn't it?), you can learn more about Docker via the overview linked above.