Standing on the shoulders of giants

Whether we know it or not, we build things based on the work done before us. This is true in every aspect of life, and programming is no different.

Every few weeks at Niteo, we developers have a session where we talk about new things we’ve learned, cool libraries and tools we’ve discovered, and so on.
Last week Nejc talked about optimising the Nix build for Pareto Security. If you use Nix to build your environment, I strongly suggest reading his post.

On EBN we also have a long deployment time, and his talk got me thinking about the Nix build on EBN. It usually takes around 30-40 minutes for the EBN app to be deployed. Unfortunately, we didn’t measure Heroku deployments until this point, so I don’t have an exact number for that. What I do know is the size of the Docker image: it’s a whopping 1.95GB.

I know the EBN stack is big, but I had a feeling (based on Nejc’s blog post) that there is also something in our build causing such a large image. So I started digging, exactly as Nejc described in his blog post: first by producing a local build and feeding it into nix-store to analyse what was produced (again, see Nejc’s blog post for step-by-step instructions).
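Roughly, that first step looks something like this (a sketch; the exact nix-build invocation depends on how default.nix is set up):

    # Build the derivation locally; nix-build leaves a ./result symlink behind.
    nix-build default.nix

    # List the full runtime closure of the build output.
    nix-store --query --requisites ./result

    # Or just count the dependencies.
    nix-store --query --requisites ./result | wc -l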

nix-store produced ~1.7k lines of dependencies. I was overwhelmed at first. I tried to represent the data visually, but it was even worse. I couldn’t wrap my head around it. After looking at the dependencies for a few hours, I took a break and resumed work the next day. Taking a break always helps, especially with a complex task where I need to “see” things.

I realised that I couldn’t solve this problem by looking at the dependency graph, because it’s simply too big. EBN has too many dependencies and is way too complex to see what doesn’t belong in there.

The next thing I did was build the actual image, run the container and connect to it. I knew that the final image has the required binaries copied over from the first build, so if I could see which folders were the largest, I could go from there.
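In practice that boils down to something like this (the image tag is a placeholder, and it assumes the image ships a shell at /bin/sh):

    # Build the image from the repository root.
    docker build . --tag ebn-debug

    # Start a throwaway container and open a shell inside it.
    docker run --rm -it ebn-debug /bin/sh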

I ran du -hs /nix/store/* | sort -h, which showed me which folders take up the most space. The biggest were gcc-10.3.0 (177M) and glibc-locales-2.32 (215M). I didn’t doubt that we need gcc in the first step (when building the production environment), but I was almost certain that we don’t need it in the second (final) image that actually runs in production. The whole point of building a Docker image in two steps is to end up with an image that contains only the things it needs to run the app.
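The culprits stood out straight away; the session looked roughly like this (the store hashes are placeholders and the output is trimmed to the two relevant lines):

    du -hs /nix/store/* | sort -h | tail -n 5
    ...
    177M    /nix/store/<hash>-gcc-10.3.0
    215M    /nix/store/<hash>-glibc-locales-2.32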

After another few hours of testing, prototyping and looking at the code (mostly the Dockerfile and default.nix), I spotted gcc, locale and glibcLocales in the commonDependencies list in default.nix.
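If you want to check your own Nix expression for similar stowaways, a quick grep is enough (commonDependencies is just the name of the list in our default.nix; yours will differ):

    # Show every place the suspicious packages are referenced.
    grep -n -E 'gcc|locale|glibcLocales' default.nix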

This looked strange, because we don’t have any C code. Maybe some other dependency needs them, but in that case those dependencies would list gcc themselves. I also saw a comment in our code indicating that we need glibcLocales for setting locale-archive. I googled what locale-archive is, and again I wasn’t convinced that we need it.

To be honest, this happened in two steps: first I “found” gcc, and only later locale and glibcLocales, but it sounds more dramatic if I say I found them all at the same time.

So the only way to find out whether we needed these three packages was to remove them and build the Docker image. After removing them, I could still create the image (docker build . --tag app-optimisation) and run the container, and there was no trace of gcc-10.3.0 or glibc-locales-2.32 anymore. All the tests passed and the review app worked. The image size was reduced by a third (from 1.95GB to 1.3GB), which in turn cuts the deployment time, and that was our goal.
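The verification loop boiled down to a rebuild and a couple of checks (a sketch; the test suite and review app steps are specific to our setup and omitted, and it again assumes a shell is available in the image):

    # Rebuild the image without gcc, locale and glibcLocales.
    docker build . --tag app-optimisation

    # Confirm the big store paths are gone.
    docker run --rm app-optimisation sh -c "du -hs /nix/store/* | sort -h | tail"

    # Compare the image size with the previous build.
    docker images app-optimisation --format "{{.Repository}}  {{.Size}}"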

This optimisation would probably not have been possible without Nejc’s first dive into the Nix build. And this is what knowledge sharing is all about: when you discover something, please share it with your colleagues, maybe even write a blog post. Who knows, it just might be helpful for someone like me.

Gašper Vozel

Gašper is the Project & Tech Lead of Easy Blog Networks, responsible for keeping our servers humming along nicely.
