My team is using the balenalib/rpi-raspbian:stretch and balenalib/rpi-raspbian:buster images as builder images for building omnibus-gitlab packages for Raspbian. While adding buster support, I noticed that configuring and compiling the software (git, ruby) takes more time on buster than on stretch.
I expect compilation to take longer under QEMU, as the balenalib Docker images use it, but it's not clear why the buster image takes more time than stretch to build the same software. Has anyone run into this before?
Obviously, different versions of GCC use different default compile-time flags, but as you say, the really interesting bit here is the difference in time between Stretch and Buster using the same version of GCC.
Have you added any finer-grained timing data to the builds? Using time for each Dockerfile RUN would be an interesting test, as it would show how long each step is taking individually. I see there are also pip and gem installations taking place, and I'm curious whether those take the same amount of time in both versions, or whether they also differ.
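To illustrate, something like this sketch could be dropped into the builder Dockerfile; the package and gem names here are just placeholders for whatever the real build installs:

```shell
# Hypothetical builder Dockerfile snippet (illustrative package names).
# Prefixing each RUN command with `time` makes the build log show a
# real/user/sys breakdown per step, so stretch and buster can be compared.
FROM balenalib/rpi-raspbian:buster

RUN time apt-get update && \
    time apt-get install -y build-essential

RUN time gem install bundler

RUN time pip install pygments
```

Comparing the same Dockerfile built from the stretch and buster base tags would then show whether the slowdown is concentrated in compilation or spread across the package installs too.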
It appears that the issue may be QEMU emulation, because builds for both ARM Debian stretch and buster are pretty slow compared to x86 Ubuntu. Here is the data:
This is my experience too. I have found it's actually faster to configure some swap on a Raspberry Pi 4 and build an Erlang container there than it is to build it on x86 using emulation (although you wouldn't believe how many SD cards I've killed doing this). The emulated builds also seem to consume much more RAM.
If you use balena push (or the deprecated git push workflow), then your containers will be built on our native ARM-based build farm. Otherwise, I've also (personally) had good results using AWS EC2 A1 instances.