How to control architecture used by builders?

Hello, I was unable to post my question here, so I have put it in a pastebin instead:
Thanks in advance for any assistance you can offer.


FYI, the diagnostic this forum displayed when I attempted to post to it was (vaguely, from memory): “Sorry, new users are only allowed to post 2 URLs.”

My post contains only two URLs, so it is quite unclear why the post process failed. Perhaps the URL detection is not working as well as it should.

Aha, I have figured out how to defeat this forum. Please convert all instances hereinafter of the capitalised name of a tasty, yellow fruit to the character known as period, full stop and dot.


I have an application with architecture==armv7hf (i.e., 32-bit with an FPU), containing one device (a Raspberry Pi 3B+, which supports both 32-bit armv7hf and 64-bit arm64), and a docker-compose file that says in relevant part:

    image: postgres:9

Note that, as far as I can tell from:
… this ‘postgres’ image is a multi-architecture image that supports arm32 v7, although this is not explicitly stated anywhere that I can find.
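One way to make that support explicit is to read the tag's manifest list with `docker manifest inspect postgres:9`. Since that needs a Docker daemon, the sketch below instead parses a trimmed, hypothetical excerpt of such a manifest list; the JSON shape matches what Docker Hub returns for multi-arch tags, but the entries shown are assumptions, not the real output.

```shell
# Hypothetical, trimmed excerpt of `docker manifest inspect postgres:9`
# output; the real manifest list is much longer.
cat > /tmp/manifest.json <<'EOF'
{"manifests":[
  {"platform":{"architecture":"amd64","os":"linux"}},
  {"platform":{"architecture":"arm","os":"linux","variant":"v7"}},
  {"platform":{"architecture":"arm64","os":"linux","variant":"v8"}}
]}
EOF
# List the advertised architectures; "arm" with variant v7 is the
# 32-bit armv7hf build the application needs.
grep -o '"architecture":"[^"]*"' /tmp/manifest.json
```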

What I would like to happen is

  • The build tool (Balena, Docker [Compose], the builder, or whatever else reads this docker-compose file) shall determine the target platform (in this case, the builder shall see that the application’s architecture is set to armv7hf)
  • The build tool shall build images that are compatible with the target architecture.

What actually happens is

  • The builder produces images containing arm64 binaries, even when targeting an application with architecture ‘armv7hf’
  • The image download wizard thingy in the console spits out a 32-bit armv7hf ResinOS image when you add a ‘raspberry pi 3’ device to your project
  • As a result, when the device downloads and attempts to run the newly-built containers, it attempts to run 64-bit ARM binaries on a 32-bit ARM kernel running on 64-bit-capable ARM hardware, which can never work
  • So containers restart continually and spam the logs with ‘exec format error’
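The failure mode above reduces to a small compatibility table: `uname -m` reports the kernel's execution mode (armv7l on a 32-bit ResinOS kernel, even on 64-bit-capable silicon), and an aarch64 binary on an armv7l kernel is exactly the combination the kernel refuses to exec. A minimal sketch of that table:

```shell
# Sketch: which (kernel arch, binary arch) pairs can exec.
# armv7l  = 32-bit ARM (what `uname -m` reports on 32-bit ResinOS);
# aarch64 = 64-bit ARM. A 64-bit kernel may run 32-bit binaries if
# 32-bit compat support is enabled, but never the reverse.
kernel_can_run() {
  case "$1:$2" in
    armv7l:armv7l)   echo yes ;;
    armv7l:aarch64)  echo no  ;;  # the 'exec format error' case above
    aarch64:armv7l)  echo yes ;;  # assumes compat enabled in the kernel
    aarch64:aarch64) echo yes ;;
    *)               echo unknown ;;
  esac
}
kernel_can_run armv7l aarch64   # prints: no
```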

I don’t understand why the builder is producing arm64 binaries. This appears to me to be a bug, as I did not specify ‘arm64’ as the target platform for my application (I don’t even know of a way to do that).


  • Is there a way to force the builders to build a certain architecture to work around this?
  • Alternatively, if there’s no way to force the builder to use the correct architecture, can I build images locally instead, bypassing the builder entirely? (My assumption here is that Docker will choose the correct architecture.)
  • Alternatively, is there an arm64 build of ResinOS? Migrating both the kernel and userspace to 64-bit on this dual-arch-capable hardware would also be a workaround.
  • Is this simply due to the following TODO item?

Another idea - if I replaced the 32-bit-only kernel in the ResinOS ARM32 image with a 64-bit one that has support for 32-bit userspace enabled, then I would have 32/64-bit hardware, a 32/64 bit kernel, a 32-bit ResinOS userspace including Balena, and 64-bit container images. Would this combination work?
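On the build-locally option above: with Docker's buildx, the target platform can be pinned explicitly so Docker does not default to the build host's architecture. A sketch, shown as a dry run that only prints the command (drop the echo to execute it for real); the image name is a hypothetical placeholder:

```shell
# Sketch: pin the target platform for a local build with docker buildx.
PLATFORM=linux/arm/v7            # 32-bit armv7hf
IMAGE=myregistry/mydb:latest     # hypothetical image name
echo docker buildx build --platform "$PLATFORM" -t "$IMAGE" --push .
```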

Hi, still reading up on what you’ve posted, so we can understand and try to recreate what happens. Though, wouldn’t another workaround be to use the arm32v7/postgres image in your case?

Our ARM builders are aarch64, but when packages are actually compiled, they should emit binaries with the right architecture. If you are pulling in the image like that, it is not clear to me that any compilation happens, so maybe it comes down to how the image itself detects things? Since the service definition you set is an unmodified image, I think our builders don’t come into the picture at all; it should all happen at runtime…

I was hoping to avoid using architecture prefixes such as “arm32v7/postgres”, as I want to keep the docker-compose YAML and Dockerfiles architecture independent. But yes, that would be a workaround. Thanks. BTW, I do know about the Dockerfile/docker-compose templates feature, %%RESIN_ARCH%% etc.
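For completeness, the explicit-prefix workaround looks like this in compose form (a minimal sketch; the service name `db` is an assumption):

```yaml
# docker-compose.yml — pin the 32-bit variant explicitly
services:
  db:
    image: arm32v7/postgres:9
```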

Your explanation is excellent - you may be right; this issue may have nothing to do with building the wrong image, and may instead be due to Balena pulling the wrong pre-built image. I’ll try to find out what Balena is doing.

At the moment, Balena’s logs seem insufficient to figure out which image(s) it pulls and why, so I’m intending to reverse-engineer Balena using strace and/or capture its network traffic using Wireshark.
This will be quite laborious, so if anyone knows of a better way to debug Balena’s choice of image, please share it with me.
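For the strace route, something like the following might work as a starting point. This is an untested sketch: the process name is an assumption (on ResinOS the supervisor may run under a different name, or inside its own container), and the grep patterns are guesses at what the registry traffic looks like.

```shell
# Sketch (untested): attach strace to the supervisor process and watch
# its network syscalls to see which registry endpoints and image tags
# it requests. Process name 'resin-supervisor' is an assumption.
strace -f -e trace=network -s 256 -p "$(pidof resin-supervisor)" 2>&1 \
  | grep -i -e registry -e postgres
```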