Balena CLI suddenly fails: "The requested image's platform (linux/amd64) does not match the detected host platform"

Was having the same issue yesterday - haven’t checked today.

The problem persists. I created separate fleets for our RPi3s and RPi4s and tried to push to these separately; I cannot push to either of them.

Any ideas from the folks at Balena? We’re stuck and cannot update any of our devices.


Thanks for reporting this!

I have managed to reproduce this issue and am currently looking into it.


@heinburgh @jpayne0061 @rodley could you guys share what base images you’re using here please? E.g. balenalib/raspberrypi3-python etc.?

To give a bit more context, what we’re finding is that very old, deprecated base images are the ones that are now failing, whilst newer and maintained ones seem to be working as normal.

Right, what seems to be the case is that images which are very old and no longer maintained (e.g. balenalib/raspberrypi3-alpine-node:6-3.6-20181025) have an architecture of linux/amd64. This is a problem when you push to a fleet with an ARM architecture, as the builders then detect a mismatch and fail. These images are in some cases 4 years old and long since deprecated, so we’re not going to go back and update them.

From our side, this is correct behaviour, but I appreciate that something that was working yesterday is now failing today, which is frustrating. In that case, aside from imploring you to use an updated and maintained image, which I know is not always possible in the available timescales, the best way forward may be to take the base image into your own hands. This means:

  1. On your local machine (with Docker installed and logged in to your own Docker Hub account - you’ll need to create one), create a new Dockerfile with just one line: a FROM instruction pointing at the old image. In the case of the example above, the file would just contain FROM balenalib/raspberrypi3-alpine-node:6-3.6-20181025.
  2. In the directory where you just made the Dockerfile, run docker buildx build . --platform linux/arm/v7 --tag <your DH username>/raspberrypi3-alpine-node:6-3.6-20181025. Note that we’re specifying the exact same repository name and tag as the original, but you can rename it to whatever you want at this point. The platform flag is important, as it specifies the architecture of the fleet you want to use.
  3. After the build is complete, run docker push <your DH username>/raspberrypi3-alpine-node:6-3.6-20181025.
  4. Update your original Dockerfile to point to the new image in your repository instead of the balenalib one.
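Taken together, the steps above look something like this as a shell session (a sketch assuming Docker with buildx is installed and you are logged in to Docker Hub; DH_USER and the rebase directory name are placeholders):

```shell
# Rebuild an old balenalib base image for the target architecture and
# publish it under your own Docker Hub account.
DH_USER=your-dh-username   # placeholder: replace with your Docker Hub username

mkdir -p rebase && cd rebase

# Step 1: a Dockerfile containing only the old base image
cat > Dockerfile <<'EOF'
FROM balenalib/raspberrypi3-alpine-node:6-3.6-20181025
EOF

# Step 2: rebuild it for the fleet's architecture (armv7 in this example)
docker buildx build . --platform linux/arm/v7 \
  --tag "$DH_USER/raspberrypi3-alpine-node:6-3.6-20181025"

# Step 3: push it to your own repository
docker push "$DH_USER/raspberrypi3-alpine-node:6-3.6-20181025"

# Step 4: update your project's Dockerfile to point at
# $DH_USER/raspberrypi3-alpine-node:6-3.6-20181025 instead of the balenalib one.
```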

The build should then work. Of course this means that the image won’t be updated, but in the case of these very old, deprecated images they aren’t being updated anyway.

Let us know if this helps.


I tried the steps listed above and the build succeeds:

[main]     Successfully built 20d4e5375886
[Info]     Uploading images
[Success]  Successfully uploaded images
[Info]     Built on arm01
[Success]  Release successfully created!
[Info]     Release: 614d8f253cb0308d6af2e6376fe9509e (id: 2490742)
[Info]     ┌─────────┬────────────┬────────────────────┐
[Info]     │ Service │ Image Size │ Build Time         │
[Info]     ├─────────┼────────────┼────────────────────┤
[Info]     │ main    │ 492.70 MB  │ 1 minute, 1 second │
[Info]     └─────────┴────────────┴────────────────────┘
[Info]     Build finished in 1 minute, 23 seconds
                            _.-(6'  \
                           (=___._/` \
                                )  \ |
                               /   / |
                              /    > /
                             j    < _\
                         _.-' :      ``.
                         \ r=._\        `.
                        <`\\_  \         .`-.
                         \ r-7  `-. ._  ' .  `\
                          \`,      `-.`7  7)   )
                           \/         \|  \'  / `-._
                                      ||    .'
                                       \\  (
                                        >\  >
                                    ,.-' >.'

I used the following Dockerfile:

FROM balenalib/raspberrypi3-alpine-node:8.17.0-build-20200528

I built it with docker buildx build . --platform linux/arm/v7 --tag rahulthakoor/raspberrypi3-alpine-node:8.17.0-build-20200528 and pushed it to Docker Hub.

Thanks @chrisys

Hey chrisys - it was indeed an “old” image problem for me (Go 1.14). That pilot error admitted, there are some things for Balena to think about here.

  • My fleet has dependencies (GPIO, GRPC) that are a royal pain in the butt to migrate forward. I need images to stay available basically forever. What’s the downside to Balena in leaving images available? Imho, images should have the same availability guarantee as Fin.
  • I’m maintaining a public fleet. When you “disappear” an image it breaks that public fleet. This seems like a bad thing for BalenaHub as much as it is a bad thing for me. Perhaps a little consideration for BalenaHub users?
  • Where do we publish the warning that my build is about to blow up? The only warning I get now is when my nightly build fails.
  • As this thread makes abundantly clear, we need better error messages when our base image has been “disappeared”. I’m sitting here with a build that worked yesterday and fails today with a completely misleading error message and I didn’t change anything. The only reasonable conclusion is that Balena itself is broken.
  • Finding base images - I’ve been doing Balena for a long while now and I only recently found the actual base images URL (base-images/balena-base-images at master · balena-io-library/base-images · GitHub). This link needs to be posted all over the place, so dopes like me who are lucky enough to guess that they have a disappeared-image problem can easily check whether their image has been ‘disappeared’.
  • “Disappeared image” detection, to me, should be a preprocessing check in balena push: fewer than 100 lines of code that produces one of two messages - “Image found, proceeding” or “Image NOT found, aborting”.
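A minimal sketch of the kind of preflight check described in that last point, assuming Docker is installed locally (this is not part of the balena CLI, and the Dockerfile parsing is deliberately naive - it only looks at the first FROM line):

```shell
#!/bin/sh
# Demo: naive pre-push check for a "disappeared" base image.
# A sample Dockerfile is created here just so the demo is self-contained.
cat > Dockerfile <<'EOF'
FROM balenalib/rpi-raspbian
EOF

# Extract the image named in the first FROM instruction
image=$(awk 'toupper($1)=="FROM" {print $2; exit}' Dockerfile)
echo "Base image: $image"

# Ask the registry whether the tag still exists; docker manifest inspect
# exits non-zero when the image cannot be found.
if docker manifest inspect "$image" >/dev/null 2>&1; then
  echo "Image found, proceeding"
else
  echo "Image NOT found, aborting"
fi
```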

My two cents.


@chrisys I am currently trying to use balenalib/rpi-raspbian (linux/arm/v6) as the base image, which appears to be maintained.

The target device is an rpi 4, 64 bit.

I noticed this message in the build logs: The requested image's platform (linux/arm/v6) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested

Later, I get an error: [balena-cam] failed to get destination image "sha256:e1a6df702de55efd1d12527db6bf6b5fe4e8a6276de3404ead94a940c158b3f5": image with reference sha256:e1a6df702de55efd1d12527db6bf6b5fe4e8a6276de3404ead94a940c158b3f5 was found but does not match the specified platform: wanted linux/arm64/v8, actual: linux/arm/v6

However, a 32-bit image (linux/arm/v6) should be able to run on a 64-bit OS (in this case, arm64/v8).

Am I missing something?


Hey @rodley, I hear all your points - thanks for sharing everything here. There’s no real downside to keeping the images available for historical reasons, but it does cause issues such as this one when we’re trying to push forward with the development of the builder and engine while being held back by continuing to support very old things. We also have to pay storage costs, but we’d rather continue to provide a resource that helps folks like you keep building. Just to be clear though: no images have been removed here, nor is there any current plan to remove them!

As for finding the base images, we do have plans to improve that. It really needs a search/filter tool. They have been too hard to find for a very long time!

@jpayne0061 thanks for the extra detail here - given that the use case you’ve detailed is also affected, it does appear that we’ve regressed here. It should be possible to use 32-bit containers on a 64-bit device. We’ll continue to look into it.

Can you confirm the default device type on your application or fleet?

I think your fleet is set to an aarch64 default device type? That’s what our builders are using when pushing releases. However, as you saw in the logs, the base image is armv6, so the builder is complaining.

You are correct that this should work, aarch64 is definitely capable of running armv6 images. We are investigating internally to see what can be done to resolve this regression.

In the meantime, if you try pushing the same project to a fleet with an armv6 default device type, does it work?
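One more thing that may be worth trying (an assumption on my part - I haven’t confirmed the balena builders honor it): BuildKit-style Dockerfiles let you request a platform explicitly in the FROM line, which can sidestep automatic host-platform detection:

```dockerfile
# Explicitly request the 32-bit variant even when the builder detects an
# arm64 host; whether this works depends on the builder honoring the flag.
FROM --platform=linux/arm/v6 balenalib/rpi-raspbian
```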

Also, could you share your Dockerfile FROM line?

Thanks for the response. Balena rocks.


Here is the Dockerfile FROM line:

FROM balenalib/rpi-raspbian

As for device type, it is Raspberry Pi 4 (using 64-bit OS).

The architecture is aarch64

I’m quite new to balena, so I don’t know if these are the same as default device type. Not trying to be pedantic! I really do not know.



@klutchell forgot to address your other question

In the meantime, if you try pushing the same project to a fleet with an armv6 default device type, does it work?

I don’t have access to a fleet with armv6 at this moment, sorry

If you make a new fleet (for testing) and select Raspberry Pi 3 (not the 64-bit version) as the default device type, I suspect that you should be able to push to that fleet as a workaround while we investigate.

@klutchell oh, I see what you’re saying. Thank you

We are currently experimenting with balena and this isn’t a production issue for us.

I’ve since selected another image to experiment with

By the way, I am impressed with the product so far. It’s awesome!



I have followed the steps above and successfully created a release image for one of our fleets. However, when trying to create a release image for another fleet with the same default device type (aarch64), I am still getting the same error, despite using the newly created base image:
failed to get destination image "sha256:9fbae44e7414963313d4a0f2ba517eecfe9b39289e5526afebf6d50e79d43b62": image with reference sha256:9fbae44e7414963313d4a0f2ba517eecfe9b39289e5526afebf6d50e79d43b62 was found but does not match the specified platform: wanted linux/arm64/v8, actual: linux/amd64

Here are the steps I took to create the docker build:
Created a Dockerfile with the line:
FROM balenalib/raspberrypi4-64-ubuntu-python:3-bionic-build-20200915
Then ran:
docker buildx build . --platform linux/arm64/v8 --tag sophiahaoui/raspberrypi4-64-ubuntu-python:3-bionic-build-20200915
Then pushed the build:
docker push sophiahaoui/raspberrypi4-64-ubuntu-python:3-bionic-build-20200915

The pushed image (sophiahaoui/raspberrypi4-64-ubuntu-python:3-bionic-build-20200915 on Docker Hub) does appear to have the correct architecture, linux/arm64.

Could there be any other fleet settings that could be causing an issue here?

Thank you for the additional investigation, Sophia. We are still investigating on our side and that is a big help!

Just to be clear: the error you are reporting happens at the end of a balena push to a fleet, correct?
Does it appear from the logs that all the steps are successful and the error happens at the end of the build, after “Successfully uploaded images” is reported?

Would you mind sharing the logs from your balena push attempt, and maybe the relevant parts of the Dockerfile you are pushing to the fleet?

@sophiahaoui did you then update your original Dockerfile to point to your new image? It sounds like it’s still pulling the old image if it’s reporting linux/amd64 - I agree, your new one looks correct and shows linux/arm64. Alternatively, share your Dockerfile as my esteemed colleague suggested above 🙂

Here are the logs from the attempted build

Packaging the project source...
[Warn]    -----------------------------------------------------------------------------------------
[Warn]    The following .dockerignore file(s) will not be used:
[Warn]    * /Users/sophiahaoui/NewSunRoad/Solsense/balena/solmon/node_modules/sqlite3/.dockerignore
[Warn]    When --multi-dockerignore (-m) is used, only .dockerignore files at the
[Warn]    root of each service's build context (in a microservices/multicontainer
[Warn]    fleet), plus a .dockerignore file at the overall project root, are used.
[Warn]    See "balena help push" for more details.
[Warn]    -----------------------------------------------------------------------------------------
[Info]     Starting build for customers_gen25, user shaoui
[Info]     Dashboard link:
[Info]     Building on arm01
[Info]     Pulling previous images for caching purposes...
[Success]  Successfully pulled cache images
[solmon]   Step 1/30 : FROM sophiahaoui/raspberrypi4-64-ubuntu-python:3-bionic-build-20200915 as build
[solmon]    ---> 17ea33c69306
[solmon]   Step 2/30 : WORKDIR /usr/src/app
[solmon]   Using cache
[solmon]    ---> 9fbae44e7414
[solmon]   Step 3/30 : COPY /lib/requirements3.txt /lib/requirements3.txt
[solmon]   failed to get destination image "sha256:9fbae44e7414963313d4a0f2ba517eecfe9b39289e5526afebf6d50e79d43b62": image with reference sha256:9fbae44e7414963313d4a0f2ba517eecfe9b39289e5526afebf6d50e79d43b62 was found but does not match the specified platform: wanted linux/arm64/v8, actual: linux/amd64
[Info]     Uploading images
[Success]  Successfully uploaded images
[Error]    Some services failed to build:
[Error]      Service: solmon
[Error]        Error: failed to get destination image "sha256:9fbae44e7414963313d4a0f2ba517eecfe9b39289e5526afebf6d50e79d43b62": image with reference sha256:9fbae44e7414963313d4a0f2ba517eecfe9b39289e5526afebf6d50e79d43b62 was found but does not match the specified platform: wanted linux/arm64/v8, actual: linux/amd64
[Info]     Built on arm01
[Error]    Not deploying release.
Remote build failed

And here are the Dockerfile lines for those first steps that get through:

FROM sophiahaoui/raspberrypi4-64-ubuntu-python:3-bionic-build-20200915 as build
WORKDIR /usr/src/app

COPY /lib/requirements3.txt /lib/requirements3.txt
RUN pip3 install -r /lib/requirements3.txt

Let me know what else would be helpful!