"Error: no matching manifest for linux/arm64/v8 in the manifest list entries" on GitHub Action

We recently started seeing failures when trying to push to our balenaHub fleet from our GitHub Actions flow. To my knowledge, we haven’t changed anything that should affect this.

The error we’re getting is:

Error: no matching manifest for linux/arm64/v8 in the manifest list entries

Here’s an example of a failed run.

The actual GitHub Actions workflow can be found here.

We haven’t spent much time trying to debug this internally yet, but wanted to see if others are running into the same thing (perhaps due to an update to the balena CLI or similar).

Hi there,

I haven’t seen this happen myself, using the deploy-to-balena action at master. The action also hasn’t been updated for a week, and the CLI hasn’t been updated in three weeks. When did you notice your GitHub Actions workflow stop working?

It looks like the builder can’t find a matching arm64 image for each of the images that you specify in your docker-compose file. From the check run you shared, it looks like you are building these images with upstream actions?
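A quick way to check which platforms a published image actually provides (assuming you have buildx available; the image name below is just a placeholder to swap for one from your compose file) is:

# Placeholder image name - swap in one of the images from your compose file.
docker buildx imagetools inspect alpine:latest

If linux/arm64/v8 (or linux/arm64) doesn’t appear in the Manifests output, the builder has nothing to match for a Pi 4.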

I had a brief look through the logs and saw this:

# We use the same target for Pi 3 and Pi 4 for now.
  else
    echo 'TARGET_PLATFORM=linux/arm/v7' >> $GITHUB_ENV
  fi

^ I haven’t got all of the context here of course, but I would ensure that the images you are creating are targeted at the arm64 architecture, as that’s what the Pi 4 uses
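For example (a rough sketch only, since just the else branch shows in the logs; the BOARD variable and the pi4 value are guesses on my part), the conditional could gain an arm64 branch:

# Hypothetical reconstruction - adjust the condition to match your workflow.
if [ "$BOARD" = "pi4" ]; then
  # The 64-bit OS on the Pi 4 expects arm64 images.
  echo 'TARGET_PLATFORM=linux/arm64/v8' >> $GITHUB_ENV
else
  echo 'TARGET_PLATFORM=linux/arm/v7' >> $GITHUB_ENV
fi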

When did you notice your GitHub Actions workflow stop working?

The last successful run was three weeks ago. We’ve since had a number of failed runs but never got around to investigating them more closely. The first failed run happened about a week ago.

I haven’t got all of the context here of course, but I would ensure that the images you are creating are targeted at the arm64 architecture, as that’s what the Pi 4 uses

We’re using the 64-bit OS for the Raspberry Pi 4; however, we are using 32-bit images so that they can be shared across the Pi 3 and Pi 4. We are going to revisit this in the future, but this setup has definitely worked before and was indeed the same during the last successful run.

It would be tricky to identify exactly where the change or break came from. It seems, though, that you are using 64-bit Pi images? If the containers being referenced are the ones on Docker Hub, then it seems there aren’t v8 builds of those images; some are only v6.
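For what it’s worth, assuming those images are public, you can list the manifest entries without pulling anything (on older Docker versions, docker manifest may need the experimental CLI enabled):

# debian:buster is just an illustrative multi-arch image.
docker manifest inspect debian:buster

Each entry in the output has a platform object with an architecture and, for ARM, a variant field such as v6, v7, or v8 - which is exactly what the builder is matching against.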

If that error is occurring immediately though, from the cached images, then that is unusual indeed.

Happy to help try to get to the bottom of it, although there may be a better way to simplify the build process. I see there is some use of BOARD_TAG in places, and some sed use too. It is tricky to unpick it all, but for the most part I wonder if there is a way to do the builds without these steps.

The Cloud platform will support multi-arch images, so hopefully there will be no need to think about architecture types in your build processes. For example, you could use FROM alpine in a Dockerfile and push to any of the fleets you maintain (any architecture), and the CLI and Cloud will use the appropriate alpine image automatically.

Similarly, if you build images on Docker Hub (although I tend to opt for the GitHub Container Registry to avoid needing multiple platforms and credentials) and they are built as multi-arch, then you can just reference screenly/srly-ose-wifi-connect and it will pull the appropriate architecture automatically.
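As a minimal sketch of the FROM alpine idea above (the curl package is just an example):

# One Dockerfile, no architecture-specific tags: the build picks
# the right alpine variant for whatever device type the fleet uses.
FROM alpine
RUN apk add --no-cache curl
CMD ["curl", "--version"]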

To clarify, it would involve switching base images from the balenalib ones to the official buster images. The balenalib images are convenience images, but when pushing to multiple fleet architectures, as in your case, you may find it better to use the official images instead.
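Assuming the current Dockerfiles use a device-specific balenalib base, the switch would look something like this (image names are illustrative):

# Before: device-specific convenience image.
# FROM balenalib/raspberrypi3-debian:buster

# After: official multi-arch image - the matching architecture
# is selected automatically at build time.
FROM debian:buster-slim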

Here is an example of a push to the GitHub Container Registry for multiple architectures. The same actions and process can be used with Docker Hub if you change the login credentials and the ghcr.io reference. It will build for all the specified architectures under one multi-arch image.
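In case the linked example isn’t handy, the core of it, sketched with buildx directly (OWNER and IMAGE are placeholders, and the token is whatever secret you use for registry auth):

# One-time setup: register QEMU emulators and create a builder.
docker run --privileged --rm tonistiigi/binfmt --install all
docker buildx create --use

# Log in to GHCR, then build and push a single multi-arch image.
echo "$GITHUB_TOKEN" | docker login ghcr.io -u OWNER --password-stdin
docker buildx build \
  --platform linux/amd64,linux/arm/v7,linux/arm64/v8 \
  --tag ghcr.io/OWNER/IMAGE:latest \
  --push .

Pointing the login and tag at docker.io instead gives you the Docker Hub variant of the same flow.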

Thanks for the response! I tried reproducing the issue on my monorepo (using the least configuration possible). More specifically, the essential changes lie in this GitHub workflow, where I added linux/arm64/v8 as a target platform. Doing so seems to fix the issue (i.e., the “no matching manifest for linux/arm64/v8” error).
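For anyone hitting the same error, the shape of that change is roughly this (a sketch, not the exact workflow step):

# Hypothetical workflow step: target arm64 for the 64-bit Pi 4 OS.
echo 'TARGET_PLATFORM=linux/arm64/v8' >> $GITHUB_ENV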

P.S. You can take a look at this CI run for reference: Make it 64-bit. · nicomiguelino/anthias-poc@f38e5a4.


Success! Finally got a successful build.

Nico’s lead above is probably the proper solution, but I ended up refactoring our build process a fair bit in the process.


Great @vpetersson, if you have any details about your build process that you think might be relevant to other users hitting this issue, please feel free to share them here :smile:

Thanks, @Lizzieepton. We’ve sorted it out now. Let’s see if that can be turned into a blog post later on.