Failed to preload image with error "layers from manifest don't match image configuration"

We’re having serious issues updating the firmware on one of our devices. When updating the system, we get errors of the form:

Failed to download image 'registry2.balena-cloud.com/v2/27602604baf334b802ac81b898123c47@sha256:43164eb17d96b919c2b2cf706bc16f1ba7911fff526c1a4520791bbf59254b8d' due to 'layers from manifest don't match image configuration'

Our current course of action is therefore to re-flash the system with balenaOS. We tried creating an image preloaded with the latest release, but this appears to give the same error:

```
Pulling 7 images [============= ] 55%

  • Cleaning up temporary files
    layers from manifest don't match image configuration
```

This was done with balena-cli version 12.48.13. I could reproduce the error with genericx86-64-ext-2.58.6+rev1-dev-v11.14.0.img and genericx86-64-ext-2.83.18+rev1-dev-v12.10.3.img, for two different software releases (including one we consider ‘stable’).

Here’s the output when run with --debug on:

```
  • Resizing partitions and waiting for dockerd to start
    Resizing ext4 filesystem of partition n°6 of /tmp/tmp5ldjjehk/opt/resin-image-genericx86-64-ext.resinos-img using /dev/loop19
    File system OK
    Waiting for Docker to start…
    Docker started

Pulling 7 images [===== ] 20%
Cleaning up temporary files
layers from manifest don't match image configuration

Error: layers from manifest don't match image configuration
    at Stream.jsonStream.on (/snapshot/versioned-source/node_modules/docker-progress/build/index.js:30:27)
    at Stream.emit (events.js:198:13)
    at Stream.EventEmitter.emit (domain.js:448:20)
    at drain (/snapshot/versioned-source/node_modules/through/index.js:36:16)
    at Stream.stream.queue.stream.push (/snapshot/versioned-source/node_modules/through/index.js:45:5)
    at Parser.exports.parse.parser.onToken (/snapshot/versioned-source/node_modules/JSONStream/index.js:132:18)
    at Parser.proto.write (/snapshot/versioned-source/node_modules/jsonparse/jsonparse.js:135:34)
    at Stream. (/snapshot/versioned-source/node_modules/JSONStream/index.js:23:12)
    at Stream.stream.write (/snapshot/versioned-source/node_modules/through/index.js:26:11)
    at IncomingMessage.ondata (_stream_readable.js:710:20)
    at IncomingMessage.emit (events.js:198:13)
    at IncomingMessage.EventEmitter.emit (domain.js:448:20)
    at addChunk (_stream_readable.js:288:12)
    at readableAddChunk (_stream_readable.js:269:11)
    at IncomingMessage.Readable.push (_stream_readable.js:224:10)
    at HTTPParser.parserOnBody (_http_common.js:124:22)
    at Socket.socketOnData (_http_client.js:451:20)
    at Socket.emit (events.js:198:13)
    at Socket.EventEmitter.emit (domain.js:448:20)
    at addChunk (_stream_readable.js:288:12)
    at readableAddChunk (_stream_readable.js:269:11)
    at Socket.Readable.push (_stream_readable.js:224:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:94:17)
From previous event:
    at runCallback (timers.js:705:18)
    at tryOnImmediate (timers.js:676:5)
    at processImmediate (timers.js:658:5)
    at process.topLevelDomainCallback (domain.js:126:23)
From previous event:
    at preload.Bluebird.resolve.then (/snapshot/versioned-source/node_modules/balena-preload/build/preload.js:751:28)
```

Sounds like an architecture mismatch. You can sometimes get manifest issues if you pull, for example, an image designed for a Raspberry Pi and try to run it on Windows or macOS. The same applies to balena: if you pushed an image to a fleet in the cloud, and the fleet is configured for ARMv7 or ARMv8 devices (e.g. a Raspberry Pi 4), then trying to preload that into an image for a Raspberry Pi Zero (ARMv6) would fail.

You can replicate this on your local system: run a `docker run` command on your Mac for an image not compatible with your computer (e.g. an image built for a Raspberry Pi) and it will fail with a manifest issue at run time.
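For example (a sketch assuming an x86-64 host with Docker installed; `arm32v7/alpine` is a public ARMv7-only image on Docker Hub, used purely for illustration):

```shell
# Architecture the local Docker daemon runs on (e.g. amd64):
docker version --format '{{.Server.Arch}}'

# Pull an ARMv7-only image; the pull itself may succeed even on x86-64:
docker pull arm32v7/alpine

# The image configuration records the architecture it was built for:
docker image inspect arm32v7/alpine --format '{{.Architecture}}'

# Running it on an x86-64 host then fails (typically with
# "exec format error", unless QEMU emulation is configured):
docker run --rm arm32v7/alpine uname -m
```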

Check that the fleet configuration matches the image you are trying to preload. In this case it looks like you need a fleet configured for x86-64, the same architecture as the images you listed.
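As a sketch of how to compare the two sides (assumes the balena CLI and a locally pulled copy of your release image; the image name below is a placeholder):

```shell
# List balena device types and the architecture each one expects
# (genericx86-64-ext should be listed with an amd64 architecture):
balena devices supported

# Check which OS/architecture a locally pulled image was built for
# ("my-release-image" is a placeholder for your release's image):
docker image inspect my-release-image --format '{{.Os}}/{{.Architecture}}'
```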

When you create a fleet, the device type you select determines the fleet architecture: Get started with QEMU X86 64bit and Python - Balena Documentation

Hello, this also happens when layers get out of sync (e.g. after modifying layers from the host OS directly in /mnt/data/docker/…). To repair, you can identify which container uses the image (using `docker inspect ...`), then run `docker rm {{id}} --force` to remove the container and `docker rmi {{id}}` to remove the image. Afterwards, restarting the supervisor should get everything back in sync.
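The repair steps above could look roughly like this on the device (a sketch with placeholder IDs; on the balenaOS host the Docker-compatible CLI is `balena-engine`, and on older releases the supervisor service may be named `resin-supervisor` instead):

```shell
# Find any containers created from the broken image (placeholder ID):
balena-engine ps -a --filter ancestor=<image-id> --format '{{.ID}} {{.Names}}'

# Remove the container, then the image itself:
balena-engine rm --force <container-id>
balena-engine rmi <image-id>

# Restart the supervisor so it re-pulls the image and re-syncs state:
systemctl restart balena-supervisor
```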
