Sharing an image among services - Balena Engine not deleting old images on release update.

Hi,

We have an application that currently runs 5 services. 3 of the services share the same image and differ only in the command that they run (and some other docker-compose settings, like exposed ports). For example:

services:
  nautilus:
    build: ./nautilus
    image: nautilus
    restart: on-failure
    volumes:
      - 'resin-data:/data'
  ui-bridge:
    build: ./nautilus
    image: nautilus
    restart: always
    command: ["/home/point_one/bootstrap.sh", "./bin/ui-bridge"]
  ...

The generated release appears to work correctly - all 3 services reference the same image ID, use the same overlay layers, etc. Every time we push an updated release, though, balena images lists the new images alongside the old ones, which continue to take up space on the device. Running balena system prune does not delete the old images, even though no active container in the balena ps output is listed as using them.

root@f9e118b:~# balena ps
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS                PORTS                    NAMES
33a02fa42faa        1af718dfe938                         "/home/point_one/boo…"   2 minutes ago       Up 2 minutes                                   nautilus_2206721_1356717
59cf46bc699e        1af718dfe938                         "/home/point_one/boo…"   2 minutes ago       Up 2 minutes          0.0.0.0:3000->3000/tcp   ui-bridge_2206723_1356717
095332506e69        1af718dfe938                         "/home/point_one/boo…"   2 minutes ago       Up 2 minutes          0.0.0.0:2101->2101/tcp   ntrip-proxy_2206724_1356717
42a001aad40f        8c31e34d5931                         "nginx -g 'daemon of…"   8 hours ago         Up 8 hours            0.0.0.0:80->80/tcp       eos_2206725_1356717
4e2b1f218e62        2f43b4e7323f                         "python3 -m nautilus…"   2 days ago          Up 2 days             0.0.0.0:8888->8888/tcp   nemo_2206722_1356717
2f67bbb694c1        balena/aarch64-supervisor:v10.6.27   "./entry.sh"             2 weeks ago         Up 2 days (healthy)                            resin_supervisor
root@f9e118b:~# balena images
REPOSITORY                                                       TAG                      IMAGE ID            CREATED             SIZE
registry2.balena-cloud.com/v2/836293a54e44b17fcb72c8f11a4f5398   delta-ee860d766b8acb43   1af718dfe938        4 minutes ago       293MB   <-- active
registry2.balena-cloud.com/v2/224600f68724cc74e016c667709f5b57   delta-2c83a33ae6c12532   1af718dfe938        4 minutes ago       293MB   <-- active
registry2.balena-cloud.com/v2/40a5bc7acf43924671fe4af8de2a81cb   delta-c252b8df76ac5d8f   1af718dfe938        4 minutes ago       293MB   <-- active
registry2.balena-cloud.com/v2/a109cfea3a48bc31931e6cb3c8df43b9   delta-92b5a77553c23901   edeb35fc107e        8 hours ago         289MB   <-- old
registry2.balena-cloud.com/v2/e519ed74f1f8f35e418687297e32ac51   delta-a22c9e17d07fbe87   edeb35fc107e        8 hours ago         289MB   <-- old
registry2.balena-cloud.com/v2/e757f8325951704ae3f6b8d5ef4353a7   delta-73703e7dd92dfdc0   edeb35fc107e        8 hours ago         289MB   <-- old
registry2.balena-cloud.com/v2/24c04cfb263798610985df8587d2ddf0   delta-d96ec52b49d9b0da   8c31e34d5931        8 hours ago         42MB
registry2.balena-cloud.com/v2/4d2bfca86f10de84c5b4629b1518d38c   delta-da073983d890e884   f72ebba5a1bf        2 days ago          289MB   <-- old
registry2.balena-cloud.com/v2/ab07d5ef3f122fa3bf0ed3b0aa8819ce   delta-34683cd98f61813d   f72ebba5a1bf        2 days ago          289MB   <-- old
registry2.balena-cloud.com/v2/fcb317257bd460a845dd6d507314331c   delta-b871d04a9383fd4c   f72ebba5a1bf        2 days ago          289MB   <-- old
registry2.balena-cloud.com/v2/1625d6ad0a8234ed705c42331d9058e2   <none>                   2f43b4e7323f        3 days ago          259MB
balena-healthcheck-image                                         latest                   a29f45ccde2a        3 months ago        9.14kB
balena/aarch64-supervisor                                        v10.6.27                 634e52c7fa89        4 months ago        67MB

From the console output, it does look as though balena push builds all 3 copies of the same image separately, and it lists identical deltas for all 3 (rather than printing a delta size of 0 bytes for 2 of the 3).

[nautilus]     Step 1/9 : FROM pointonenav/atlas-nautilus-base:latest
[ntrip-proxy]  Step 1/9 : FROM pointonenav/atlas-nautilus-base:latest
[ui-bridge]    Step 1/9 : FROM pointonenav/atlas-nautilus-base:latest
...
[nautilus]     Successfully built 5e483a5b0227
[nautilus]     Successfully tagged nautilus:latest
[ntrip-proxy]  Successfully built fced1867ba79
[ntrip-proxy]  Successfully tagged nautilus:latest
[ui-bridge]    Successfully built 1af718dfe938
[ui-bridge]    Successfully tagged nautilus:latest
...
[Info]         ┌─────────────┬────────────┬────────────┬────────────┐
[Info]         │ Service     │ Image Size │ Delta Size │ Build Time │
[Info]         ├─────────────┼────────────┼────────────┼────────────┤
[Info]         │ nautilus    │ 279.60 MB  │ 43.25 MB   │ 15 seconds │
[Info]         ├─────────────┼────────────┼────────────┼────────────┤
[Info]         │ nemo        │ 246.97 MB  │ 0 bytes    │ 2 seconds  │
[Info]         ├─────────────┼────────────┼────────────┼────────────┤
[Info]         │ ui-bridge   │ 279.60 MB  │ 43.25 MB   │ 16 seconds │
[Info]         ├─────────────┼────────────┼────────────┼────────────┤
[Info]         │ ntrip-proxy │ 279.60 MB  │ 43.25 MB   │ 15 seconds │
[Info]         ├─────────────┼────────────┼────────────┼────────────┤
[Info]         │ eos         │ 40.01 MB   │ 0 bytes    │ 2 seconds  │
[Info]         └─────────────┴────────────┴────────────┴────────────┘

Note that 1af718dfe938 is the image ID that actually ended up being used by all 3 services on the device.

Is this the correct way to build and share a single image across multiple services? We tried omitting the build setting from two of the service definitions (so that they only had image: nautilus) per https://stackoverflow.com/questions/50019948/reuse-image-built-by-one-service-in-another-service, but that results in the following error:

[nautilus]     Successfully built 1af718dfe938
[nautilus]     Successfully tagged nautilus:latest
[ui-bridge]    [>                                                        ] 0%
[Info]         Uploading images
[Success]      Successfully uploaded images
[Error]        Some services failed to build:
[Error]          Service: ui-bridge
[Error]            Error: pull access denied for nautilus, repository does not exist or may require 'docker login'
[Error]          Service: ntrip-proxy
[Error]            Error: pull access denied for nautilus, repository does not exist or may require 'docker login'
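
For reference, the configuration we tried looked roughly like this, with ui-bridge and ntrip-proxy keeping only the image: line:

services:
  nautilus:
    build: ./nautilus
    image: nautilus
    restart: on-failure
    volumes:
      - 'resin-data:/data'
  ui-bridge:
    image: nautilus    # no build: - expected to reuse the image built above
    restart: always
    command: ["/home/point_one/bootstrap.sh", "./bin/ui-bridge"]
  ...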

It seems as though Docker is treating the image setting as a reference to a remotely hosted image and trying to pull it from the registry, rather than using the nautilus image it just built. Is there a different name under which the image is pushed to the registry that we should be referencing?

We’re less concerned about the build generating 3 copies of the same image at the moment, and more about the device storage filling up because the old images are not being deleted. For now, we were able to untag and delete the old images using balena rmi and balena system prune, but obviously this is not a viable long-term solution.
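
Concretely, the manual cleanup was along these lines - one balena rmi per stale repository:tag from the balena images output above, then a prune:

balena rmi registry2.balena-cloud.com/v2/a109cfea3a48bc31931e6cb3c8df43b9:delta-92b5a77553c23901
balena rmi registry2.balena-cloud.com/v2/e519ed74f1f8f35e418687297e32ac51:delta-a22c9e17d07fbe87
# ...and so on for each remaining old repo:tag...
balena system prune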

Thanks in advance,

Adam

Hi,

Your docker-compose.yml has both build and image parameters, which could be the problem. build: ./nautilus assumes you’re building the image from a Dockerfile in your ./nautilus directory (referring to the image you want in its FROM instruction). The image: nautilus parameter downloads a remote image. Deleting either the build: ./nautilus or the image: nautilus line should prevent the duplication. Can you try that and let us know?
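
For example, dropping the image: lines from the snippet you posted would leave something like this:

services:
  nautilus:
    build: ./nautilus
    restart: on-failure
    volumes:
      - 'resin-data:/data'
  ui-bridge:
    build: ./nautilus    # same build context, no image: line
    restart: always
    command: ["/home/point_one/bootstrap.sh", "./bin/ui-bridge"]
  ...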

John

Hi @jtonello,

Thanks for the reply. Removing image and leaving build: ./nautilus for the 3 services seems to work: I get 3 top-level images that share all of the same layers under the hood, judging by the balena inspect output. I had tried a number of different options with image set on some or all of the services, but either I somehow never tried dropping the image tags entirely or, more likely, I had something else wrong at the time.
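
In case it’s useful to anyone else, I compared the layer digests with something like the following, run for each of the 3 image IDs from balena images (assuming balena-engine mirrors Docker’s inspect fields):

balena inspect --format '{{json .RootFS.Layers}}' <image ID>

The layer lists come back identical for all 3.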

I also tried a few updated releases with minor code changes and confirmed that the /var/lib/docker/overlay2 directory is no longer growing without bound. Previously, each updated release caused it to grow by hundreds of MB, since it was leaving behind rogue copies of the shared image layers.
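
For anyone watching for the same problem, a quick check from the host OS shell before and after an update shows whether the directory is still accumulating layers:

du -sh /var/lib/docker/overlay2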

The docker-compose documentation says that setting image and build together tells it what name/tag to use for the generated image (whereas image alone tells it what remote image to download, as you mentioned). The main reason I had the image properties set was so the build would know it didn’t need to generate 3 copies of the same image - since the services shared the same image name, it would recognize it was building the same thing. That turns out not to be necessary, though: because Docker layers are hermetic, the 3 parallel builds of the same image end up with identical layer hashes, and the device only needs to download one copy. It’s a little inefficient on the build server side, but it does what we need in the end.

I was also hoping that naming the images would make debugging the image setup a little easier, since the balena images output would then indicate which image was used by which service. It looks as though the balena build doesn’t use the name for the final images, since it needs to generate unique values for the repository/tag fields - is that correct?

Cheers,

Adam

Hi Adam,

You can look at the images and containers in your application by running a couple of simple commands from the shell of the host OS: balena images will show details for each image (including repositories, tags, and IDs), and balena ps will show you which containers are using which images. Even though you’re pointing build: ./nautilus at the same directory in your docker-compose.yml, you’ll see from the output of these commands that each image name is unique.
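
If you want a compact mapping of container names to images, balena-engine should also accept the standard Docker format templates, e.g.:

balena ps --format 'table {{.Names}}\t{{.Image}}'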

John