Hi,
We have an application that currently runs 5 services. 3 of the services share the same image and differ only in the command that they run (and some other docker-compose settings, like exposed ports). For example:
services:
  nautilus:
    build: ./nautilus
    image: nautilus
    restart: on-failure
    volumes:
      - 'resin-data:/data'
  ui-bridge:
    build: ./nautilus
    image: nautilus
    restart: always
    command: ["/home/point_one/bootstrap.sh", "./bin/ui-bridge"]
  ...
The generated release appears to be working correctly - all 3 services reference the same image ID, use the same overlay layers, etc. Every time we push an updated release, though, balena images lists the new images along with the old ones, which continue to take up space on the device. Doing a balena system prune does not delete the old ones, even though no active container in the balena ps output is listed as using them.
root@f9e118b:~# balena ps
CONTAINER ID   IMAGE                                COMMAND                  CREATED         STATUS                PORTS                    NAMES
33a02fa42faa   1af718dfe938                         "/home/point_one/boo…"   2 minutes ago   Up 2 minutes                                   nautilus_2206721_1356717
59cf46bc699e   1af718dfe938                         "/home/point_one/boo…"   2 minutes ago   Up 2 minutes          0.0.0.0:3000->3000/tcp   ui-bridge_2206723_1356717
095332506e69   1af718dfe938                         "/home/point_one/boo…"   2 minutes ago   Up 2 minutes          0.0.0.0:2101->2101/tcp   ntrip-proxy_2206724_1356717
42a001aad40f   8c31e34d5931                         "nginx -g 'daemon of…"   8 hours ago     Up 8 hours            0.0.0.0:80->80/tcp       eos_2206725_1356717
4e2b1f218e62   2f43b4e7323f                         "python3 -m nautilus…"   2 days ago      Up 2 days             0.0.0.0:8888->8888/tcp   nemo_2206722_1356717
2f67bbb694c1   balena/aarch64-supervisor:v10.6.27   "./entry.sh"             2 weeks ago     Up 2 days (healthy)                            resin_supervisor
root@f9e118b:~# balena images
REPOSITORY                                                       TAG                      IMAGE ID       CREATED         SIZE
registry2.balena-cloud.com/v2/836293a54e44b17fcb72c8f11a4f5398   delta-ee860d766b8acb43   1af718dfe938   4 minutes ago   293MB    <-- active
registry2.balena-cloud.com/v2/224600f68724cc74e016c667709f5b57   delta-2c83a33ae6c12532   1af718dfe938   4 minutes ago   293MB    <-- active
registry2.balena-cloud.com/v2/40a5bc7acf43924671fe4af8de2a81cb   delta-c252b8df76ac5d8f   1af718dfe938   4 minutes ago   293MB    <-- active
registry2.balena-cloud.com/v2/a109cfea3a48bc31931e6cb3c8df43b9   delta-92b5a77553c23901   edeb35fc107e   8 hours ago     289MB    <-- old
registry2.balena-cloud.com/v2/e519ed74f1f8f35e418687297e32ac51   delta-a22c9e17d07fbe87   edeb35fc107e   8 hours ago     289MB    <-- old
registry2.balena-cloud.com/v2/e757f8325951704ae3f6b8d5ef4353a7   delta-73703e7dd92dfdc0   edeb35fc107e   8 hours ago     289MB    <-- old
registry2.balena-cloud.com/v2/24c04cfb263798610985df8587d2ddf0   delta-d96ec52b49d9b0da   8c31e34d5931   8 hours ago     42MB
registry2.balena-cloud.com/v2/4d2bfca86f10de84c5b4629b1518d38c   delta-da073983d890e884   f72ebba5a1bf   2 days ago      289MB    <-- old
registry2.balena-cloud.com/v2/ab07d5ef3f122fa3bf0ed3b0aa8819ce   delta-34683cd98f61813d   f72ebba5a1bf   2 days ago      289MB    <-- old
registry2.balena-cloud.com/v2/fcb317257bd460a845dd6d507314331c   delta-b871d04a9383fd4c   f72ebba5a1bf   2 days ago      289MB    <-- old
registry2.balena-cloud.com/v2/1625d6ad0a8234ed705c42331d9058e2   <none>                   2f43b4e7323f   3 days ago      259MB
balena-healthcheck-image                                         latest                   a29f45ccde2a   3 months ago    9.14kB
balena/aarch64-supervisor                                        v10.6.27                 634e52c7fa89   4 months ago    67MB
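Presumably the old images survive the prune because they are still tagged (the delta-* tags above), and a plain prune only removes dangling images. We have not tried forcing it, which, assuming the on-device balena CLI mirrors Docker's system prune flags, would look something like this:

# remove all images not referenced by at least one container, not just
# dangling ones; -f/--force skips the confirmation prompt
# (untested on balenaOS, so this is a guess, not a recommendation)
balena system prune --all --force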
From the console output, it does look as though balena push is explicitly building all 3 copies of the same image, and it reports identical non-zero deltas for all 3, rather than a delta size of 0 bytes for 2 of the 3 (as it does for the unchanged nemo and eos services):
[nautilus] Step 1/9 : FROM pointonenav/atlas-nautilus-base:latest
[ntrip-proxy] Step 1/9 : FROM pointonenav/atlas-nautilus-base:latest
[ui-bridge] Step 1/9 : FROM pointonenav/atlas-nautilus-base:latest
...
[nautilus] Successfully built 5e483a5b0227
[nautilus] Successfully tagged nautilus:latest
[ntrip-proxy] Successfully built fced1867ba79
[ntrip-proxy] Successfully tagged nautilus:latest
[ui-bridge] Successfully built 1af718dfe938
[ui-bridge] Successfully tagged nautilus:latest
...
[Info] ┌─────────────┬────────────┬────────────┬────────────┐
[Info] │ Service │ Image Size │ Delta Size │ Build Time │
[Info] ├─────────────┼────────────┼────────────┼────────────┤
[Info] │ nautilus │ 279.60 MB │ 43.25 MB │ 15 seconds │
[Info] ├─────────────┼────────────┼────────────┼────────────┤
[Info] │ nemo │ 246.97 MB │ 0 bytes │ 2 seconds │
[Info] ├─────────────┼────────────┼────────────┼────────────┤
[Info] │ ui-bridge │ 279.60 MB │ 43.25 MB │ 16 seconds │
[Info] ├─────────────┼────────────┼────────────┼────────────┤
[Info] │ ntrip-proxy │ 279.60 MB │ 43.25 MB │ 15 seconds │
[Info] ├─────────────┼────────────┼────────────┼────────────┤
[Info] │ eos │ 40.01 MB │ 0 bytes │ 2 seconds │
[Info] └─────────────┴────────────┴────────────┴────────────┘
Note that 1af718dfe938 (the last image built and tagged nautilus:latest above) is the image ID that all 3 services actually ended up using on the device.
Is this the correct way to build and share a single image across multiple services? Per https://stackoverflow.com/questions/50019948/reuse-image-built-by-one-service-in-another-service, we tried omitting the build setting from two of the service definitions (leaving only image: nautilus), but that results in the following error:
[nautilus] Successfully built 1af718dfe938
[nautilus] Successfully tagged nautilus:latest
[ui-bridge] [> ] 0%
[Info] Uploading images
[Success] Successfully uploaded images
[Error] Some services failed to build:
[Error] Service: ui-bridge
[Error] Error: pull access denied for nautilus, repository does not exist or may require 'docker login'
[Error] Service: ntrip-proxy
[Error] Error: pull access denied for nautilus, repository does not exist or may require 'docker login'
It seems as though Docker is treating the image setting as a reference to a remotely hosted image and trying to pull it from the registry, rather than using the nautilus image it just built. Is there a different name under which the image gets pushed to the registry that we should be referencing instead?
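For reference, the failing variant looked roughly like this; only nautilus keeps its build key, and the other services reference the built image by name (the depends_on is the StackOverflow answer's suggestion for build ordering, and may or may not be relevant here):

services:
  nautilus:
    build: ./nautilus
    image: nautilus
    restart: on-failure
    volumes:
      - 'resin-data:/data'
  ui-bridge:
    # no build key: reuse the image built by the nautilus service
    image: nautilus
    depends_on:
      - nautilus
    restart: always
    command: ["/home/point_one/bootstrap.sh", "./bin/ui-bridge"]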
We’re less concerned about the build generating 3 copies of the same image at the moment, and more about the device storage filling up because the old images are not getting deleted. For now, we were able to untag and delete the old images using balena rmi and balena system prune, but obviously this is not a viable solution.
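Concretely, the stopgap was along these lines, repeating the rmi for each stale delta-* tag from the balena images output above (untagging leaves the old images dangling, and the prune then removes them):

root@f9e118b:~# balena rmi registry2.balena-cloud.com/v2/a109cfea3a48bc31931e6cb3c8df43b9:delta-92b5a77553c23901
...
root@f9e118b:~# balena system prune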
Thanks in advance,
Adam