Question about services sharing an image

I am using docker-compose to implement multiple services, and I’d like many of these services to share the same image. However, when I push this project (to a device in local mode), the image appears to be built multiple times, once per service. Here is a simplified example:

Dockerfile

FROM balenalib/jetson-agx-orin-devkit-alpine-python:latest-build

RUN install_packages \
  wget curl unzip

RUN pip3 install \
  pyzmq paho-mqtt

WORKDIR /usr/src/app
COPY test.c .
RUN gcc -o test test.c

docker-compose.yml

version: '2'

services:
  app1:
    image: app1
    build: .
    command: ["./test", "app1"]

  app2:
    image: app1
    build: .
    command: ["./test", "app2"]

test.c

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    printf("This is a test from %s\n", argv[1]);
    sleep(60);
    return 0;
}

When pushing this, I get the following build output:

[Debug]   Starting builds...
[Build]   [app2] Step 1/8 : FROM balenalib/jetson-agx-orin-devkit-alpine-python:latest-build
[Build]   [app2]  ---> eb2ee48241e5
[Build]   [app2] Step 2/8 : RUN install_packages   wget curl unzip
[Build]   [app1] Step 1/8 : FROM balenalib/jetson-agx-orin-devkit-alpine-python:latest-build
[Build]   [app1]  ---> eb2ee48241e5
[Build]   [app1] Step 2/8 : RUN install_packages   wget curl unzip
[Build]   [app2]  ---> Running in a0a1efd47355
[Build]   [app1]  ---> Running in 44f0adb5bbe1
[Build]   [app2] fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/main/aarch64/APKINDEX.tar.gz
[Build]   [app1] fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/main/aarch64/APKINDEX.tar.gz
[Build]   [app1] fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/community/aarch64/APKINDEX.tar.gz
[Build]   [app2] fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/community/aarch64/APKINDEX.tar.gz
[Build]   [app1] (1/1) Installing unzip (6.0-r9)
[Build]   [app2] (1/1) Installing unzip (6.0-r9)
[Build]   [app2] Executing busybox-1.35.0-r17.trigger
[Build]   [app1] Executing busybox-1.35.0-r17.trigger
[Build]   [app2] OK: 552 MiB in 204 packages
[Build]   [app1] OK: 552 MiB in 204 packages
[Build]   [app1] Removing intermediate container 44f0adb5bbe1
[Build]   [app1]  ---> f994055b39b7
[Build]   [app1] Step 3/8 : RUN pip3 install   pyzmq paho-mqtt
[Build]   [app2] Removing intermediate container a0a1efd47355
[Build]   [app2]  ---> 770472f43fff
[Build]   [app2] Step 3/8 : RUN pip3 install   pyzmq paho-mqtt
[Build]   [app1]  ---> Running in 6a23b3ed6f00
[Build]   [app2]  ---> Running in d03e73d7b4d4
[Build]   [app1] Collecting pyzmq
[Build]   [app2] Collecting pyzmq
[Build]   [app1]   Downloading pyzmq-26.2.1-cp310-cp310-musllinux_1_1_aarch64.whl (1.2 MB)
[Build]   [app2]   Downloading pyzmq-26.2.1-cp310-cp310-musllinux_1_1_aarch64.whl (1.2 MB)
[Build]   [app1]      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 5.1 MB/s eta 0:00:00
[Build]   [app2]      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 5.7 MB/s eta 0:00:00
[Build]   [app1] Collecting paho-mqtt
[Build]   [app2] Collecting paho-mqtt
[Build]   [app1]   Downloading paho_mqtt-2.1.0-py3-none-any.whl (67 kB)
[Build]   [app2]   Downloading paho_mqtt-2.1.0-py3-none-any.whl (67 kB)
[Build]   [app1]      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 67.2/67.2 kB 3.5 MB/s eta 0:00:00
[Build]   [app2]      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 67.2/67.2 kB 4.3 MB/s eta 0:00:00
[Build]   [app1] Installing collected packages: pyzmq, paho-mqtt
[Build]   [app2] Installing collected packages: pyzmq, paho-mqtt
[Build]   [app1] Successfully installed paho-mqtt-2.1.0 pyzmq-26.2.1
[Build]   [app1] WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[Build]   
[Build]   [app2] Successfully installed paho-mqtt-2.1.0 pyzmq-26.2.1
[Build]   [app2] WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[Build]   
[Build]   [app1] 
[Build]   [notice] A new release of pip available: 22.2.2 -> 25.0.1
[Build]   [notice] To update, run: pip install --upgrade pip
[Build]   
[Build]   [app2] 
[Build]   [notice] A new release of pip available: 22.2.2 -> 25.0.1
[Build]   [notice] To update, run: pip install --upgrade pip
[Build]   
[Build]   [app1] Removing intermediate container 6a23b3ed6f00
[Build]   [app1]  ---> bc7ea8484d20
[Build]   [app1] Step 4/8 : WORKDIR /usr/src/app
[Build]   [app2] Removing intermediate container d03e73d7b4d4
[Build]   [app2]  ---> ac3cb7757086
[Build]   [app2] Step 4/8 : WORKDIR /usr/src/app
[Build]   [app1]  ---> Running in 841fa7ddf2c2
[Build]   [app2]  ---> Running in 32469d11163b
[Build]   [app1] Removing intermediate container 841fa7ddf2c2
[Build]   [app1]  ---> 34518a740b37
[Build]   [app1] Step 5/8 : COPY test.c .
[Build]   [app2] Removing intermediate container 32469d11163b
[Build]   [app2]  ---> a21f527c790a
[Build]   [app2] Step 5/8 : COPY test.c .
[Build]   [app1]  ---> f180be9139cc
[Build]   [app1] Step 6/8 : RUN gcc -o test test.c
[Build]   [app2]  ---> b45b8128c5d5
[Build]   [app2] Step 6/8 : RUN gcc -o test test.c
[Build]   [app2]  ---> Running in a014ce4ee0fc
[Build]   [app1]  ---> Running in 5740dcb3cc93
[Build]   [app2] Removing intermediate container a014ce4ee0fc
[Build]   [app2]  ---> 1df8c7dc2abb
[Build]   [app2] Step 7/8 : LABEL io.resin.local.image=1
[Build]   [app2]  ---> Running in 4a401d4fa530
[Build]   [app1] Removing intermediate container 5740dcb3cc93
[Build]   [app1]  ---> 0b09238c495f
[Build]   [app1] Step 7/8 : LABEL io.resin.local.image=1
[Build]   [app2] Removing intermediate container 4a401d4fa530
[Build]   [app2]  ---> 42c2f443dcd7
[Build]   [app2] Step 8/8 : LABEL io.resin.local.service=app2
[Build]   [app1]  ---> Running in 2afcd9ec4ff7
[Build]   [app2]  ---> Running in 99e09b7c97d5
[Build]   [app1] Removing intermediate container 2afcd9ec4ff7
[Build]   [app1]  ---> cbe80031eec9
[Build]   [app1] Step 8/8 : LABEL io.resin.local.service=app1
[Build]   [app2] Removing intermediate container 99e09b7c97d5
[Build]   [app2]  ---> fe46a8450448
[Build]   [app2] Successfully built fe46a8450448
[Build]   [app1]  ---> Running in 914087384c4b
[Build]   [app2] Successfully tagged app1:latest
[Build]   [app1] Removing intermediate container 914087384c4b
[Build]   [app1]  ---> 7899a7fe4879
[Build]   [app1] Successfully built 7899a7fe4879
[Build]   [app1] Successfully tagged app1:latest

As can be seen, the image is built twice. Note that the two parallel builds do not even share a layer cache: the intermediate layer IDs differ at every step (e.g. f994055b39b7 vs 770472f43fff after Step 2), so the work is fully duplicated.

In our real application, the Dockerfile downloads huge tarballs from the Internet and builds very large C++ codebases, and we have 6+ services using this image, so duplicating the build is very expensive.

Is there a way for the image to be built only once, and for that one image to then be used by multiple services?

An answer to a previous question (Balena multiple services and docker "layer deduplication"?) suggests:

You can also achieve the same when building from a local dockerfile by using the build and image properties together.

and describes a configuration much like mine, using the build and image properties together, but the image is still built twice. For reference, the vanilla docker-compose pattern for building once is sketched below.
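With vanilla docker-compose (outside balena), the usual way to build an image once and share it is to declare build: on only one service and have the others reference the resulting image by name. This is just a sketch of that pattern; I don’t know whether the balena builder honors it:

version: '2'

services:
  app1:
    image: app1                  # built and tagged once here
    build: .
    command: ["./test", "app1"]

  app2:
    image: app1                  # no build key: reuses the image built by app1
    depends_on:
      - app1
    command: ["./test", "app2"]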

Hi @pmacfarlanePTL,

Thanks for reaching out about this.

Is there a way for the image to be built only once, and for that one image to then be used by multiple services?

I’m not sure if you’re familiar with them, but you might consider creating a Block in this case. Basically, a Block allows your common image to be built once, as part of one project, and then used by both of your services in the compose file as part of another project; a rough sketch follows. That sounds like what you’re looking for, but let me know if I’ve misunderstood.
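To sketch roughly what consuming a Block might look like (the registry path and block name below are hypothetical; the exact reference depends on where you publish the block):

version: '2'

services:
  app1:
    # the shared image is built once, as its own block project,
    # and both services simply pull it from balenaHub
    image: bh.cr/myorg/my-shared-block
    command: ["./test", "app1"]

  app2:
    image: bh.cr/myorg/my-shared-block
    command: ["./test", "app2"]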

I am using docker-compose to implement multiple services, and I’d like many of these services to share the same image. However, when I push this project (to a device in local mode), the image appears to be built multiple times, once per service.

To clarify how things work today: local push (and cloud push, for that matter) does not de-duplicate builds that point to the same path. So the behavior you’re seeing is expected: both images are built because the builder does not know they are the same.

But as I say, I think a Block could be a good way to achieve the behavior you prefer.

Let us know if that helps you out!