Changing build context forces a single Dockerfile

When developing balena applications, we usually maintain two separate Dockerfiles:

  • Dockerfile uses standard x86 images, for developing on our workstations
  • Dockerfile.template uses balenalib images, using variables for the hardware type

This works without issue for most use cases, but sometimes we have to override the default build context in order to access files outside of the service folder (most of the time, configuration shared between services).
The Compose file reference instructs us to use the build key with context and dockerfile in order to achieve this. The only issue is that by specifying the Dockerfile, we lose balena’s Dockerfile template feature.
Consider the following example:

```
$ tree .
.
|-- docker-compose.yml
|-- file.txt
`-- service1
    |-- Dockerfile
    `-- Dockerfile.template
```

```yaml
# docker-compose.yml
version: "2.0"
services:
  service1:
    build:
      context: .
      dockerfile: "./service1/Dockerfile.template"
```

```dockerfile
# service1/Dockerfile
FROM node:14
COPY file.txt .
```

```dockerfile
# service1/Dockerfile.template
FROM balenalib/%%BALENA_MACHINE_NAME%%-node:14-build
COPY file.txt .
```

```
$ docker-compose build
Building service1
Step 1/2 : FROM balenalib/%%BALENA_MACHINE_NAME%%-node:14-build
ERROR: Service 'service1' failed to build : invalid reference format: repository name must be lowercase
```

As you can see, our build fails on our workstation because we override the default Dockerfile. If we omit the .template part, then our production build uses the wrong Dockerfile.

Is there a solution to this issue? I checked the docs and forums but couldn’t find any.
Thanks,
Erwan

Hello @edorgeville thanks for your question.

Check our documentation on how to work with multiple Dockerfiles and define device types or architectures that enable one or the other at build time.

Let me know if that helps

Hi @mpous,
Unfortunately this documentation doesn’t take my use case into consideration. The “order of preference” is overridden entirely by the dockerfile instruction.
Thanks

Hi @edorgeville,

The error makes clear that docker-compose doesn’t understand the balena-specific template variables, in this case %%BALENA_MACHINE_NAME%%.
A way to get this to work is of course to maintain a separate Dockerfile with the template variables replaced, and point the docker-compose.yml to that one instead of the template whenever you want to do a local build.
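For instance, a sketch of that setup, using the layout from the original post; the docker-compose.local.yml file name and the Dockerfile.local file (the template with the variable already substituted) are assumptions for illustration, not balena conventions:

```yaml
# docker-compose.local.yml — hypothetical local-only variant of the compose file.
# Dockerfile.local is Dockerfile.template with %%BALENA_MACHINE_NAME%%
# replaced (e.g. by intel-nuc for an amd64 workstation).
version: "2.0"
services:
  service1:
    build:
      context: .
      dockerfile: "./service1/Dockerfile.local"
```

You would then build locally with docker-compose -f docker-compose.local.yml build, while balena’s builders keep using the unmodified docker-compose.yml with the template.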

Can you clarify whether you have a specific reason to use node:14 for your local builds? If not, then using the respective x86 balenalib image for builds that you do and run on your computer should work fine and make the build results differ less from the images that reach your devices. In this case for example, replacing %%BALENA_MACHINE_NAME%% with intel-nuc (which is what our builders do when building for a NUC) should generate a proper amd64 final image. Similarly, you can replace it with intel-edison, so that your build uses an i386 base image.
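To avoid doing that substitution by hand, a one-liner can generate the local Dockerfile from the template. This is just a sketch: the Dockerfile.local name is an assumption for illustration, and intel-nuc is the machine name assumed for an amd64 workstation:

```shell
# Substitute the balena template variable locally so docker-compose
# can build the service on an amd64 workstation.
# intel-nuc is the machine name balena's builders use for a NUC.
sed 's/%%BALENA_MACHINE_NAME%%/intel-nuc/g' \
  service1/Dockerfile.template > service1/Dockerfile.local
```

You would then point the dockerfile key in your compose file at service1/Dockerfile.local for local builds.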

An alternative that would also remove the need to maintain a separate Dockerfile just for your local x86/amd64 builds would be to use the balena-cli for your local builds as well and just point it to the appropriate architecture. For example, you could cd to the directory with your docker-compose.yml, remove the dockerfile: "./<FOLDER_NAME>/Dockerfile.template" entries, delete the extra Dockerfiles (keeping just the Dockerfile.templates) and then run:

balena build --deviceType intel-nuc --arch amd64

This will build an amd64 image which you should be able to run on your computer normally.

Let me also point you to our documentation page about the balena build command.

Let us know whether that covers your use case.

Kind regards,
Thodoris