How to avoid adding config files to Docker images

Most Docker images use config files mounted from the host, which doesn’t work with the balena workflow when using a compose file.
I am trying to come up with a way to create files from an environment variable so that they can be used when starting a Docker container. This would allow using upstream images without keeping any config in the Docker image itself, making the images more universal and allowing new configs to be provided at runtime.

For example:
config_env = "some config text" - env var set in the balena GUI
start a docker container like docker run prom/prometheus --config.file=$(echo "$config_env" > /env_files/config_env && echo /env_files/config_env)

I am not sure if this would work, but I will test it shortly.

Do you think it would be possible to have this as part of balenaOS or the CLI? When you have an env var such as config_env, you could use it directly as a file at /env_files/config_env.


Hi @krasi-georgiev,

Your solution should work without issues, as the Supervisor will inject the environment variables you’ve defined into the service before it runs. Another potential way is to include them encoded as part of a docker-compose.yml file. We don’t currently have any plans to allow environment variables to be mounted as part of the file system, however. Another potential way to solve this is to retrieve configuration from a remote server on startup, which I believe some other customers do; see the rough sketch below.
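As a very rough sketch of that remote-retrieval idea (CONFIG_URL and the paths here are placeholders, not a real endpoint, and the binary path assumes the upstream Prometheus image):

#!/bin/sh
# Hypothetical start script: fetch the config from your own server, then
# start the service pointing at the fetched file.
curl -fsSL "$CONFIG_URL" -o /etc/prometheus/prometheus.yml
exec /bin/prometheus --config.file=/etc/prometheus/prometheus.yml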

Best regards,

Heds

Thanks, will try and report back.

The device won’t be connected to the internet at all times, so I can’t use the remote-server solution.

Unfortunately this didn’t work:
ValidationError: Bind mounts are not allowed

here is the compose file entry:

  prometheus:
    image: prom/prometheus
    ports:
      - 9090:9090
    volumes:
      - $(echo $PROMETHEUS_CONFIG > ./PROMETHEUS_CONFIG && echo ./PROMETHEUS_CONFIG):/etc/prometheus/prometheus.yml

could you suggest some other workaround?

One workaround for this would be to use the target Docker image as a base, and start the container with a script that first creates the config file from the PROMETHEUS_CONFIG env var before starting the service. For example, something like this could work (untested):
Dockerfile

FROM prom/prometheus:latest
WORKDIR /usr/src/app
COPY run-prom.sh ./
# The upstream image sets its own ENTRYPOINT (/bin/prometheus), so override
# the entrypoint rather than just CMD; make sure run-prom.sh is executable.
ENTRYPOINT [ "./run-prom.sh" ]

run-prom.sh

#!/bin/sh
mkdir -p /env_files
echo "$config_env" > /env_files/config_env
exec /bin/prometheus --config.file=/env_files/config_env

This means maintaining copies of lots of upstream images, which is what I want to avoid at all costs.
Maintaining these means updating versions, dealing with multi-arch images, etc.

Can you think of any other workaround? Or would you consider adding this feature to the balena CLI tool? Maybe just removing the ValidationError would allow the workaround I suggested.

I can’t think of any other workarounds in this instance, though this does seem to be the approach many people take to get around Prometheus not accepting environment variables as configuration.

It is not just Prometheus; I am using a few other images in this project and none of them allow using env variables as configuration. I think none of the upstream images I have used so far take their config from env variables (just some simple CLI args like username, password, etc.).

In the balena world I think this is a big limitation: it doesn’t allow reusing the same image for a different project, as the config needs to be baked into the image.

As an alternative, but very similar, solution you can run a little sidecar container that writes those environment variables out as files on a shared volume and then shares them with the appropriate services. Make sure you make the prometheus service dependent on the config sidecar, for example as in the sketch below.
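A minimal sketch of how this could look in a compose file (the service and volume names are made up for illustration; the $$ is standard compose escaping so the variable is resolved inside the container at runtime, not at compose time):

volumes:
  shared-config:

services:
  config-sidecar:
    # One-shot sidecar: writes the env var out as a file on the shared volume.
    image: alpine
    volumes:
      - 'shared-config:/etc/shared'
    command: sh -c 'echo "$$PROMETHEUS_CONFIG" > /etc/shared/prometheus.yml'

  prometheus:
    image: prom/prometheus
    depends_on:
      - config-sidecar
    volumes:
      - 'shared-config:/etc/shared'
    command: --config.file=/etc/shared/prometheus.yml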

Thanks, that is a good suggestion, I will try it. This would allow using upstream images without any changes.

Btw, what is the reason you don’t want to add this feature to balenaOS or the balena CLI?

Hi @krasi-georgiev,

One thing to note with the sidecar solution is that you’ll need to ensure the applications using the volumes configured by the sidecar keep testing for the file until it’s available, as a dependency on another service container only ensures that the container has started, not that a process within it has, for example, finished writing. A wait loop like the sketch below covers that.
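As a sketch, a wrapper script along these lines (the shared path is a placeholder, and the binary path assumes the upstream image) would poll for the file before starting the main process:

#!/bin/sh
# Wait for the sidecar to finish writing the config to the shared volume.
until [ -f /etc/shared/prometheus.yml ]; do
  sleep 1
done
exec /bin/prometheus --config.file=/etc/shared/prometheus.yml

If the image ships a shell (prom/prometheus is busybox-based, so it should, though that’s worth verifying for your version), this loop could even go in the service’s compose entrypoint, avoiding a custom image.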

Most of our customers use either hardcoded values in their docker-compose files or values set via the dashboard and CLI for use cases like this. However, you’re right that this doesn’t work well when there’s no guarantee of an internet connection. In general, we don’t like adding features to our docker-compose schema that don’t map to standard docker-compose features. However, I do wonder in this case if there’s another way we could do it for offline support.

I’m going to raise an issue internally to discuss this use case. We’ll let you know when we have more information on this discussion.

Best regards,

Heds

Hm, yeah, I will try different approaches with the sidecar and will report back if anything works.

Most of our customers use either hardcoded values in their docker-compose files or values set via the dashboard and CLI for use cases like this.

I didn’t quite understand what you mean by this. You can only set env variables there, not files.

In general, we don’t like adding features to our docker-compose schema that don’t map to standard docker-compose features.

I don’t think the official docker-compose tool disallows host bind mounts, so maybe just removing this restriction in the balena CLI would allow my original suggestion.

I’m going to raise an issue internally to discuss this use case. We’ll let you know when we have more information on this discussion.

thanks, greatly appreciated! :+1:

Hi again,

Most of our customers use either hardcoded values in their docker-compose files or values set via the dashboard and CLI for use cases like this.
I didn’t quite understand what you mean by this. You can only set env variables there, not files.

So you could, for example, base64-encode a config file into an environment variable, and on service startup decode the variable and write it to a named volume.

Whilst you’re absolutely right that stock docker-compose allows host bind mounts, we don’t support this functionality because it would be extremely easy for people to bind something vital into the container and put the system into a state we couldn’t recover from.

Hopefully we can come to a good solution for you when the issue’s discussed!

Best regards,

Heds

I think this would work for me as well. Could you give an example within a compose file?

The flow would be something like this. Imagine that your config file has these contents:

value1=one

value2=two

value3=three

(the contents can be anything for this example; blank lines were added just to show that they survive too).
Now, on your work machine, if you use Linux, for example, you can use the base64 tool to encode this file. Say the file is called some.cfg; you would go:

~> base64 some.cfg 
dmFsdWUxPW9uZQoKdmFsdWUyPXR3bwoKdmFsdWUzPXRocmVlCg==

meaning the string above is the base64-encoded version of the file.
Add this as an environment variable; let’s call it MY_SOME_CFG. In your docker-compose.yml you could add:

...
servicename:
  ...
  environment:
    - MY_SOME_CFG=dmFsdWUxPW9uZQoKdmFsdWUyPXR3bwoKdmFsdWUzPXRocmVlCg== # no quotes here, or they become part of the value
...

where the ... is just other fields.
Then in your start script, you can do:

echo "$MY_SOME_CFG" | base64 -d > /path/to/some.cfg

which would decode your environment variable and save it to the path set above.

How does this sound? Another advantage is that by setting alternate values for MY_SOME_CFG in the dashboard you can override the one set in the docker-compose.yml (the compose value is overridden by the fleet variable, which is in turn overridden by the device variable).

You can try out encoding using other tools, such as this website, if that’s more practical: https://www.base64encode.org/ (not endorsing the site, just thinking that it might be helpful if you don’t have a Linux development machine, and want to see how base64 encoding/decoding works)

This is more or less what I already tried, and the problem is: how would you mount /path/to/some.cfg inside a container (let’s say prometheus for this example)?
You need to write this file on the host, but mounting files from the host is disabled.

Many apologies, but I think maybe we’ve not explained the volume storage correctly. Whilst there is indeed no bind mounting from the host, you can define named storage volumes that are persistent and can be shared between services.

What we’re suggesting is:

  1. Store configuration as encoded environment variable ‘somewhere’ (docker-compose file, UI, etc)
  2. Decode the configuration file and store it in a persistent named storage volume. You can bind any path in the container to the named volume, for example in your docker-compose file:
volumes:
  storage-volume:
services:
  prometheus:
    image: ...
    volumes:
      - 'storage-volume:/path/to'
  3. Run echo "$MY_SOME_CFG" | base64 -d > /path/to/some.cfg in your entry script for your service (see the sketch below). This file now contains configuration that is persistent across reboots etc., is bound to the /path/to directory within the prometheus service, and can be used by any other service that mounts the storage-volume persistent volume.
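Tying the steps together, the entry script for the prometheus service could be something like this sketch (the binary path assumes the upstream prom/prometheus image):

#!/bin/sh
# Decode the config from the env var into the shared named volume, then start.
echo "$MY_SOME_CFG" | base64 -d > /path/to/some.cfg
exec /bin/prometheus --config.file=/path/to/some.cfg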

We’re not sure why you need to store any data on the host FS, as there shouldn’t be a requirement to do so for any services.

Does this make sense? Could you please explain a bit about your thinking for using host FS space?

Best regards,

Heds

Yes, I am aware of the storage volumes and I am using them to persist data, but I am missing how you would write this config file and use it without a custom Prometheus image with a custom entrypoint.

Using a sidecar service that converts all env variables to files and writes them to a shared volume is a good idea, but there is no guarantee that prometheus won’t start before these files are actually written. Anyway, I will try this suggestion and report back if I can make it work as expected.

Hi again,

Sorry, I understand now. Yes, you’d need to create a new Docker image based on the Prometheus image that uses the custom script.

Best regards,

Heds

Yeah, that is a tricky thing, regarding the timing of the updates. The previous suggestion used a pattern where a start script runs in a service, does some logic, and then starts your main process. I guess here you are using an upstream/official Prometheus container?

Maybe you can combine this sidecar (and shared volume) pattern with triggering a prometheus config reload through its API? I’ve found this blog post: https://www.robustperception.io/reloading-prometheus-configuration — so I guess you could call the /-/reload endpoint on the prometheus container?
If the networking is set up properly and container networking is used, the service name can be used, so the prometheus service could be triggered with curl -X POST http://prometheus:9090/-/reload (ports might need to be allowed as well, but it’s worth experimenting with). Something along these lines, sketched below.
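For example, the sidecar’s main loop could be something like this sketch (the shared path and interval are placeholders; it assumes an image with curl installed, and note that Prometheus 2.x only serves /-/reload when started with --web.enable-lifecycle):

#!/bin/sh
# Hypothetical sidecar loop: refresh the config file on the shared volume
# from the env var, then ask Prometheus to reload it over container networking.
while true; do
  echo "$PROMETHEUS_CONFIG" | base64 -d > /etc/shared/prometheus.yml
  curl -X POST http://prometheus:9090/-/reload || true
  sleep 60
done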

Does this make sense?