Balena container not starting after modifying a volume

Hi,

I changed a volume definition in my docker-compose file from:

volumes:
  nimble_cache:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
      o: "size=20g"

To:

volumes:
  nimble_cache:

And now my service won’t start anymore… If I inspect the volume, it is the same as before the change:

root@40623109217a:~# balena volume inspect 1508530_nimble_cache
[
    {
        "CreatedAt": "2020-02-06T14:31:40Z",
        "Driver": "local",
        "Labels": {
            "io.balena.supervised": "true"
        },
        "Mountpoint": "/var/lib/docker/volumes/1508530_nimble_cache/_data",
        "Name": "1508530_nimble_cache",
        "Options": {
            "device": "tmpfs",
            "o": "size=20g",
            "type": "tmpfs"
        },
        "Scope": "local"
    }
]

What’s the proper way to fix this without losing data? Deleting the volume would probably fix it, but not losing the data stored there is important…

Is there a way to make deployments work even if as a result the data is lost?

Thanks in advance

Hi,

Unfortunately, tmpfs volumes lose their content when the container is stopped. See: https://docs.docker.com/storage/tmpfs/. The volume is still there, but it is empty.
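For reference, the two definitions from the original post behave quite differently. The tmpfs-backed variant lives in RAM, so it honors the size cap but is emptied whenever the container using it stops:

```yaml
# RAM-backed: honors the 20 GB cap, but emptied whenever the container stops.
volumes:
  nimble_cache:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
      o: "size=20g"
```

While a plain named volume is disk-backed and persists across restarts:

```yaml
# Disk-backed: persists across container restarts, but has no size cap.
volumes:
  nimble_cache:
```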

Hi @Ereski,

I know it’s a broad question, but is there a way to limit volume size in Balena?

Regards,

I am fairly sure there isn’t, @eeb, but I’m asking internally to make sure and will update you when I have a more definitive answer.

Thanks @Ereski. The question comes from the fact that when we ran out of space on a balena device, it started to malfunction… To be more precise, the resin-supervisor container would not start until we deleted all the data from one of the volumes. We are planning to use that volume for video recordings and we need to use as much space as we can.

Regards,

@eeb, what balenaOS and supervisor versions are you running?

Hi @Ereski,

We are using supervisor version 10.6.27 and balenaOS 2.46.0+rev1.

Regards,

Unfortunately, that is not supported at all. Under the hood, volumes use bind mounts, which do not support quotas.

My suggestion would be to embed some kind of space manager in your application so that it makes sure not to fill the whole card.
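As an illustration of what such a space manager could look like for the video-recording use case, here is a minimal sketch: delete the oldest files under a directory until the filesystem holding it drops below a usage threshold. The directory path and threshold are illustrative assumptions, not part of any balena API, and the `find -printf` flag assumes GNU findutils.

```shell
#!/bin/sh
# Hypothetical space-manager sketch: prune the oldest recordings when the
# filesystem holding them gets too full. Names and paths are illustrative.

prune_oldest() {
  dir=$1       # directory holding the recordings
  threshold=$2 # maximum filesystem usage, as an integer percent

  usage=$(df -P "$dir" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
  while [ "$usage" -ge "$threshold" ]; do
    # Oldest regular file by modification time (requires GNU find).
    oldest=$(find "$dir" -type f -printf '%T@ %p\n' | sort -n \
               | head -n 1 | cut -d' ' -f2-)
    [ -z "$oldest" ] && break  # nothing left to delete
    rm -f -- "$oldest"
    usage=$(df -P "$dir" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
  done
}

# On a device this could run periodically from the app container, e.g.:
# prune_oldest /data/recordings 90
```

Running it from a cron job or a simple loop inside the application container would keep the card from filling up completely.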

Hi @Ereski,

Thanks for letting us know! Is there a reason why the resin-supervisor container will not start when the device runs out of space? Is there a disk usage threshold below which the resin-supervisor will not start?

I am not familiar with the details of the supervisor, so I cannot give you a definitive answer. The supervisor also runs as a container, and the engine definitely needs some extra space while running.

We don’t currently have a recommended minimum amount of free disk space, but normal operation shouldn’t need much.

Thanks @Ereski!

No problem @eeb. Give us an update if you need anything else.

Hi @Ereski

I know I’m replying to an old post, but I have seen the same behavior as described above: the supervisor container not starting because storage filled up. This was with more recent versions of the host OS and supervisor.
Is there any way of preventing this problem yet? Or is your advice still to monitor disk usage myself and preserve space that way?

Kind regards


Hi @robeg,

Yes, at the moment we recommend monitoring system resources yourself. Currently we don’t have any meaningful actions we can take automatically that would guarantee a reliable system state if we detect filled storage. I realize it’s probably not the answer you wanted to hear, but we’re open to suggestions if you have any.
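For anyone landing here, the monitoring can be as small as a periodic `df` check run from the application container. A minimal sketch (the path and threshold are illustrative, not balena-specific):

```shell
#!/bin/sh
# Minimal disk-usage check, a sketch of the self-monitoring suggested above.

check_usage() {
  path=$1      # mount point or directory to check
  threshold=$2 # warn at this usage percent

  usage=$(df -P "$path" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
  if [ "$usage" -ge "$threshold" ]; then
    echo "WARNING: $path is ${usage}% full (threshold ${threshold}%)"
    return 1
  fi
}

# Could be called from cron or a loop inside the app container, e.g.:
check_usage / 90 || echo "time to free some space"
```

The non-zero return status makes it easy to chain the check with a cleanup step or an alert.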