I’ve looked at the NFS patterns in VolkovLabs/balena-nfs, and while that’s a great pattern for my own containers, I’m having trouble abstracting it to 3rd party containers.
Normally, I could mount NFS on the docker host and provide those as volumes to the docker containers. I’m using a variety of 3rd party containers and that just works.
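On a standard Docker host that pattern is just a host-path bind mount in the compose file; the paths and image here are placeholders, not my actual stack:

```yaml
# Standard Docker host pattern: the host has already NFS-mounted
# /mnt/nfs/share, and the container bind-mounts that path unchanged.
services:
  web:
    image: nginx:alpine
    volumes:
      - /mnt/nfs/share:/usr/share/nginx/html
```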
I’ve looked at doing this on BalenaOS but it doesn’t have NFS support in the kernel.
I tried out the balena-nfs pattern using the 3rd party containers as a starting point, but there are different base OSes involved, so I’ve got a mess of different package managers to sort out in my sidecar container, and I’m interfering with the orchestration layer the 3rd party containers provide. It’s a lot.
So, unless I’m missing something, it seems like the right step forward is to build my own BalenaOS kernel.
@Mikhail, @Laird, I understand the friction, and I’ve been giving some thought to how best to overcome the need for the NFS mounts in the first place. It is more difficult than it may seem, as any solution entails quite a significant change to the ways of working. The most likely option is a container label that allows a bind mount to an empty folder; from that bind mount we should be able to pass mounts between containers. Adding labels isn’t something we take lightly, though: once they are added, it is hard for us to change direction later, and mount naming is also a challenge. I also haven’t yet tested this theoretical option, which I’d want to do before bringing in all the other devs for a brainstorm around it.
If you have ideas on how you feel this functionality could best be added, based on your experience and use cases, I am all for hearing them. And if you are interested in testing it out, here is a PR with a bind mount label idea: Add bind-mount label by maggie0002 · Pull Request #1993 · balena-os/balena-supervisor · GitHub. I haven’t tested it myself yet, but I think you would need to include the :shared addition to your mounted volume in the docker-compose file.
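To picture what I mean, the :shared flag would (untested sketch, service and volume names are just placeholders) be appended to the volume entry like so:

```yaml
# ':shared' requests shared mount propagation on the volume entry
# (untested; names and paths are illustrative only).
services:
  nfs-client:
    volumes:
      - nfs-data:/mnt/nfs:shared
```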
I must emphasise, though: there is no time frame on this, nor any guarantee that the bind mount label solution will in fact be implemented. But anyone who has seen me around the forum knows that I am all about hearing from those with the problem, and I would love more people in the conversation.
As always, a shoutout to @Mikhail for the solution and for putting the problem on our radar.
@maggie0002, I like the idea of using the bind mount. The only issue I see so far is that users can overflow resinData with their files and break the device functionality. Using a separate volume prevents this.
We recently started using the generic_x86_64 image, which partitions the whole drive for system volumes; the rest goes to resinData. That would work great for the bind mount. Unfortunately, there is no way to limit disk space during installation, so we had to repartition the resinData volume to use NFS. Given that, using a 2nd drive is the preferred way to provide NFS storage, and it has better performance overall.
There are pros and cons to using NFS. We also recently switched to async mode, which is controlled by an environment variable: it gives a performance boost but can lead to data loss. We accept the risk, and will use sync for specific use cases. NFSv4 support is another great feature that was added recently. Overall, after 2 months, we are happy with the solution.
As an additional benefit, it’s possible to mount NFS storage from another device on the same network when building a cluster, and we are exploring this option now. We will share our clustering ideas in future articles.
We recently produced a YouTube video to explain the NFS solution to the community.
@mpous - Yes, I tried out the NFS client pattern shown in the VolkovLabs/balena-nfs repo, and it’s a reasonable if heavy pattern for containers you create yourself, where you have control of the base OS. It’s very awkward for 3rd party containers based on different package managers. For example, if I take the alpine-nginx container and want to add NFS, I need to use the Alpine package manager. Multiply that by the variety of containers in the wild and this becomes pretty messy.
@maggie0002 - Labels and bind mounts are very interesting. I’m not 100% sure it’s a match for my needs.
Overall, I assumed that, being new to BalenaOS, I was simply not understanding why volume mounts like this were not working:
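Roughly this shape of NFS-backed named volume in docker-compose.yml (the server address and export path below are placeholders, not my real config):

```yaml
# NFS volume via the local driver's nfs type; addr and device are
# placeholder values for illustration.
volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw"
      device: ":/exports/data"
```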
I figured if that isn’t supported yet, I could just NFS mount on the host and bind the containers to those local mount points.
When that didn’t work I tried out the VolkovLabs/balena-nfs patterns and while those are cool, they are really heavy and provide a lot of features I don’t need.
So, I was really hoping for someone to say “you are doing it wrong, you need to use this flag or kernel” and I could create nfs mount volumes in my docker-compose.yml and be off to the races.
You can avoid using environment parameters and update entrypoint scripts as you wish.
For Alpine, to install the NFS client: `RUN apk add nfs-utils`.
For Ubuntu: `RUN apt-get update && apt-get install -y nfs-common` (no sudo needed; Dockerfile RUN steps already execute as root).
@Mikhail - I have Alpine, Ubuntu, and CentOS containers in my stack, so it will be interesting to construct a one-liner to detect the distro and exec the right bits. I appreciate your project for sharing this pattern.
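A rough sketch of that detection, mapping the distro ID from /etc/os-release to an install command. The Alpine and Ubuntu commands follow the ones above; the CentOS line is my assumption:

```shell
#!/bin/sh
# Map a distro ID (the ID field of /etc/os-release, present on Alpine,
# Ubuntu, and CentOS images) to the matching NFS client install command.
nfs_install_cmd() {
  case "$1" in
    alpine)             echo "apk add nfs-utils" ;;
    ubuntu|debian)      echo "apt-get update && apt-get install -y nfs-common" ;;
    centos|rhel|fedora) echo "yum install -y nfs-utils" ;;  # my guess
    *)                  echo "unknown distro: $1" >&2; return 1 ;;
  esac
}

# In a Dockerfile this could become (untested):
#   RUN . /etc/os-release && eval "$(nfs_install_cmd "$ID")"
```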
I haven’t experimented with the volume driver options, but I suspect they may be ignored, as they are unsupported at the docker-compose processing level: docker-compose.yml fields - Balena Documentation. The link you pointed to refers to balena Engine, which may support the drivers, but the compose limits could be stripping out the driver configuration before it reaches the engine.
You are right that the labels option isn’t going to be much help right now; apologies for not being more helpful. Hopefully, in the long run, the labels would allow the equivalent of bind mounting to the host. One workflow could be a single container that just does the NFS mounting (in line with the good Docker practice of one process/service per container), with the other containers accessing the mount through it. Then you only need one OS type (Alpine, for example) for the NFS piece, and the containers around it can be anything you like.
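Very roughly, that sidecar idea might look like this in a compose file. To be clear, this is only a sketch of the concept: the service names and paths are invented, the `shared` propagation flag is an assumption, and nothing here is a released balena feature.

```yaml
# Conceptual sketch only: one Alpine-based container owns the NFS mount
# and re-exposes it; the other services consume it unmodified.
services:
  nfs-mounter:
    build: ./nfs-mounter        # Alpine + nfs-utils, performs the mount
    privileged: true            # mount() needs elevated privileges
    volumes:
      - nfs-share:/mnt/nfs:shared
  app:
    image: nginx:alpine         # any 3rd party image, unchanged
    volumes:
      - nfs-share:/mnt/nfs
volumes:
  nfs-share: {}
```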
@Mikhail is certainly the best one to ask about easing the NFS installs across containers; I really appreciate the shell examples.
I’m going to dig a little deeper to check my theory on the incompatibility and then come back to you, and we can look to raise it as a feature request if it is in fact unsupported rather than a bug.
@Laird I’m being told that the driver_opts function should be in there already, and it is the docs that are a little out of date.
Have you tried doing the mount directly on the host rather than through the cloud? You could try creating a volume and running a container with an attached volume with the same settings from the command line of the host and see if it returns any error messages or functions from within the container:
Something like:
```shell
balena volume create --driver local \
  --opt type=nfs \
  --opt device=':/test' \
  --opt o='addr=172.16.10.51' \
  test
```