Hi,
I’m testing balena in order to migrate around 100 devices, but it seems it may not be suitable for our needs.
To summarize, we have:
- A central server, which synchronizes and stores the files and database from a WordPress site maintained and developed by our customer. This server is also a VPN server
- Intel NUCs in the customer’s stores, connected to touchscreens, which provide a kiosk application (the WordPress site) and are connected to the central server over the VPN
- On each NUC, a Docker stack with a WordPress and a MariaDB container
The workflow:
- When the site developer finishes an update, he triggers a script which rsyncs from his WordPress server to the central server
- Each night, all NUCs reboot. At boot, they rsync from the central server
- Once the rsync has finished, the Docker stack is launched and Chromium is started in kiosk mode (roughly the boot script sketched below)
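To give an idea, each NUC runs roughly this at boot today (simplified; paths, hostnames and the URL are illustrative):

```bash
#!/bin/bash
# Boot script on each NUC (simplified): sync first, then start the stack and the kiosk
set -e

# Pull the latest WordPress files and DB dump from the central server over the VPN
rsync -a --delete central:/data/wordpress/ /srv/wordpress/

# Start the WordPress + MariaDB stack
docker-compose -f /srv/docker-compose.yml up -d

# Launch the kiosk on the touchscreen
chromium --kiosk http://localhost &
```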
To make this work, there are a lot of scripts, Linux systems and pieces of software to maintain on the central server (Rundeck, WireGuard, a web UI for the dashboard, etc.). It’s messy… and provisioning is painful…
So I tried to work with balena and… why the hell doesn’t a “management platform” give you the possibility to interact with the host? If I could run a task that does the rsync on the host, it would be so easy, but no. With containers, I have to find a way to tell MariaDB “hey, stop, don’t start, there’s an rsync running in another container” before launching the DB.
When the NUC starts, all containers start:
- Browser block: shows a nice timeout page (yes, because WordPress is not started yet; it waits for the DB)
- Rsync starts, but so does the DB (depends_on only waits for a container to be started, so when the rsync one starts, the DB starts right after. I tried to add an entrypoint to the DB, but it makes it loop)
- When the DB is ready, the browser shows a “Database Error”, because the rsync is not finished and the database files have not been synced yet…
And another thing: rebooting a node should be something simple, in my opinion. If we can SSH to the host, we can send commands (and why not schedule them). Even a cron job which restarts the host is simpler than building a container just to interact with the dashboard API to send a reboot order to the host.
So, is balena for me? How can I deal with the rsync and the reboot?
Note: having just a kiosk pointing to an Internet website is not a solution; the customer wants the kiosks to keep working even if the Internet connection is down.
Thank you and have a nice day!
Hi, a comment about the way you are approaching the solution. Balena comes with some restrictions to provide more stability and security, which are especially useful on remote nodes.
The main aspect to consider is the read-only OS. Things like configuring a crontab or the network configuration have to be done via a container. I know it sounds painful, but the reason is that we want a stable OS that can start even when there are other problems, so we prevent configurations that could be harmful. In return, we make it as simple as we can to interact safely via containers.
In your case, you are finding that your application doesn’t start properly because you haven’t adapted your solution to how balena works. Here are some suggestions to start with:
- rsync/DB sync: as a quick start I’d use a token file in a shared volume. The rsync script would create this file and remove it when done. Modify the DB script to not start while the file is there (see the sketch after this list).
- reboot programmatically: instead of interacting with the backend API, you can interact with the Supervisor API with just an HTTP call. This way you can reboot locally without Internet access (there’s an example a little further down).
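A minimal sketch of the token-file idea, assuming both services share a volume mounted at /sync and the DB container uses a small wrapper entrypoint (names and paths are illustrative):

```bash
#!/bin/bash
# sync.sh -- runs in the rsync container
set -e
touch /sync/.sync-in-progress      # token: tells the DB container not to start yet
rsync -a --delete central:/data/wordpress/ /sync/wordpress/
rm -f /sync/.sync-in-progress      # sync finished, the DB may start
```

```bash
#!/bin/bash
# db-entrypoint.sh -- wrapper entrypoint for the MariaDB container
# Wait until the rsync container has removed the token file, then start normally.
while [ -f /sync/.sync-in-progress ]; do
  sleep 5
done
exec docker-entrypoint.sh "$@"     # hand over to the image's stock entrypoint
```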
Besides, another comment for your consideration, to get the most out of balena: I don’t have information on why you need to reboot the NUC, but you might want to restart only the containers themselves if that is enough, using the Supervisor API.
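For reference, both calls are plain HTTP to the Supervisor from a container that has the io.balena.features.supervisor-api label set; the address and API key are injected as environment variables. Roughly:

```bash
# Reboot the device via the Supervisor API -- local call, no Internet needed
curl -X POST "$BALENA_SUPERVISOR_ADDRESS/v1/reboot?apikey=$BALENA_SUPERVISOR_API_KEY" \
  -H "Content-Type: application/json"

# Or restart a single service instead of the whole device (the service name is an example)
curl -X POST \
  "$BALENA_SUPERVISOR_ADDRESS/v2/applications/$BALENA_APP_ID/restart-service?apikey=$BALENA_SUPERVISOR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"serviceName": "wordpress"}'
```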
We have several other users with the same use case as yours; they do use balena and, apparently, they are happy with it. If you’d like to have a chat to dig into our solution further, please contact us via the web form and I’ll get back to you.
Hi,
Thank you for responding.
Yes, I understood afterwards that I was thinking like a sysadmin and not like a balena user, which is radically different.
After reading a lot of forum posts and docs, I finally got it working with what you suggested:
- An rsync container (called “core”) with the connection information for the central server and the shared volume. It also has API access to talk to the Supervisor, so it starts the other containers (which have “restart: no” in the yml) and shuts itself down once the rsync is complete (roughly as sketched below).
- The browser block configured with DBUS access in order to talk to the host and schedule a nightly reboot via cron, which makes “core” restart automatically thanks to “restart: always” in the yml.
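In case it helps someone else, “core” does roughly this (simplified; service names and paths are examples — it needs the io.balena.features.supervisor-api label):

```bash
#!/bin/bash
# "core" container, simplified: sync first, then start the other services, then stop itself
set -e

rsync -a --delete central:/data/wordpress/ /sync/wordpress/

# Start the services that have "restart: no" in the yml
for svc in mariadb wordpress browser; do
  curl -X POST \
    "$BALENA_SUPERVISOR_ADDRESS/v2/applications/$BALENA_APP_ID/start-service?apikey=$BALENA_SUPERVISOR_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"serviceName\": \"$svc\"}"
done

# Stop this container; "restart: always" brings it back after the nightly reboot
curl -X POST \
  "$BALENA_SUPERVISOR_ADDRESS/v2/applications/$BALENA_APP_ID/stop-service?apikey=$BALENA_SUPERVISOR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"serviceName": "core"}'
```

And the nightly reboot from the browser block, over the host’s DBUS socket (the block needs the io.balena.features.dbus label; this assumes logind is available on the host):

```bash
# crontab entry in the browser container:
# 0 3 * * * /usr/src/app/reboot-host.sh

# reboot-host.sh -- asks the host to reboot through the mounted system DBUS socket
export DBUS_SYSTEM_BUS_ADDRESS=unix:path=/host/run/dbus/system_bus_socket
dbus-send --system --print-reply \
  --dest=org.freedesktop.login1 /org/freedesktop/login1 \
  org.freedesktop.login1.Manager.Reboot boolean:true
```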
So now it’s working well. It can be difficult to approach this kind of ecosystem because it doesn’t work like a “classic” one, and it doesn’t really work like pure Docker either.
I hadn’t seen in the docs that the objective was to have a read-only system. Now I understand, even if, as you say, “it sounds painful”.
Now, there is one thing I don’t understand: it seems that rsync always synchronizes ALL files, not just the modified ones. This means a much longer start-up time than with the previous solution.
Is there anything I’ve missed in the behavior of rsync via containers on a shared volume?
Hey, happy to know you made it work. I hope you see the benefits of how we are doing things… we do believe it is the right way. Just FYI, we are working to simplify the learning curve and make things easier to grasp, especially the differences from classic Linux or Docker.
Regarding rsync, you are right, there shouldn’t be any change in behavior. Maybe check the flags that affect how timestamps are handled? Internally, a shared volume behaves the same as on any Linux distro with Docker, so that shouldn’t matter, but I’ve never dealt with this myself.
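Just a guess at what might be going on: if modification times aren’t preserved on the destination, rsync’s quick check (size + mtime) fails for every file and everything gets re-copied. Something along these lines keeps timestamps so unchanged files are skipped on the next run (paths are illustrative):

```bash
# -a implies -t (preserve modification times), so unchanged files are skipped next time;
# --itemize-changes shows exactly why each file is being transferred.
rsync -a --delete --itemize-changes central:/data/wordpress/ /sync/wordpress/

# If mtimes can't be trusted between the hosts, --checksum avoids re-transferring
# identical files (slower to scan, but far less data moved):
# rsync -a --delete --checksum central:/data/wordpress/ /sync/wordpress/
```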
Hi,
Yes, I think I have to deal with the “TZ” variable on all the containers that use the shared volume.
I can’t see any other reason why it takes less than one minute on bare-metal Linux and more than five minutes in a container.
Thank you