NTP server IP configuration with Static IP unit

Hi,

I have been reading through several posts on the forum, but it is still not clear to me whether this is possible or what the best approach is for my case.

Our remote units come with a default code version flashed at the factory.
We typically modify network configurations through a web application that drives the nmcli command-line tool behind the scenes.
We currently need a static IP configuration and a custom NTP server IP configured.
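
For reference, the kind of nmcli invocation our web application issues for a static setup looks roughly like this (the connection name and addresses are placeholders):

    # sketch: set a static IPv4 configuration on an existing connection
    nmcli connection modify "Wired connection 1" \
      ipv4.method manual \
      ipv4.addresses 192.168.1.50/24 \
      ipv4.gateway 192.168.1.1 \
      ipv4.dns "8.8.8.8 8.8.4.4"
    nmcli connection up "Wired connection 1"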

What is the best approach to change the default NTP server IPs to our new NTP server IP?

From what I read, the best remote way would be to add a RESIN_NTP_SERVERS variable to the device configuration on the platform. This would change the "ntpServers" key in the config.json file used at boot.
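
If I understand the docs correctly, setting that variable for a single device would look something like this with the balena CLI (I'm assuming here that the RESIN_-prefixed name is recognized as a configuration variable):

    # sketch: set the NTP configuration variable on one device
    balena env add RESIN_NTP_SERVERS "time1.example.com time2.example.com" --device <device-uuid>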

Is the NTP configuration still not possible from an application container for now?

Thank you for the help

Hi,

If I understand correctly, you want to update the configuration on a remote device. I will assume here that the device has access to balenaCloud at configuration time.

As you want to set a static IP, I will assume that this procedure will be done manually and that you are looking for a configuration that persists across reboots. So I will propose an option involving SSH interaction with the host OS.

First of all, if you need to set a static IP for your device, you can do so by editing the connection file in system-connections on the resin-boot partition of your system (you can access this folder at /mnt/boot/ from a host OS terminal). The static IP configuration is explained in our networking documentation. On my test device this file is called balena-wifi-01. The host OS has a built-in vi editor if you want to edit the file.
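
For example, a minimal static Wi-Fi connection file could look like the following (the SSID, addresses and DNS servers are placeholders; see the networking documentation for the authoritative format):

    [connection]
    id=balena-wifi-01
    type=wifi

    [wifi]
    mode=infrastructure
    ssid=MY_SSID

    [ipv4]
    method=manual
    address1=192.168.1.127/24,192.168.1.1
    dns=8.8.8.8;8.8.4.4;

    [ipv6]
    addr-gen-mode=stable-privacy
    method=auto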

You can configure the NTP server by adding the ntpServers field to the config.json file on the same resin-boot partition (e.g. "ntpServers":"pool.your_ntp_domain.com"), as described in the configuration documentation.
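
In context, config.json would then contain something like this (the hostnames are placeholders, other fields are elided, and multiple servers go space-separated in a single string):

    {
      ...
      "ntpServers": "time1.example.com time2.example.com"
    }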

Please note that updating the config.json file will reboot your system.

Let us know if this solution works for you.

Thanks,
Aurélien

Hi @wolvi-lataniere,

Thank you for your answer.
I think I didn't explain myself clearly.

We already configure any kind of network setup using nmcli (static, DHCP, or other specifics).

My question is mainly about NTP. I only mentioned the static network case because I thought it might affect the NTP configuration in some way (I read something to that effect somewhere…).

So my main issue is: how can I remotely change the NTP servers of a unit that has already been running for days?

From what I know, your suggestion:

You can configure the NTP server by adding the ntpServers field to the config.json file on the same resin-boot partition (e.g. "ntpServers":"pool.your_ntp_domain.com" ) according to the configuration documentation.

can't be achieved remotely; you need direct access to the unit. Or am I wrong?
Any ideas?

Best regards
LS

Hello,

I'm not sure about already-deployed devices, but my approach for a similar use case (changing the NTP configuration of remote devices) is to rewrite the chronyd service drop-in file after the initial flash (found in the resin-rootA partition). Indeed, near the end you should find something along the lines of ExecStart=/usr/sbin/chronyd -d, but the chrony daemon actually has an -f option that lets you point to a configuration file other than the /etc/chrony.conf default.

You could make that configuration redirection point to a file in your container's persistent storage, e.g. ExecStart=/usr/sbin/chronyd -d -f /var/lib/docker/volumes/<app_id>_resin-data/_data/<my_custom_chrony_path.conf>, and then make modifications from your container by editing the /data/<my_custom_chrony_path.conf> file directly.
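
As a hedged sketch, the override could live in a systemd drop-in rather than be edited into the unit in place (the drop-in path and file name here are assumptions; the empty ExecStart= line is needed to clear the original command before replacing it):

    # /etc/systemd/system/chronyd.service.d/custom-conf.conf
    [Service]
    ExecStart=
    ExecStart=/usr/sbin/chronyd -d -f /var/lib/docker/volumes/<app_id>_resin-data/_data/<my_custom_chrony_path.conf>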

This is interesting because you can change not only the servers, but also other chrony options.

Hope this helps, it was a big headache of mine back in the day! :smiley:

Best regards,
Marc

Hi @Marc,

Thank you for the time you took to put this option in writing.
I will test your idea.

This is kind of an important thing for us: being able to change configurations (like NTP) remotely through our units' web interface.

Meanwhile if someone has more ideas, just shoot.

Thank you


Hi @Marc,

One quick question: how did you systematize this across machines?

I mean, the approach seems to work well, but for our solution I need a way to apply the -f option automatically at boot on the host OS (balenaOS) across all devices. I don't want to do this by hand every time.

Did you have to do something like this too? If so, what was your approach?

Best regards

Hi!

Thank you for your reply.
This makes sense although, as asked in the previous post:

a) After the first flash, do you manually add the config line
"ExecStart=/usr/sbin/chronyd -d -f /var/lib/docker/volumes/<app_id>_resin-data/_data/<my_custom_chrony_path.conf>"?

If so, in which init/startup file?
And do you manually look up the <app_id> to insert in that config line?

b) If not, how did you manage to automate that configuration?
Did you include that config line in the docker-compose.yml?
If so, how did you discover, beforehand, the <app_id> to include in the config line?

Sorry to ask so many things in one go…
I'm also investigating some of these questions.

Thanks

Hi all

Meanwhile I found the file that configures chronyd at start time.
In our version of balenaOS [balenaOS 2.105.1+rev1] the file is located in

/etc/systemd/system/chronyd.service.d

(maybe it was obvious but not for me…)

To be able to edit this file I had to do two steps first:
root@…:/etc# mount -o remount,rw / (remount the root filesystem read-write)
root@…:/etc# chmod o+w chronyd.conf (give write permissions to the file)

And then edited the ExecStart line in the file:

ExecStart=/usr/bin/healthdog --healthcheck=/usr/libexec/chrony-healthcheck /usr/sbin/chronyd -d -f /mnt/data/docker/volumes/1234567_sqlite3/_data/chrony.conf

(where 1234567_sqlite3 is the <id_VolumeName>)
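
For completeness, after editing a unit file the standard systemd steps to make the change take effect are:

    systemctl daemon-reload
    systemctl restart chronyd.service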

Now there are two questions remaining:

  1. Is there a way to include these changes to the chronyd.conf file while building the image, for instance via the docker-compose.yml?

  2. And if so, how can we know the id in <id_VolumeName> dynamically?

Thank you in advance
RP

Hi, just catching up with this thread, I’d like to make some comments.

  1. Modifying files in the hostOS is not supported. These changes will be lost on a hostOS update, and in the future we will very likely not allow the rootfs to be remounted as writable.
  2. The way to configure the NTP servers is by modifying config.json in the boot partition. This should be exposed via the supervisor API so that it can be set from the application or balenaCloud, but that work is still pending.
  3. At the moment, the recommended way is to log into the hostOS via SSH and perform the change, in a similar way to how configizer does it.
  4. An alternative that I like less is to mount the boot partition from the application and modify config.json directly.

Please note that writes to the boot partition are non-atomic, as it is a FAT filesystem. balenaOS uses a dedicated fatrw utility to make safe writes to the boot partition, but this is not yet used by the configizer tool I mentioned above.
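
For illustration, option 4 could look roughly like this from a privileged container (the partition label and the use of jq are assumptions, jq would need to be installed in your image, and the non-atomicity caveat above still applies):

    # sketch: read-modify-write config.json from the application
    mkdir -p /mnt/bootpart
    mount /dev/disk/by-label/resin-boot /mnt/bootpart   # label may differ by OS version
    jq '.ntpServers = "time1.example.com time2.example.com"' \
      /mnt/bootpart/config.json > /tmp/config.json
    cp /tmp/config.json /mnt/bootpart/config.json && sync
    umount /mnt/bootpart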

Hi @Marc, @alexgg

Thank you for the relevant suggestions and clarifications (also mentioned in the thread Running commands at boot or after boot on HOST OS - #4 by suporte)

Based on your input we implemented the following workflow:

  1. At build time we install chrony in the container (Dockerfile).
  2. On each reboot:
    2.1) stop the host OS chronyd from the container (via a D-Bus message; see the sketch below)
    2.2) launch chronyd -f <path_to_persistent_folder>/chrony.conf in the container
    2.3) any update to that file triggers a reboot, and the workflow restarts at step 2
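
For step 2.1, the D-Bus call is along these lines (this follows the usual balena pattern of reaching the host's systemd through the socket exposed by the io.balena.features.dbus label; the unit name chronyd.service matches what was found earlier in the thread):

    # requires the io.balena.features.dbus label in docker-compose.yml
    export DBUS_SYSTEM_BUS_ADDRESS=unix:path=/host/run/dbus/system_bus_socket
    dbus-send --system --print-reply \
      --dest=org.freedesktop.systemd1 \
      /org/freedesktop/systemd1 \
      org.freedesktop.systemd1.Manager.StopUnit \
      string:"chronyd.service" string:"replace"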

This seems to be working most of the time, although sometimes, after rebooting, the container has trouble running chronyd -f:

(container)# chronyc sources
506 Cannot talk to daemon

We are trying to understand this behavior.
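
A basic first check for this symptom could be the following (the socket path below is chrony's usual default and is an assumption here; it may differ on your build):

    # is a chronyd instance actually running in the container?
    pgrep -a chronyd || echo "chronyd is not running"
    # chronyc talks to the daemon through this Unix socket by default
    ls -l /run/chrony/chronyd.sock 2>/dev/null || echo "command socket missing"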

Hello

Following this discussion, we implemented the workflow mentioned above:
when the container starts, it stops the host's chronyd and starts its own chronyd.
During implementation and testing it was suggested to us (by a ChatGPT bot) to include the line

--cap-add=SYS_TIME

“to give the container the ability to modify the system clock which is required for chrony to function properly”

Our question:
Is that really necessary?
Even without including this suggestion, the system time seems to synchronize well.

Our app container is already configured with

privileged: true
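
Our understanding is that privileged: true already grants all capabilities, including CAP_SYS_TIME, which would explain why time sync works without the extra flag. In docker-compose terms the two alternatives would be roughly (service name is a placeholder):

    services:
      myapp:
        # option A: what we use today; implies CAP_SYS_TIME among all capabilities
        privileged: true
        # option B (narrower alternative): drop privileged and grant only
        # the clock-setting capability
        # cap_add:
        #   - SYS_TIME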

If you can shed some light on this topic, we would appreciate it.
RP