We are currently running a multi-container Django-based application on a Raspberry Pi running balena. Everything is running nicely, but we now want to be able to add a valid SSL certificate to the NGINX host on the server.
I’ve got a valid SSL certificate working fine and DNS registered to a public endpoint without a problem. So I can go to device1.publicname.com and get to the device (as long as I’m on the same local network).
But the next bit is that these devices don’t always have internet access, although the Django app can still run without it. So I’d like the public DNS name to be locally resolvable for people who connect via the device’s local Wi-Fi hotspot.
Is it possible to add a hosts file entry or modify the dnsmasq config on the balena host, so that I can add a local hosts/dnsmasq entry that matches the public name?
Both the device name and the public hostname are provided through environment variables, so ideally the update needs to be dynamic.
I am not aware of a way of directly modifying the host OS dnsmasq configuration files, but you could run your own DNS server / resolver in an app container. I’ve found a documentation paragraph under the heading Using DNS resolvers in your container that anticipates this possibility, simply warning that such a DNS server/resolver should not be bound to 0.0.0.0.
If you had such a server, you might find that it was not really necessary for the host OS itself to be able to resolve your own domain names. But if you still needed or wanted the host OS to resolve your own domain names in the absence of a working internet connection, I can think of a potential solution. It is possible to configure the upstream DNS servers that the host OS dnsmasq forwards queries to:
By editing the dnsServers entry in config.json, as described in the meta-balena README (see the fragment just below the list for what this could look like).
Or through the DBUS interface described in dnsmasq DBus-interface and balena’s Changing the Network at Runtime (the latter focuses on NetworkManager, but the principles should apply to dnsmasq too – I can at least confirm that /etc/dnsmasq.conf enables the DBUS interface).
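For reference, the config.json route is a single key. A minimal, purely illustrative fragment, where 10.114.101.2 stands in for whatever address your in-container resolver ends up listening on, listed alongside a normal public upstream (the exact format and defaults are documented in the meta-balena README linked above):

```json
{
  "dnsServers": "10.114.101.2 8.8.8.8"
}
```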
I understand that dnsmasq's behaviour when multiple upstream DNS servers are configured is to send queries to all of them at once and use the reply that is received first. So the solution would be to add your own DNS server / resolver (running in an app container) as an upstream server for the host OS dnsmasq. When the internet connection was down, the host OS dnsmasq would query all available DNS servers, but only your local server would provide a valid or timely reply, which would then get used. Your own DNS resolver could either be configured with valid upstream servers (to resolve queries other than your own domain names), or configured to refuse queries for any domain names other than your own.
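To make that concrete, here is a rough sketch of what the in-container resolver's dnsmasq.conf could look like if you went the "refuse everything except your own names" route. All names and addresses are placeholders; you'd point the answer at whatever address the device uses on its hotspot/LAN:

```
# Sketch only: dnsmasq.conf for a resolver running inside an app container.
# Addresses and names below are placeholders.

# Don't read /etc/resolv.conf and don't define any upstream servers,
# so queries for names we don't know about are refused rather than forwarded.
no-resolv

# Listen only on the container's own address, not 0.0.0.0 (per the balena docs).
listen-address=10.114.101.2
bind-interfaces

# Answer queries for the public name with the device's local address.
address=/device1.publicname.com/192.168.42.1

# Alternatively, drop no-resolv and add server= lines here if you want this
# resolver to also forward other queries to real upstream DNS servers.
```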
Just tried it out (minus the really long delay) and that seems to have done exactly what I was after.
The setup has a couple of steps, but it works pretty well (there's a rough sketch of the moving parts after this list):
A Consul Docker container, which acts as the DNS server and can be dynamically updated via the HTTP API and curl (this does mean you have to have .service. between the host and the domain name, but that’s not so bad)
A certbot container, which performs a couple of tasks:
Runs the dnsmasq commands which update the dnsmasq config to forward our application's DNS domain to the local Consul DNS service
Then on a regular interval:
Updates the Consul service registration with the latest IP address of the device (we run the devices on DHCP networks without reserved leases, so the device IP can change)
Updates an AWS Route53 zone with the same IP (allows people to connect to the device endpoint when they are clients on the same DHCP network as our device); a rough sketch of that call is further below as well
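Roughly, the Consul side of the loop looks like this. This is a sketch rather than the exact scripts: the Consul address, port, interval and names are placeholders, and it assumes Consul is started with -domain set to our public domain, so names come out as device1.service.publicname.com:

```bash
#!/bin/sh
# Sketch of the certbot container's update loop; everything here is illustrative.

CONSUL_HTTP="http://consul:8500"                  # Consul agent HTTP API (app network alias)
DEVICE_NAME="${DEVICE_NAME:-device1}"             # both of these come from environment variables
PUBLIC_DOMAIN="${PUBLIC_DOMAIN:-publicname.com}"

# One-off step: point the host OS dnsmasq at Consul's DNS (port 8600) for our
# domain. However it is applied, it boils down to a dnsmasq line like:
#   server=/publicname.com/10.114.101.2#8600

while true; do
  # Work out the device's current address (DHCP, so it can change).
  DEVICE_IP="$(ip -4 route get 8.8.8.8 2>/dev/null \
    | awk '{for (i = 1; i < NF; i++) if ($i == "src") {print $(i + 1); exit}}')"

  if [ -n "$DEVICE_IP" ]; then
    # (Re-)register the service with Consul, so that
    # device1.service.publicname.com resolves to the current IP.
    curl -sS -X PUT "$CONSUL_HTTP/v1/agent/service/register" \
      -H "Content-Type: application/json" \
      -d "{\"Name\": \"$DEVICE_NAME\", \"Address\": \"$DEVICE_IP\"}"
  fi

  sleep 300   # repeat on a regular interval
done
```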
Happy to share the scripts if it helps anyone else
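In the meantime, the Route53 part is essentially a single UPSERT. A rough sketch, assuming the aws CLI and credentials are available in the container; ZONE_ID, RECORD_NAME and DEVICE_IP are placeholders:

```bash
#!/bin/sh
# Illustrative only: upsert an A record in Route53 with the device's current IP.
ZONE_ID="ZXXXXXXXXXXXXX"                 # hosted zone for publicname.com
RECORD_NAME="device1.publicname.com"
DEVICE_IP="192.168.1.50"

aws route53 change-resource-record-sets \
  --hosted-zone-id "$ZONE_ID" \
  --change-batch "{
    \"Changes\": [{
      \"Action\": \"UPSERT\",
      \"ResourceRecordSet\": {
        \"Name\": \"$RECORD_NAME\",
        \"Type\": \"A\",
        \"TTL\": 60,
        \"ResourceRecords\": [{\"Value\": \"$DEVICE_IP\"}]
      }
    }]
  }"
```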
Thanks a lot for sharing this! We always love seeing different use cases! Let us know if you get more experience running this; maybe we can learn something that can be added to our balenaOS toolkit as well!
@wwalker just want to say thank you - this is a very elegant solution to updating host DNS that came in handy for us. Our use case is that we have a container that establishes VPN connections for all containers, and as such needs to update the host DNS. Previously we were having each container look for a shared shell script to update their own DNS records when the VPN was established; your solution is a lot cleaner.