I’ve been asked to step in here to see if I can help a bit more. As I understand it, you’d essentially like some service discovery for balena devices within a single app, much like Kubernetes’ use of nginx as an ingress (but without actually running Kubernetes).
There are a few different scenarios that might occur here, and a really fundamental question is: are all the devices on the same network? This is important because, if they aren’t, there are extra ramifications for the public accessibility of devices, as well as security issues. I’m going to assume all devices are on the same network; if that isn’t the case, we can get into potential solutions for the other scenarios. Before we start, it’s important to note that currently, public URLs are insecure by default and support only HTTP (this is due to change in the future, however).
In this case, we’d have two applications:
- LoadBalancer - An application with a public URL, where all traffic for the service arrives; a local ingress
- Workers - The worker application, which contains all the devices which carry out the work depending on the request being processed
In this scenario, for the LoadBalancer application, I’d create a small app based on HAProxy (although obviously other load balancers exist), which also runs a local endpoint called ‘/register’. This endpoint would be used by every device in the Workers application to register with the LoadBalancer application and offer itself as a working backend.
For the LoadBalancer app:
- Create a Dockerfile that installs HAProxy
- Write a small listener service that exposes a ‘/register’ endpoint to the local network, which devices running the Worker application register with
- Broadcast the IP address of the LoadBalancer device across the local network, so that Worker devices know the address and endpoint to register with
- Have the listener update the HAProxy configuration whenever a Worker device registers, using the source IP address of the request as the backend address, and then reload HAProxy
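The steps above could be sketched roughly like this in Python, using only the standard library. Note that the config path, the broadcast port, the listener port 8080, and the reload command are all assumptions made to keep the example concrete, not part of any balena or HAProxy API:

```python
# Sketch of the LoadBalancer-side service: a /register listener that rewrites
# the HAProxy config, plus a UDP broadcast announcing this device's presence.
import socket
import subprocess
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

HAPROXY_CFG = "/etc/haproxy/haproxy.cfg"  # assumed config location
DISCOVERY_PORT = 9999                     # assumed broadcast port
backends = {}                             # registered workers: name -> ip

CFG_TEMPLATE = """\
# The frontend takes all the traffic from the public URL
frontend public
    bind *:80
    default_backend workers

backend workers
    balance roundrobin
{servers}
"""

def render_config(workers):
    """Build an HAProxy config with one server line per registered worker."""
    servers = "\n".join(
        f"    server {name} {ip} check port 80"
        for name, ip in sorted(workers.items())
    )
    return CFG_TEMPLATE.format(servers=servers)

def apply_config():
    """Write the config out and reload HAProxy to pick up the new backends."""
    with open(HAPROXY_CFG, "w") as f:
        f.write(render_config(backends))
    subprocess.run(["service", "haproxy", "reload"], check=False)

def broadcast_presence():
    """Announce this device on the local network every 10 seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(b"loadbalancer", ("255.255.255.255", DISCOVERY_PORT))
    sock.close()
    threading.Timer(10, broadcast_presence).start()

class RegisterHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/register":
            self.send_error(404)
            return
        # Use the source IP of the request as the backend address
        ip = self.client_address[0]
        if ip not in backends.values():
            backends[f"worker{len(backends) + 1}"] = ip
            apply_config()
        self.send_response(200)
        self.end_headers()

# To run on the LoadBalancer device:
# broadcast_presence()
# HTTPServer(("0.0.0.0", 8080), RegisterHandler).serve_forever()
```

Using the request’s source address means Workers don’t need to know their own IP, and re-registration from a known address is a no-op, so Workers can safely register on every boot.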
This would allow every Worker device to register itself as a backend with the LoadBalancer application and update the HAProxy config. You’d end up with a config looking something like:
# The frontend takes all the traffic from the public URL
frontend public
    bind *:80
    default_backend workers

# Each registered Worker device gets a server line here
backend workers
    balance roundrobin
    server worker1 192.168.1.10 check port 80
    server worker2 192.168.1.11 check port 80
The ‘check port’ directive here acts as a very basic healthcheck, making sure that a Worker device is actually up before we send traffic to it. You can obviously change the config depending on how you want to schedule Worker devices and what healthchecks you might need.
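The Worker side then just needs to listen for the LoadBalancer’s broadcast and hit the ‘/register’ endpoint. A minimal sketch, assuming the LoadBalancer broadcasts on UDP port 9999 and listens for registrations on port 8080 (both invented for illustration):

```python
# Sketch of the Worker-side registration: discover the LoadBalancer via its
# UDP broadcast, then POST to its /register endpoint.
import socket
import urllib.request

DISCOVERY_PORT = 9999  # must match the LoadBalancer's broadcast port (assumed)

def discover_loadbalancer(timeout=30.0):
    """Block until a broadcast arrives, then return the LoadBalancer's IP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", DISCOVERY_PORT))
    sock.settimeout(timeout)
    try:
        _, (ip, _port) = sock.recvfrom(1024)
    finally:
        sock.close()
    return ip

def register(lb_ip):
    """POST to the LoadBalancer's /register endpoint (port 8080 assumed)."""
    req = urllib.request.Request(
        f"http://{lb_ip}:8080/register", data=b"", method="POST"
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

# On each Worker device, at startup:
# register(discover_loadbalancer())
```

Because the LoadBalancer uses the request’s source IP, the Worker doesn’t need to send any payload; an empty POST is enough.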
This should be a pretty simple setup to get you going, although there may well be some code out there that does this kind of thing (I’ve not looked!), so it’s probably worth a search.
Obviously if your setup is different, get back to us and we can think about another way of approaching this!