We are currently running a multi-container project where some of the containers rely on avahi to resolve hosts on the outside.
We know that the supervisor runs an avahi-daemon instance as well, so we can reach the machine via <short-uuid>.local from the outside.
But since our containers also need to reach hosts like log-server.local, we currently run an avahi instance inside those containers as well.
Although this works, we effectively have multiple avahi-daemon instances running on a single machine. Is it possible for the containers to rely on the host OS' avahi-daemon, so that only a single avahi instance is running?
Simply sharing the host network with the containers does not seem to work.
I.e. running ping log-server.local from within such a container immediately returns Name or service not known, whereas the same container resolves the name fine when it runs its own avahi-daemon.
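For reference, the host-networking attempt looked roughly like this (a minimal sketch, the image name is just a placeholder):

version: "2.1"
services:
  other:
    image: "some_image"
    network_mode: host

Presumably this fails because .local names are resolved through the container's own resolver stack (nss-mdns / an avahi socket) rather than through the network namespace, so sharing the host network alone is not enough.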
Or is it not really an issue to have multiple avahi-daemons running on a single machine?
But that thread seems to have died after this suggestion, which basically amounts to what we are doing now: installing an avahi-daemon in each container that requires one.
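For completeness, our current per-container setup is roughly the following Dockerfile fragment (a sketch assuming a Debian-based base image; the exact packages and entrypoint differ per service):

# Install dbus and avahi-daemon so the container can run its own mDNS daemon,
# plus libnss-mdns so .local names resolve through glibc's NSS.
RUN apt-get update && \
    apt-get install -y --no-install-recommends dbus avahi-daemon libnss-mdns && \
    rm -rf /var/lib/apt/lists/*
# The entrypoint then has to start dbus-daemon and avahi-daemon before the service itself.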
We were able to partly solve our issue. At the moment our docker-compose looks like this:
version: "2.1"
volumes:
  avahi-socket-dir: {}
services:
  avahi:
    image: flungo/avahi
    network_mode: host
    volumes:
      - "avahi-socket-dir:/var/run/avahi-daemon"
    environment:
      # Separation of responsibilities: this daemon is query-only, the host's daemon publishes.
      PUBLISH_DISABLE_PUBLISHING: "yes"
  other:
    image: "some_image"
    network_mode: host
    volumes:
      - "avahi-socket-dir:/var/run/avahi-daemon"
So we basically start a single avahi container ourselves and share the socket as a volume between our services.
By disabling publishing in our avahi instance, we rely on the avahi instance of Balena OS to handle that part. This seems to work, and we are down to two avahi instances on a single machine instead of many.
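To sanity-check the shared-socket path from a service container, resolving through glibc's resolver is the relevant test, since (as far as we understand) libnss-mdns is what actually talks to the shared avahi socket. This assumes libnss-mdns is installed in the service image and mdns4_minimal is on the hosts: line of /etc/nsswitch.conf:

# Inside the "other" container:
getent hosts log-server.local   # resolves via NSS -> nss-mdns -> shared avahi socket
ping log-server.local           # should then work as well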
Ideally, though, we still believe this suggestion would be the most elegant solution…