Multicontainer: unable to resolve service names

From one container we are not able to resolve another one by its service name.
We could use port NAT in the docker-compose file and resolve everything on “localhost” with a different port per container, but do we have other options?

So far we have tried adding “hostname:” with the same name as the service name to each service, and have also tried putting all services in ‘bridge’ or ‘host’ mode, without this making any difference.

If it is not clear what we are trying to do: from service-b we would like to run the command “curl service-a/index.html”.

version: '2.1'
services:
  service-a:
    build: ./service-a
    network_mode: "host"
    hostname: service-a
    ports:
      - "80:80"
  service-b:
    build: ./service-b
    hostname: service-b
    network_mode: "bridge"
  service-c:
    build: ./service-c
    network_mode: "bridge" 
    hostname: service-c
    expose: 
      - "8080"

It doesn’t work that way. Have a look at network aliases: https://docs.docker.com/compose/compose-file/compose-file-v2/#aliases. What this basically does is expose the container on the specified network under the specified hostname, which you can then use to refer to it from inside other containers.
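
A minimal sketch of what that would look like (untested, with “main” being just an example network name; note that in a standard compose v2 file the network also has to be declared at the top level):

version: '2.1'
services:
  service-a:
    build: ./service-a
    networks:
      main:
        aliases:
          - service-a
networks:
  main:
    driver: bridge

Any other container attached to “main” can then reach this one as “service-a”.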

Ok, I might have it wrong, but I thought that was the default behaviour of docker-compose, and resolved by https://github.com/balena-io/balena-supervisor/pull/933/files

But if I understand you correctly, this does not work unless we use “networks:”?

version: '2.1'
services:
  service-a:
    build: ./service-a
    network_mode: "host"
    networks:
      main:
        aliases:
          - service-a
    ports:
      - "80:80"

  service-b:
    build: ./service-b
    network_mode: "bridge"
    networks:
      main:
        aliases:
          - service-b

  service-c:
    build: ./service-c
    network_mode: "bridge" 
    networks:
      main:
        aliases:
          - service-c
    expose: 
      - "8080"

I wasn’t aware the Supervisor adds the service name as an alias implicitly. Strip the config down to the bare minimum (i.e. remove network_mode, networks, etc.) and see where that takes you.

That is basically where we started. At first we only had “network_mode” on one of the containers - the one that needed to run in “host” mode because it serves an API.

The other containers had nothing, and name resolution did not work between them.
We then added “network_mode” of either “host” or “bridge” to all of the containers, without this resolving anything.

Now, on the latest build, according to your instructions, we will try adding this to all of the containers and see where that takes us :slight_smile:

    networks:
      main:
        aliases:
          - service-a

This configuration takes us nowhere. I have read over the docker-compose documentation once more and it states:

By default Compose sets up a single network for your app. Each container automatically joins this default network, which makes it reachable by other containers on that network, and discoverable by the hostname defined in the Compose file.

I am not able to achieve this with balena; any advice on how to proceed?

hi @wnnb. I think the aliases key is not supported on balena; we only support a subset of the compose fields, detailed here: https://www.balena.io/docs/reference/supervisor/docker-compose/#docker-composeyml-fields.

In terms of how to proceed, I would start from our basic example here: https://github.com/balena-io-examples/multicontainer-getting-started, in which we have services talking to each other by service name, via HAProxy. I would also look at removing network_mode: "bridge", as that is not needed. Also, I think that if you have one of the services on network_mode: "host", that service won’t be added to the default bridge network, so it won’t easily be able to find the others.

That was it!
Removing all “network_mode:” and “networks:” entries from the config, leaving only “ports:” for externally accessible ports and using “expose:” for additional ports that should only be reachable between the containers, solved it :slight_smile:
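
For anyone hitting the same problem later, the resulting config looks roughly like this (same builds and ports as in the earlier posts):

version: '2.1'
services:
  service-a:
    build: ./service-a
    ports:
      - "80:80"   # published on the device itself
  service-b:
    build: ./service-b
  service-c:
    build: ./service-c
    expose:
      - "8080"    # only reachable from the other containers

With this, running “curl service-a/index.html” from inside service-b resolves as expected.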

Thanks!

Awesome, glad that worked!