Client Can't Find Server In Multi-container Build

I’m trying to build a multi-container app: a frontend client based on React, using nginx to serve the static assets, and a Node API server running Express which needs to access a CAN interface via SocketCAN. I have tried various combinations of host and bridge network modes, exposing ports, etc., but the client can’t seem to find the server.

Quick changes for testing have been hampered by balena push local build times in excess of 9 minutes for the client. There seems to be a large number of files being compiled, as there is a lot of gyp output printed to the console.

The Dockerfiles are below, so here are my questions to start:

  1. My understanding is that since the server needs to access the CAN interface, it needs to run in host mode, correct? If so, how does the client make API calls to the server? localhost didn’t work. This seems different from what is discussed in the Balena Services Masterclass.

  2. How can I reduce the client build time? 10 minutes is too long for rapid iterative development. One thought I had was to pre-build the React static assets on my dev machine and then use them in the container. Is this an option?

Also, the server is currently set to port 8080, and the nginx config file is:

server {
  listen 80;
  
  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri $uri/ /index.html =404;
  }
  
  include /etc/nginx/extra-conf.d/*.conf;
}
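If the goal is for the React app to call the API through the same origin, one option (an assumption about the intended setup, not something from the original config) is to have nginx proxy API requests to the Node server. With every service on the host network, the server should be reachable from nginx at 127.0.0.1:8080, so a location block like this inside the `server { }` block above might work:

```nginx
# Hypothetical /api prefix: forward API calls to the Express server,
# which listens on port 8080 in the same host network namespace.
location /api/ {
  proxy_pass http://127.0.0.1:8080;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
}
```

The client would then fetch relative URLs such as /api/..., avoiding any hard-coded host names.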

Thanks,

Here is my docker-compose:

version: "2.0"

services:
  can-init:
    build: "./can-init"
    network_mode: host
    cap_add:
      - NET_ADMIN
    restart: "no"
    environment:
      CAN0_BITRATE: 250000
  client:
    build: "./client"
    network_mode: host
    depends_on:
      - server
  server:
    build: "./server"
    network_mode: host # expose host network adaptors to the container directly

The client Dockerfile is:

# Stage 0, "build-stage", based on Node.js, to build and compile the frontend
FROM balenalib/raspberrypi4-64-node:12-build as build-stage
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY ./ /app/
RUN npm run build

FROM arm64v8/nginx:1.21.3-alpine
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by build-stage
COPY --from=build-stage /app/nginx.conf /etc/nginx/conf.d/default.conf
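On question 2 (pre-building the React assets on the dev machine): that should be workable. A sketch, assuming `npm run build` has been run locally so that `build/` and `nginx.conf` sit next to the Dockerfile before `balena push`:

```dockerfile
# Hypothetical single-stage client Dockerfile: the static assets are built on
# the dev machine, so no Node stage (and no node-gyp compilation) runs during
# the device build.
FROM arm64v8/nginx:1.21.3-alpine
COPY build/ /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
```

The trade-off is that the assets must be rebuilt manually on every client change.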

This is the server Dockerfile:

# Build stage
FROM balenalib/raspberrypi4-64-node:12-build as builder

WORKDIR /app

RUN install_packages iproute2 can-utils

COPY . .
RUN npm install
RUN npm run clean
RUN npm run build-server

CMD ["node", "-r", "dotenv/config", "./lib/index.js"]

@westcoastdaz, For development, I would recommend using local mode to avoid having to rely on our build service as you go through various iterations and troubleshooting steps.

I’m not as familiar with CAN interfaces though, so I’m reaching out to some colleagues who might have better insight for you on that piece.

@the-real-kenna

I am using local mode to build the above multi-container solution and I’m still getting the long build times.

Thanks,

Hi Darren,

My understanding is that since the server needs to access the CAN interface, it needs to run in host mode, correct?

I haven’t used SocketCAN myself, but doing a little bit of digging:

The SocketCAN concept extends the Berkeley sockets API in Linux by introducing a new protocol family, PF_CAN, that coexists with other protocol families like PF_INET for the Internet Protocol. The communication with the CAN bus is therefore done analogously to the use of the Internet Protocol via sockets.

So it looks like you are correct, the CAN bus is accessed over the local network stack. Consequently, your application would require host mode networking to access this interface.

If so, how does the client make API calls to the server? localhost didn’t work.

Localhost, if I understand correctly, would employ IP sockets, which is not what you’re looking for. You want to create a socket using the PF_CAN protocol family, bind that to your CAN interface (should be something like can0), and read()/write() to that socket. Wikipedia has a helpful demonstration under the Usage section: SocketCAN - Wikipedia

As far as local build times go, it looks like you have two containers you’re building. Is one container in particular taking more time to build than the other? Have you looked into livepush?

@jakogut

Sorry for the delay in my reply. Ultimately, what I’m trying to build is something like this, which is from the Services Masterclass.

Except the CAN needs to be connected to the server or backend in the image above. I have the CAN working, but the frontend can’t reach the server at all from within the nginx container.

Thanks,

@jakogut

Any thoughts on my previous post?

Thanks,

Hey Darren,

Didn’t notice your ping earlier, sorry about that.

Except the CAN needs to be connected to the server or backend in the image above. I have the CAN working, but the frontend can’t reach the server at all from within the nginx container.

To clarify, you have a service that you want to talk over the CAN bus from, correct? When you say you have the CAN bus working but the frontend can’t reach the server, do you mean you have a container that’s talking over the CAN bus, but it can’t communicate with another container?

If you’re using network_mode: host, that container will not have networking handled by Docker/Balena engine, which includes DNS and exposing ports. If you want to communicate with a container using host mode networking, you need to connect to localhost, as there’s no virtual network being employed.

Additionally, a docker-compose config might help clarify some things, even if it’s just an example.