DBus balena block configuration

I am trying to get up and running with the dbus balena block, but am facing some issues.

We have a dbus config file that needs to be placed at /etc/dbus-1/system.d - and currently that is done using a custom device type, Nebra-HNT.

With the new dbus block, and because we want to add support for other hardware (ideally without needing a custom device type for each), we wanted to get the dbus balenaBlock running.

We are working on a customised container to bring in the custom config file: https://github.com/NebraLtd/hm-dbus

The issues I’m having are as follows:

Firstly, when running with DBUS_SYSTEM_BUS_ADDRESS=tcp:host=dbus-system,port=55884 I get an error saying “no address associated with hostname dbus-system:55884”

However, when running using UNIX domain sockets, I get the error “/var/run/dbus/pid already exists”. Deleting this file gets the dbus-system setup working, but the other containers still aren’t able to connect to it and pick up the custom dbus config file - and it’s not ideal to have to delete the file every time. Maybe the removal of this file from the persistent storage can be incorporated into the entry.sh script?

Additionally, using this implementation breaks BlueZ functionality - I get error messages saying “The name org.bluez was not provided by any .service files”. Adding the bluetooth.conf file directly to this container does not work either… it then shows the message “unknown group ‘Bluetooth’ in message bus configuration file”.

Do you have any tips or examples of the dbus block in an actual project?

Essentially all we want to do is inject this one custom dbus config file into the host OS but leave all other dbus setup the same.

Hey Aaron,

Firstly, when running with DBUS_SYSTEM_BUS_ADDRESS=tcp:host=dbus-system,port=55884 I get an error saying “no address associated with hostname dbus-system:55884”

Are you using network_mode: host by chance? I don’t believe hostname resolution works with this setting. If you are using it, you would need to connect to localhost at the exposed port instead of the hostname of the container running your DBus instance. It might help if you could share your docker-compose file.
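To make that concrete, here is a minimal docker-compose sketch (the image name, service names, and build path are assumptions on my part; the port is the block’s default mentioned in this thread). With host networking on the client, it dials localhost rather than a service hostname:

```yaml
version: "2.1"
services:
  dbus-session:
    image: balenablocks/dbus    # image name is a guess; use the block you deploy
    ports:
      - "55884:55884"           # publish to the host, not just expose

  gateway-config:
    build: ./gateway-config     # hypothetical client container
    network_mode: host          # host networking (e.g. for BLE access)
    environment:
      # No container DNS under host networking, so dial localhost:
      - DBUS_SESSION_BUS_ADDRESS=tcp:host=localhost,port=55884
```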

However when running using UNIX domain sockets, I get the issue “/var/run/dbus/pid already exists”.

It sounds like you’re using the system config, which specifies the pidfile at that location. Can you try the session config instead (just leave the DBUS_CONFIG var unset - it’s the default)? Alternatively, just remove the pidfile line from the config; it’s really not needed in a container.
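If you do want to keep the system config, a one-line guard at the top of the block’s entry.sh (as suggested above) is enough. A sketch, where the DBUS_PIDFILE override is hypothetical and only the default path comes from the error message in this thread:

```shell
#!/bin/sh
# Remove a stale pidfile left in the persistent volume before starting
# dbus-daemon. The default path matches the "/var/run/dbus/pid already
# exists" error; DBUS_PIDFILE is a made-up override for illustration.
PIDFILE="${DBUS_PIDFILE:-/var/run/dbus/pid}"
rm -f "$PIDFILE"    # safe even if the file does not exist
# exec dbus-daemon --system --nofork   # the real entrypoint would follow
```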

Additionally using this implementation breaks functionality of bluez - I get error messages saying The name org.bluez was not provided by any .service files.

BlueZ is not broken; it’s just not connected to your bus. BalenaOS runs an instance of BlueZ by default that connects to the host OS system bus, but you can stop this service (using the systemd DBus interface, by connecting to the host OS DBus instance) and run your own instance of BlueZ connected to your own bus in a container if you choose. Think of DBus as a daemon that provides an IPC mechanism; it’s not a singleton. You can run ten busses on a single device, with any combination of services connected to each one. If you want to talk to BlueZ, you need to talk to the bus it’s connected to. If you want BlueZ to be accessible on your containerized bus, you need to stop the host OS instance to free up access to the hardware, and run it yourself connected to your bus.
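As a sketch of the “stop the host BlueZ” step: the socket path assumes the io.balena.features.dbus label is set so the host socket is mounted, and StopUnit with those two arguments is the standard systemd manager API.

```shell
# Stop the host OS bluetooth.service via systemd's DBus API, freeing the
# adapter so a containerized BlueZ instance can claim it.
DBUS_SYSTEM_BUS_ADDRESS=unix:path=/host/run/dbus/system_bus_socket \
  dbus-send --system --print-reply \
    --dest=org.freedesktop.systemd1 /org/freedesktop/systemd1 \
    org.freedesktop.systemd1.Manager.StopUnit \
    string:bluetooth.service string:replace
```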

Additionally, you likely don’t need the locked down system bus configuration for your application. The session and system busses work identically, but the system bus has a configuration that prohibits arbitrary processes from owning names (to prevent a random, unauthorized process running on your machine from, for example, doing a MITM on systemd, or some other important process exposing a systemd interface). In a container with restricted access to the bus socket, this is almost certainly not an issue.

If you need to debug a bus, or see what’s connected to it, I recommend using dbus-send.
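For instance, listing everything connected to a bus is a single call - point DBUS_SESSION_BUS_ADDRESS at whichever socket you’re debugging first (this is a generic sketch, not specific to the block):

```shell
# Ask the bus daemon which names are currently connected; a service that
# "isn't working" usually just isn't in this list.
dbus-send --session --print-reply --type=method_call \
  --dest=org.freedesktop.DBus /org/freedesktop/DBus \
  org.freedesktop.DBus.ListNames
```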

We do have a demonstration project that runs several services required by the Helium gateway-config application, communicating over DBus. This includes connman and BlueZ. You can see it here: https://github.com/balena-io-playground/balena-gateway-config

You can also see another great example of socket communication over shared volumes with the xserver block and my balena-steam project. The latter example is a work in progress - pulse doesn’t do anything presently, but the xserver block works.

Let me know if this helps, or if I can clarify anything.


By the way, in case it wasn’t clear, your config to allow that service to own the com.helium.Miner name is no longer necessary if you use the session config when you start your bus. You can still point applications expecting a “system” bus to it, they work the same.

Also, I’ve done some work on a blog post explaining all of this. It’s not complete yet, but it should clarify a lot.


@jakogut that is awesome - thanks so much for the super detailed reply.

Yep - was using host networking…so that explains that.

We are using SystemBus() calls in Python, which is why I went for the system bus, however it sounds like perhaps this is not necessary and we should switch to SessionBus() instead.

For the multiple dbus setups - is that true for both the system bus and session bus? You can run multiple system buses as well?

FYI here is the docker-compose.yml

We are using SystemBus() calls in Python, which is why I went for the system bus, however it sounds like perhaps this is not necessary and we should switch to SessionBus() instead.

So by default, the system and session busses are configured differently (more strict on the system bus), and they have default UNIX socket paths at which they can be connected. However, the DBus specification [0] requires clients to allow their system/session bus addresses to be overridden using the DBUS_SYSTEM_BUS_ADDRESS and DBUS_SESSION_BUS_ADDRESS environment variables. Consequently, you can start up a bus instance with a session config (or any config you want, really) and tell clients their system or session bus is that socket. Telling your application to create a connection to the system or session bus doesn’t really do anything besides defaulting to a different connection address.

[0] D-Bus Specification
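Concretely, the override is just an environment variable, and both can even point at the same bus (the socket path below is illustrative):

```shell
# Point clients that ask for either the "system" or the "session" bus at
# one containerized bus instance. The socket path is hypothetical.
export DBUS_SYSTEM_BUS_ADDRESS="unix:path=/shared/session_bus_socket"
export DBUS_SESSION_BUS_ADDRESS="unix:path=/shared/session_bus_socket"
# Any DBus client started from this shell (dbus-send, python-dbus
# SystemBus()/SessionBus(), ...) now dials that socket.
```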

By the way, I’m happy I could help. I spent a good deal of time making sense of all of this, so if I can help clarify anything, feel free to ask.

I just realized the balena-gateway-config repo I linked was out of date. The most recent code was in a branch, so I’ve merged that to master. Take another look for an updated example using unix sockets over shared volumes, and less privileged containers.

@jakogut I have almost got this working, but still having one issue.

Getting connection refused to localhost:55884 for dbus…see below:

 gateway-config  Traceback (most recent call last):
 gateway-config    File "/opt/gatewayconfig/bluetooth/characteristics/add_gateway_characteristic.py", line 53, in WriteValue
 gateway-config      miner_bus = dbus.SessionBus()
 gateway-config    File "/opt/venv/lib/python3.7/site-packages/dbus/_dbus.py", line 213, in __new__
 gateway-config      mainloop=mainloop)
 gateway-config    File "/opt/venv/lib/python3.7/site-packages/dbus/_dbus.py", line 102, in __new__
 gateway-config      bus = BusConnection.__new__(subclass, bus_type, mainloop=mainloop)
 gateway-config    File "/opt/venv/lib/python3.7/site-packages/dbus/bus.py", line 124, in __new__
 gateway-config      bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
 gateway-config  dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket "localhost:55884" Connection refused
 gateway-config  2021-09-23 03:41:29;DEBUG;Read Add Gateway

And here is the docker-compose (not sure if it is relevant, but I am running the miner on the session bus and bluetooth on the system bus):

And this is the add gateway characteristic python script:

You can ignore my last message - I realised I needed to expose the port for dbus due to the host networking.

Thanks for all your help :+1:


Glad that you made it run @shawaj - good job!

yep - means we don’t need a custom device type anymore - and we can support other manufacturers’ devices on balena out of the box. Pretty sweet!

Just for future reference, I had to add the dbus-wait.sh script to the miner, config, and diagnostics containers so that they only start after the dbus container is ready.
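The exact dbus-wait.sh isn’t shown in the thread, but assuming the bus is reachable over TCP as above, a minimal version can just poll the port (the function name is made up; python3 is used for the connection check so the snippet stays POSIX-sh compatible):

```shell
#!/bin/sh
# Block until a TCP port accepts connections, then let the real service
# start. Host/port would be the dbus container's, e.g. localhost 55884.
wait_for_port() {  # usage: wait_for_port HOST PORT [RETRIES]
  tries="${3:-30}"
  while [ "$tries" -gt 0 ]; do
    if python3 -c "import socket,sys; socket.create_connection((sys.argv[1], int(sys.argv[2])), timeout=1)" "$1" "$2" 2>/dev/null; then
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}
# wait_for_port localhost 55884 && echo "DBus is now accepting connections"
```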


Actually, I seem to be getting an error when trying to read diagnostics through the Helium app @jakogut

 gateway-config  2021-09-23 14:28:22;DEBUG;Read diagnostics
 gateway-config  DEBUG:gatewayconfig:Read diagnostics
 gateway-config  2021-09-23 14:28:22;DEBUG;Diagnostics miner_bus
 gateway-config  DEBUG:gatewayconfig:Diagnostics miner_bus
 gateway-config  2021-09-23 14:28:22;ERROR;Unexpected exception while trying to read diagnostics
 gateway-config  Traceback (most recent call last):
 gateway-config    File "/opt/gatewayconfig/bluetooth/characteristics/diagnostics_characteristic.py", line 32, in ReadValue
 gateway-config      self.p2pstatus = self.get_p2pstatus()
 gateway-config    File "/opt/gatewayconfig/bluetooth/characteristics/diagnostics_characteristic.py", line 42, in get_p2pstatus
 gateway-config      miner_bus = dbus.SessionBus()
 gateway-config    File "/opt/venv/lib/python3.7/site-packages/dbus/_dbus.py", line 213, in __new__
 gateway-config      mainloop=mainloop)
 gateway-config    File "/opt/venv/lib/python3.7/site-packages/dbus/_dbus.py", line 102, in __new__
 gateway-config      bus = BusConnection.__new__(subclass, bus_type, mainloop=mainloop)
 gateway-config    File "/opt/venv/lib/python3.7/site-packages/dbus/bus.py", line 124, in __new__
 gateway-config      bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
 gateway-config  dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket "localhost:55884" Connection refused
 gateway-config  ERROR:gatewayconfig:Unexpected exception while trying to read diagnostics
 gateway-config  Traceback (most recent call last):
 gateway-config    File "/opt/gatewayconfig/bluetooth/characteristics/diagnostics_characteristic.py", line 32, in ReadValue
 gateway-config      self.p2pstatus = self.get_p2pstatus()
 gateway-config    File "/opt/gatewayconfig/bluetooth/characteristics/diagnostics_characteristic.py", line 42, in get_p2pstatus
 gateway-config      miner_bus = dbus.SessionBus()
 gateway-config    File "/opt/venv/lib/python3.7/site-packages/dbus/_dbus.py", line 213, in __new__
 gateway-config      mainloop=mainloop)
 gateway-config    File "/opt/venv/lib/python3.7/site-packages/dbus/_dbus.py", line 102, in __new__
 gateway-config      bus = BusConnection.__new__(subclass, bus_type, mainloop=mainloop)
 gateway-config    File "/opt/venv/lib/python3.7/site-packages/dbus/bus.py", line 124, in __new__
 gateway-config      bus = cls._new_for_bus(address_or_type, mainloop=mainloop)
 gateway-config  dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket "localhost:55884" Connection refused
 gateway-config  ERROR:dbus.service:Unable to append (None,) to message with signature ay: <class 'TypeError'>: 'NoneType' object is not iterable

I also get a similar error when the container starts (using the dbus-wait.sh script mentioned above):

Failed to open connection to "session" message bus: Failed to connect to socket "localhost:55884" Connection refused
 gateway-config  DBus is now accepting connections

The ports are exposed, and the miner and diagnostics containers seem to be able to communicate with dbus fine.


I originally had:

    expose:
      - "55884"

But changing to:

    ports:
      - "55884:55884"

Seems to fix things.

Is there a reason I’m missing why ports works and expose doesn’t?

Hey Aaron,

It looks like this is because expose opens a port for other containers to access, without opening it for the host, but ports maps ports from the container to the host. The host needs access to the port when you use network_mode: "host" because you no longer have any network isolation.

For future reference, it’s actually a lot easier (and safer, and faster) to expose local services over UNIX domain sockets when possible, and expose the socket to other containers through a shared volume. This removes overhead from the network stack (UNIX domain sockets have ~50% greater throughput), simplifies access, and removes the need for configuring port mappings.
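A shared-volume setup might look like the compose sketch below. The volume name and socket path are illustrative, the image name is a guess, and the variable used to set the block’s listen address is an assumption - check the block’s README for the real one. The balena-gateway-config repo linked above shows a complete working layout.

```yaml
version: "2.1"
volumes:
  dbus-socket: {}

services:
  dbus-session:
    image: balenablocks/dbus    # image name is a guess
    environment:
      # Listen-address variable name is an assumption:
      - DBUS_ADDRESS=unix:path=/run/shared/session_bus_socket
    volumes:
      - dbus-socket:/run/shared

  gateway-config:
    build: ./gateway-config     # hypothetical client container
    volumes:
      - dbus-socket:/run/shared
    environment:
      - DBUS_SESSION_BUS_ADDRESS=unix:path=/run/shared/session_bus_socket
```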


Is it possible to have the system bus listening on unix:path=/host/run/dbus/system_bus_socket using io.balena.features.dbus: '1' and the session bus listening on, say unix:path=/shared/volume/session_bus_socket using the dbus balenaBlock and a shared volume?

And just to clarify, you are saying that the performance of that will be better than the performance of having it on the standard tcp:host=localhost,port=55884 that’s the default in the dbus balenaBlock?

Hey Aaron,

Is it possible to have the system bus listening on unix:path=/host/run/dbus/system_bus_socket using io.balena.features.dbus: '1'

Yes, but I’ll also clarify that this only gives you access to connect to the already running host OS system bus, without any policies for owning names. This is most useful for using the built-in BlueZ service (if that works for you), or managing running services through the systemd DBus API, etc.

and the session bus listening on, say unix:path=/shared/volume/session_bus_socket using the dbus balenaBlock and a shared volume?

Yes, you can do this. You can also have two system busses, one running on the host OS, one running containerized. Refer again to the balena-gateway-config application for an example on this. Specifically, the connman container connects to the host OS system bus to talk to NetworkManager and unmanage a specific network interface so that connman can manage it instead. Likewise, the BlueZ container connects to the host OS system bus to talk to systemd and stop bluetooth.service, so the containerized BlueZ instance can manage that hardware.

And just to clarify, you are saying that the performance of that will be better than the performance of having it on the standard tcp:host=localhost,port=55884 that’s the default in the dbus balenaBlock?

Yes, UNIX domain sockets aren’t routed, so the overhead is lower compared to connections over the loopback interface. Additionally, they are subject to filesystem permissions, which allows more fine-grained control over which users can open and use them. Generally speaking, any services that are only used on the same machine should prefer to communicate over domain sockets rather than IP sockets over the loopback interface.


That is working well now, thanks very much!

The only issue I now have, which is somewhat unrelated is:

 gateway-config  ERROR:dbus.proxies:Introspect error on :1.6:/: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NotSupported: org.freedesktop.DBus.Introspectable.Introspect

Hey Aaron,

What call are you making when you get this error?

Hey @shawaj did you solve this issue?