Edge IoT Mesh Project

Hi,

New to balenaCloud, but I have been playing with a few RPis for a few days. First of all, I am very impressed by the work the balena team has put together. I have tested other OTA platforms, but none come close to the ease of getting started.

I am looking for some architectural advice specific to the project I am working on.

Basically, the project requires two or more types of nodes. Let’s think of them as SuperNodes (aka Fog Control Nodes) and regular Nodes (aka Fog Cells).

Regular Nodes are tasked with acquiring data and performing actions, while SuperNodes provide core infrastructure on the same VLAN.

SuperNodes are primarily high-RAM, high-storage devices (like an RPi 4 8GB with an SSD).
Nodes are any kind of lower-spec device, down to an RPi Zero.

If I understand balena correctly, that means one project with two or more fleets, one per node type.

I’d like the first SuperNode on a VLAN to establish the basic infrastructure, like MQTT and e.g. a local InfluxDB instance, amongst other things. In the future I can see HA requirements for at least two SuperNodes per install, which may require some level of coordination.
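To make that concrete, the rough shape I picture for the SuperNode services is something like this (a sketch only; the images, versions, and service names are placeholders, not a final design):

```yaml
# Sketch of a possible FogControl (SuperNode) docker-compose.
# Images and versions are placeholders.
version: "2.1"

services:
  mqtt:
    image: eclipse-mosquitto:2
    ports:
      - "1883:1883"

  influxdb:
    image: influxdb:1.8
    volumes:
      - influx-data:/var/lib/influxdb
    ports:
      - "8086:8086"

volumes:
  influx-data:
```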

Any regular Node that joins the VLAN should look for a SuperNode and join the local group/mesh. In most cases the Nodes will be sending sensor data into the MQTT broker.

I’d like to detect the capabilities of a Node and auto-configure as much as possible. Think various HATs that enable sensors and/or other IO. Does that require multiple fleets, one per HAT type?

Already looking at balenaDash & balenaSense as building-block patterns.

Are there any best practices to consider, or things to avoid? How do Nodes best look for a SuperNode on the local network? I have to assume DHCP and changing IP addresses on reboot. For example, is there an MQTT-based registry that works well in this environment? Are there other fog computing patterns to consider with balena?
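For the discovery part, one approach I’m considering is mDNS/zeroconf: SuperNodes advertise the broker as a service and Nodes browse for it, so nobody depends on a fixed IP. A rough sketch of the Node side, assuming the python-zeroconf package and the conventional _mqtt._tcp service type:

```python
# Sketch: discover an MQTT broker advertised over mDNS.
# Assumes SuperNodes register a "_mqtt._tcp.local." service.
import socket
from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

class BrokerListener(ServiceListener):
    def __init__(self):
        self.broker = None  # (ip, port) once found

    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info and info.addresses:
            self.broker = (socket.inet_ntoa(info.addresses[0]), info.port)

    def update_service(self, zc, type_, name):
        pass

    def remove_service(self, zc, type_, name):
        self.broker = None  # broker went away; resume searching

zc = Zeroconf()
listener = BrokerListener()
browser = ServiceBrowser(zc, "_mqtt._tcp.local.", listener)
```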

Sorry for the multiple questions - trying to frame out the basics.

Hi Oliver,

Interesting use case and thread! :slight_smile:

I’ll try and work through some of your questions:

If I understand balena correctly, that means one project with two or more fleets, one per node type.

So, you’ll need two code bases (FogControl and FogCell, let’s say), each with its own fleet on balenaCloud, running the SuperNodes and Nodes respectively. You can create each fleet with the lowest common architecture, so for instance an RPi 4 will run in an RPi Zero fleet. As long as your code runs the appropriate executables for each architecture, this will be fine:

  • If you are building services and deploying with balena push, then you’ll want to use templating in your Dockerfile (see the sketch after this list): Define a container - Balena Documentation
  • If you are using a pre-built image in your docker-compose file, then you’ll need to make sure your image supports multi-arch.
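As a minimal sketch of that templating (the base image and service layout here are illustrative, not prescriptive), a Dockerfile.template could look like this:

```Dockerfile
# Dockerfile.template: the builder substitutes %%BALENA_MACHINE_NAME%%
# per fleet, so one codebase builds against the right base image
# on every device type.
FROM balenalib/%%BALENA_MACHINE_NAME%%-python:latest

WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . ./

CMD ["python", "main.py"]
```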

As an example, our sensor block has a multi-arch manifest added to the pre-built images, and therefore supports armv7 and aarch64 as you can see in the tags here:
https://hub.docker.com/layers/balenablocks/sensor/latest/images/sha256-2758edba618849453f53a24433c4fc93fd451b56e4a9ff546a3df72449704951?context=explore

If you were to use this in your docker-compose file, you would just reference the latest tag, and the appropriate image will be pulled for each device in your fleet:
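Something along these lines (a compose excerpt; the privileged flag is there because the block probes the hardware directly):

```yaml
# docker-compose excerpt: referencing the multi-arch sensor block;
# the engine resolves the right architecture from the manifest.
version: "2.1"

services:
  sensor:
    image: balenablocks/sensor:latest
    privileged: true  # raw hardware access for sensor probing
```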

I’d like to detect the capabilities of a Node and auto-configure as much as possible. Think various HATs that enable sensors and/or other IO. Does that require multiple fleets, one per HAT type?

This is an architectural decision for you to make, IMO. If there is enough commonality between different node types, then it may make sense to add support for all the different HW configurations, and then attempt to dynamically use the attached HW on a specific node. Again, you can see something akin to this in our sensor block linked above: it tries to detect which sensors are attached to the device and loads them dynamically.
If there is a wide difference between HW configurations, or it’s not possible to dynamically load the correct configuration, then this would probably necessitate a separate fleet for each codebase.
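To illustrate the dynamic-detection idea (a hypothetical sketch, not the sensor block’s actual code; the address map is purely an example), a service could probe the I2C bus at startup and only load drivers for whatever answers:

```python
# Hypothetical sketch: probe the I2C bus and report which known
# sensors respond, so the service can load only those drivers.
from smbus2 import SMBus

# Example address map: your HATs will differ; illustrative only.
KNOWN_SENSORS = {0x76: "bme280", 0x48: "ads1115"}

def detect_sensors(bus_number=1):
    found = {}
    with SMBus(bus_number) as bus:
        for addr, name in KNOWN_SENSORS.items():
            try:
                bus.read_byte(addr)  # a present device ACKs the read
                found[addr] = name
            except OSError:
                pass  # nothing listening at this address
    return found

if __name__ == "__main__":
    print("Detected:", detect_sensors())
```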

Hope this helps,
Phil

Phil,

Much appreciated. I have some tiny beginnings running on two fleets that share the same codebase for now. Using a dynamic framework, I can load a lot of extensions at runtime.
Only when extra containers are needed might I have to truly separate controller nodes from regular ones.

Is there a way to decide which containers to start up dynamically at the node level? E.g. if a controller starts up and cannot find an MQTT broker on the local VLAN, it would decide to start and register its own instance, preferably as a dedicated container on its node.

Same for a controller node that detects lots of local fast storage: it could spin up an InfluxDB container and register it for the rest of the fleet.
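Roughly what I have in mind, assuming the optional services ship in the fleet’s docker-compose file and the balena Supervisor API starts them on demand (this needs the io.balena.features.supervisor-api label on the calling service; the "mqtt" service name and the mqtt.local hostname are placeholders):

```python
# Sketch: start an optional compose service at runtime via the
# balena Supervisor API. Assumes io.balena.features.supervisor-api
# is set on this service and a service named "mqtt" exists in the
# fleet's docker-compose. "mqtt.local" is a placeholder hostname.
import os
import socket
import requests

SUPERVISOR = os.environ["BALENA_SUPERVISOR_ADDRESS"]
API_KEY = os.environ["BALENA_SUPERVISOR_API_KEY"]
APP_ID = os.environ["BALENA_APP_ID"]

def broker_reachable(host="mqtt.local", port=1883, timeout=2):
    """Return True if something already answers on the broker port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def start_service(name):
    """Ask the local Supervisor to start a compose service by name."""
    resp = requests.post(
        f"{SUPERVISOR}/v2/applications/{APP_ID}/start-service",
        params={"apikey": API_KEY},
        json={"serviceName": name},
    )
    resp.raise_for_status()

if __name__ == "__main__":
    if not broker_reachable():
        start_service("mqtt")  # become the broker for this VLAN
```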

I’m even looking at migrating different microservices to other nodes after the fact, as they become available.

Thanks!

Oliver