I’ve been working to containerise OpenThread Border Router (OTBR) for a project we are working on here at Dynamic Devices.
- interested in OpenThread over 802.15.4
- using the Nordic nrf52 series of devices at the moment
- we’re also looking into other devices, including silicon from NXP, but haven’t got those going yet
We’ve found that the best way to get started is with the Nordic nrf52840-DK as this has a Segger debugger onboard so you can program it using the Nordic tools.
We are also using the nrf52840 Dongle as this is cheaper than the DK board (£10-£12 versus £40-£50) and we need numbers of them to test out mesh performance.
There are a range of SDKs available from Nordic and we believe that the current and correct SDK to use is nRF Connect SDK which plugs into VSCode.
You can find more details here:
There are some standard OpenThread examples which should (I believe) be supported whichever target silicon you are using.
There is a CLI example, which is a command-line interface shell you can drive over serial comms to exercise the mesh network. You can use this with the DK “out of the box”, or you can change some compile flags to enable USB CDC serial and use it on one of the dongles.
The CLI is used by some Nordic tools, including the Thread Topology Monitor.
NOTE: There is, of course, a gotcha. The version of the CLI used by the Topology Monitor is not the same as the version you can build from the Nordic examples. For the Topology Monitor you need to use the hex file that is downloaded within the Topology Monitor installation tree. The files are on a relative path, nRF_TTM-linux-x64/hex, and you’ll need the right one for your specific silicon. The difference seems to be that in the Topology Monitor’s CLI you use the OpenThread commands directly at a > prompt, whereas the CLI example you built yourself also has other Zephyr RTOS shell commands, so you use the OpenThread CLI commands with an “ot” prefix at a uart:~$ prompt.
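To make the difference concrete, here’s roughly what the same OpenThread command looks like in each shell (the output shown is illustrative, and prompts may vary slightly between versions):

```
# Topology Monitor hex (standalone OpenThread CLI):
> state
leader
Done

# CLI sample built from the Nordic examples (Zephyr shell, "ot" prefix):
uart:~$ ot state
leader
Done
```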
There are also other tools available in the nRF Connect for Desktop application suite, including the Programmer application you will need to program the dongle.
There’s also an 802.15.4 sniffer available, nRF-Sniffer-for-802.15.4, and similarly you probably want to use the hex file that is downloaded with it, as appropriate for your silicon. You can then connect this up to Wireshark as documented here.
There is also an RCP, or Radio Co-Processor, example. This exposes a serial protocol (Spinel) which host applications use to talk to it, getting access to the Thread stack and the underlying 802.15.4 radio network.
NOTE: You need the RCP example, rather than the CLI, running on a dongle for the OpenThread Border Router to work, although there are some notes in the docs about simulating an RCP if you don’t have hardware.
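For reference, OTBR finds its RCP via a “radio URL”. Exactly how that gets passed in depends on your container setup, but the values look something like this (the simulated forkpty form is taken from the OTBR docs and is untested here; the ot-rcp path is a placeholder):

```
# Real RCP on the dongle, which enumerates as a USB CDC ACM serial device:
RADIO_URL=spinel+hdlc+uart:///dev/ttyACM0

# Simulated RCP, assuming an ot-rcp binary built for the simulation platform:
RADIO_URL="spinel+hdlc+forkpty://path/to/ot-rcp?forkpty-arg=1"
```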
This is all a bit messy, as I am figuring things out as I go along, but I have a base block which is based on the standard OpenThread Docker image and extends it a little for some bits we need.
This is used by an application I have created which is here
A couple of things:
- I am not convinced that all docker-compose.yml settings are propagated from the block through to the application configuration. This needs further work.
- The OpenThread code starts up an mDNS service which may well conflict with the Balena mDNS service. I don’t believe we need this for 802.15.4 work, as I think mDNS is for WiFi only, but this also needs further investigation.
- The OpenThread Border Router needs to talk to the RCP device, which enumerates with my Nordic part as /dev/ttyACM0. If you need something else, like /dev/ttyUSB0, then you need to change the relevant environment variable in the environment section of the application.
- The OpenThread Border Router should be routing across wlan0 in this configuration. Again, if this needs to change to e.g. eth0, you can change this in the environment section of the application.
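As a sketch of what those overrides might look like in the compose environment section (the variable names here are hypothetical — check the names the block/application actually reads):

```yaml
services:
  otbr:
    # ...
    environment:
      # Hypothetical names - use whatever the application actually reads:
      - RADIO_DEVICE=/dev/ttyUSB0   # RCP serial device
      - INFRA_IF=eth0               # interface to route across
    devices:
      - "/dev/ttyUSB0:/dev/ttyUSB0" # pass the serial device into the container
```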
- The container is privileged for now, and this needs restricting with capabilities or some such.
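An untested sketch of what that restriction might look like if the privileged flag is swapped for explicit capabilities — OTBR at minimum manipulates interfaces and routes, so NET_ADMIN and NET_RAW are the obvious candidates, plus the TUN device for the wpan0 interface:

```yaml
services:
  otbr:
    # privileged: true      # current, overly broad
    cap_add:
      - NET_ADMIN           # create/configure wpan0, add routes
      - NET_RAW             # raw sockets for ICMPv6 etc.
    devices:
      - "/dev/net/tun"      # TUN device for the wpan0 interface
      - "/dev/ttyACM0"      # the RCP dongle
```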
With all of the above I can run up the service, and I get the OpenThread Border Router web server running on port 80, which I can access through the Balena public URL.
I can FORM a mesh network, which means it’s talking correctly to the RCP dongle (which is confirmed in the logging)
I can then run up a separate CLI dongle and JOIN the network I have formed.
I can then PING an IPv4 address, e.g. 22.214.171.124, but this isn’t yet working: the packets are dropped. I have separately tested both a Docker laptop setup and a base install of OTBR on a Raspberry Pi, and I know that when it is correctly set up I do get responses to pings.
So I think my IPv6/IPv4 routing is set up incorrectly.
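A few generic Linux things worth checking inside the container or on the host when pings are dropped (these assume a fairly standard OTBR setup and aren’t Balena-specific):

```
# Is IPv6 forwarding enabled?
sysctl net.ipv6.conf.all.forwarding   # should be 1

# Does the Thread interface exist and have routes?
ip link show wpan0
ip -6 route                           # expect a mesh-local /64 via wpan0

# Are forwarded packets being dropped by the firewall?
ip6tables -L FORWARD -v -n
```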
When I add in network stanzas, which I found online, the Balena service keeps resetting, so I am trying to work out what’s going on here:
```yaml
services:
  openthread_border_router:
    ....
    networks:
      ipv6net:
        ipv6_address: 2001:3984:3989::20

networks:
  ipv6net:
    driver: bridge
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: 2001:3984:3989::/64
          gateway: 2001:3984:3989::1
```
All help very much appreciated !!!