How to rsync over the host's wlan from a container

My host has this local wlan0 IP address that I would like to rsync data over:

3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
    link/ether b8:27:eb:b6:29:fb brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.1/24 brd 10.42.0.255 scope global wlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::f252:6354:809c:e249/64 scope link 
       valid_lft forever preferred_lft forever

I have a container running on that same host that I would like to run the rsync command in. How do I give my container access to that IP address?

Inside the container, ip a only shows:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
153: eth0@if154: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

I know that overlay networks are not implemented in balena-engine. Is there another way to give my container access to the host network?


Some additional background:

I am trying to get files off of my deployed devices in the field.

My current solution is to store the files on an SSD and unplug the SSD when I need the files. The weakness is that my most recent batch of in-field devices is not easily accessible.

With this system, I can bring a Raspberry Pi Wi-Fi hotspot into the vicinity of one of my deployed devices. The deployed device will then automatically connect to my hotspot, and I can get files off it with rsync.

Ideally, rsync would run automatically inside the container on the Raspberry Pi hotspot whenever it detects that a deployed device has connected.

Perhaps there is a better way to get access to files on deployed devices; I would love to hear about alternatives.

This is a very good question, as many automation questions are! :slight_smile:

One way to progress with this is to set network_mode: host on the container that does the outside communication, so it can use all of the host's networking (and see all of its interfaces). The downside is that communication with other containers through their service names (as hostnames) is not possible then, but it doesn't seem like your solution needs that. You can read more about host networking in the Docker documentation.
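As a minimal sketch of that setup (the service name and build path here are placeholders, not taken from your project):

```yaml
version: '2'
services:
    testTransfer:
        build: ./bin
        # Share the host's network namespace: wlan0 and its
        # 10.42.0.1 address become visible inside the container.
        network_mode: host
```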

You can also start an SSH server on the device, on a custom port, running inside the container that has the files you want to share, and run rsync through that (if I recall correctly, rsync just uses an SSH connection by default?).
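As a sketch of that rsync-over-SSH step (the device address, port, and paths below are assumptions for illustration, not values from this thread), the invocation could be built like this:

```shell
# Hypothetical helper: print the rsync command for a device reachable at
# $host whose sshd listens on the custom port $port. Printing it instead
# of running it keeps the sketch testable without a live device.
build_rsync_cmd() {
  local host="$1" port="$2"
  # -a: archive mode, -v: verbose, -z: compress; -e selects the remote
  # shell, which is where the custom SSH port goes.
  echo "rsync -avz -e \"ssh -p ${port}\" root@${host}:/app/data/run/ ./retrieved/"
}

build_rsync_cmd 10.42.0.42 53242
```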

It might get a bit convoluted like this if it's all automatic, but these pieces should take you a bit forward. Or maybe someone has a better idea / simpler solution. :slight_smile:
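For the automatic-trigger part, one sketch (all names and values here are assumptions, not a tested balena recipe) is to poll the kernel's neighbour table on wlan0 from the hotspot container and hand every reachable client IP to rsync. The parsing step is pure, so it's shown separately:

```shell
# Extract the IPs of clients the kernel currently considers REACHABLE.
# Expects the output of `ip neigh show dev wlan0` on stdin, e.g.:
#   10.42.0.42 lladdr b8:27:eb:00:00:01 REACHABLE
parse_reachable() {
  awk '$NF == "REACHABLE" { print $1 }'
}

# The surrounding loop would look roughly like this (illustrative only):
#   while sleep 10; do
#     ip neigh show dev wlan0 | parse_reachable | while read -r ip; do
#       rsync -avz "root@${ip}:/app/data/run/" "./retrieved/${ip}/"
#     done
#   done

printf '%s\n' '10.42.0.42 lladdr b8:27:eb:00:00:01 REACHABLE' | parse_reachable
```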

Hi! Thank you for the response :slight_smile:

I did try network_mode: host, but the build seemed to run indefinitely. Is that expected behavior, or is that a bug?

Edit: including my docker-compose.yml in case it helps:

version: '2'
volumes:
    dgRun:
services:
    testTransfer:
        hostname: testTransfer
        build:
          context: ./bin
          dockerfile: Dockerfile.template
        expose:
          - "8080"
        network_mode: host
        ports:
          - "80:80"
          - "53242:873"
        volumes:
          - dgRun:/app/data/run:rw
        devices:
          - "/dev/USB0:/dev/USB0"
        restart: "no"
        command: /bin/bash
        privileged: true
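One thing worth noting about the compose file above: with network_mode: host the container shares the host's network namespace, so the ports: and expose: entries have no effect (the service binds directly on the host's interfaces). The networking-related part could be trimmed to something like:

```yaml
services:
    testTransfer:
        network_mode: host
        # `ports:` and `expose:` dropped: with host networking no port
        # mapping is applied, the container binds host ports directly
```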

EDIT:

Nevermind. It worked no problem this time. Not sure what changed. Thank you!

Hello, this seems like a bug.
Just changing the network_mode to host in the docker-compose.yml file makes the build hang?
Can you share the output of the git push or balena push command?

myuser :: myco/transferDataHotspot/config » git add --all && git commit -m "Try network_mode: host" ; git push -f transferDataHotspot master:master 
[master 91375c6] Try network_mode: host
 1 file changed, 1 insertion(+)
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 324 bytes | 324.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)

[Info]          Starting build for transferdatahotspot, user User
[Info]          Dashboard link: https://dashboard.balena-cloud.com/apps/1504045/devices
[Info]          Building on arm01
[Info]          Pulling previous images for caching purposes...
[Success]       Successfully pulled cache images
[testTransfer]  Step 1/2 : FROM balenalib/raspberrypi3:build
[testTransfer]   ---> feec859e20a3
[testTransfer]  Step 2/2 : RUN install_packages rsync
[testTransfer]  Using cache
[testTransfer]   ---> a539c25c4bc6
[testTransfer]  Successfully built a539c25c4bc6
[Info]          Still Working...
[Info]          Still Working...
[Info]          Still Working...
[Info]          Still Working...
[Info]          Still Working...
^C%                  

That seems like a build issue, is it reproducible? We’ll try it out here as well, that’s a very simple step indeed.

Using the docker-compose.yml from above and extracting your Dockerfile.template from the logs, the build worked for me. I suspect some transient issue, and will check with our builder maintainers, just in case.

No, it doesn't give me the error anymore, even when I go back to the exact same commit.

Thanks for the feedback. I think it was a temporary issue; I will bring it up with our team to look into it on the builder's side.

Thanks so much for looking into that. Not sure what caused it; as a rule, I should remember to retry builds a few times when they fail. :slight_smile:

Well, we still prefer it when builds don't fail, but yes, retries help! Please let us know whenever you hit any issues, though. I'm checking with the team about the builder in the meantime anyway.
