Development workflows - mount a folder from your device to your local system for development and testing

Hi balena family,

There has been lots of discussion going on around different development and testing workflows. It is difficult to solve with a single approach because there are so many different ways to develop, and so many languages to develop in.

Based on some of the different conversations going on around the forums, I thought I would push an NFS server option to the Hub that allows mounting a folder from your device on your local computer (your Mac, for example). It lets you edit on-device code as if it were on your local computer; you then simply run the command on the device to test it in the balena device environment, without having to keep pushing or rebuilding the container.
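
For anyone who wants a feel for the local side of it, a minimal sketch of mounting the export on a Mac might look like the following (the hostname and export path are illustrative and depend on how the block is configured):

    # Mount the device's NFS export onto a local folder (macOS example)
    mkdir -p ~/balena-dev
    sudo mount -t nfs -o resvport,rw mydevice.local:/exports/dev ~/balena-dev
    # ...edit files in ~/balena-dev locally, run them from a shell on the device...
    sudo umount ~/balena-dev    # when finished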

It’s certainly not a solution for all the different workflows, but I have been using it over the last week while working in Python and it has proved quite useful. It isn’t an officially supported solution, just another community approach to discuss and to help us understand how the different development workflows can evolve.

It would be great to hear ideas on other workflows that make development easier, both related to this NFS approach and beyond it.

Shout out to Volkov Labs, who found this effective method of running NFS servers and clients on device and put together the solution for sharing mounts between containers: balenaHub: an easier way to find and publish fleets, apps, and blocks for edge devices


Well done @maggie0002! The NFS approach works really well and I would recommend using it for development, testing, and sharing files between containers.


Great to know, @Mikhail. Even a one-line message like that helps show whether there is any practical use for others, and it’s a reminder/boost to keep it on my radar and try to expand the approach where we can.

Maybe I should have a go at a rough mapping of scenarios. If people can develop effectively off-device, I suspect they would, as it is more responsive. I know I would.

Leaving scenarios where we have to develop on device, which are:

(a) working with peripheral hardware (sensors, etc.);
(b) working on things that interact with the core hardware (e.g. WiFi, the balena Supervisor);
(c) running tests.

Then potential development flows:

  • Compiled or bundled code, such as UIs that need Webpack or a similar service to serve a development environment on a port with hot reload
  • Languages like Python, where there isn’t a service that needs to keep running, but the running script has to be stopped and started again for changes to take effect
  • Scenarios where the container itself may need to be restarted (I can’t think of any offhand, so it’s probably quite niche)

In some ways the NFS idea can help with all of those: you could put the UI code in your local NFS folder, then go to the device and start Webpack (although the number of files in a node_modules folder may be a little sluggish to copy across). Python is also an improvement, in that you can quickly change code and then manually stop and restart the service. For some reason, though, it still feels like only half a solution.
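
As an example of smoothing the Python case, something as simple as a watch-and-restart loop on the device could remove the manual stop/start step. A rough sketch, assuming inotify-tools is available in the container and using illustrative paths:

    # Restart the script whenever a file in the shared folder changes
    while true; do
        python3 /nfs/dev/app/main.py &
        APP_PID=$!
        inotifywait -r -e modify,create,delete /nfs/dev/app
        kill "$APP_PID" 2>/dev/null
    done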

If anyone can think of scenarios outside of those listed, let me know; it will help in thinking it through. And any ideas on how to make it more fluid would of course be great to hear too.

Not forgetting, of course, the built-in development flow of the CLI, which facilitates development on a device to an extent, but does so by rebuilding and restarting the container and its hot-reload service for a change to a file.
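
For reference, that flow looks roughly like this (standard balena CLI commands; the device name is illustrative):

    balena scan                    # find local-mode devices on the network
    balena push mydevice.local     # build on the device and livepush changes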


Hypothetical workflow using a CLI:

  • Move the NFS server to the local system (i.e. your Mac). Wrap it in something I worked on recently for another project, which scans for devices on the network (GitHub - balena-community/go-cli: balena CLI in Golang). The server could be contained entirely within the Go CLI, so there would be no need for dependencies like FUSE, and it would be cross-platform (Windows, Mac, etc.), at least based on what I initially read about a Go NFS implementation I found.
  • The go-cli above already has the logic for connecting to the devices it finds via the Docker socket, so it could connect to the device, then pull and run an NFS client that connects back to the server running on your Mac.
  • Once the NFS share is connected, the CLI starts a terminal session attached to the container that was just started on the device.

That could result in your locally cloned repo being visible on the device, letting you run code on the device while manipulating it entirely locally, all from a CLI. We could also include some flags for how that container should be started, in terms of which labels/permissions it should have to replicate the environment you want (--privileged, mounting kernel modules, and so on).
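
To make that concrete, the on-device side might boil down to something like the following, run by the CLI over the device's Docker socket (a rough sketch only; the laptop IP, export path, and package choice are assumptions):

    # Start a throwaway container on the device that mounts the laptop's NFS export
    docker run --rm -it --privileged alpine:latest sh -c '
        apk add --no-cache nfs-utils &&
        mkdir -p /mnt/project &&
        mount -t nfs -o nolock 192.168.1.50:/exports/project /mnt/project &&
        sh'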

I do wonder how performance would be affected, though, if the device has to pull every file over the network each time it reads one (think node_modules folders), as opposed to the device running the server and already having all the code on it.

Interesting food for thought. Unknowns aside, and considering the components already built, it wouldn’t actually be too big a lift.

Update: performance is way too slow with the server local and the client on device. Back to the drawing board.

Ok, last stab at an idea for now.

How about a dev quick start? Through a CLI you scan for dev devices on the network, then run something like cli -quickstart devicename.local alpine:latest; it pulls the specified image to your device (with kernel mounts etc. as specified through flags), starts the NFS server inside it, mounts it locally on your system, and opens a terminal at the root of that container.

It would still be too much to copy large amounts of content, but it would be great for testing Python and other small scripts. I imagine my workflow would then be to git clone my project on the device, which would then be visible to me locally in the mounted NFS folder, and work from that.
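
Under the hood, the steps such a quickstart would wrap might look roughly like this (purely illustrative: it assumes a development-mode device with its engine reachable on port 2375, and the NFS server step is only hinted at in the comments):

    DEVICE=devicename.local
    export DOCKER_HOST=tcp://$DEVICE:2375
    # pull and start the requested image on the device, kept alive for interactive use
    docker run -d --name devbox --privileged alpine:latest sleep infinity
    # (an NFS server exporting devbox's working directory would be started here)
    # mount that export locally, then drop into a shell in the container
    mkdir -p ~/devbox
    sudo mount -t nfs -o resvport "$DEVICE:/exports/devbox" ~/devbox
    docker exec -it devbox sh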

It goes a little way towards improving the UX of the NFS mount option, but not so far that it becomes prescriptive. Perhaps new ideas could emerge from that.

I will leave the idea out here to simmer for a while, and if there is interest from people then I could add it to the list.

Hi Maggie,

Great post as always :slight_smile:

Just chiming in with the workflow I’m using.
My projects are generally done in Java and usually involve multiple peripherals plus a connection to a server to obtain configuration and send measurements.

In general, I try to (unit) test my code locally before going through the hoops to run it on the target hardware. This works to some degree, but completely misses all the timing issues you get from running a bunch of different peripherals at the same time.

At the moment, when I want to debug something that’s in development, I run my container with the balena-idle command and then either push a new jar through livepush or pull it via SFTP.
I’ve added an environment variable to launch it with debug parameters: ENV JAVA_DEBUG="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=0.0.0.0:1234".

Where I usually just run java -jar ${app_jarfile}, I use java ${JAVA_DEBUG} -jar ${app_jarfile} for debugging, and then connect to the process with the remote Java debugger in Eclipse.
This requires me to configure the IP address of the device in the debug/launch configuration manually.
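
So on the device, inside the otherwise idle container, the launch ends up looking something like this (the jar file variable is the same placeholder as above):

    # Normal run
    java -jar ${app_jarfile}
    # Debug run: the JVM listens on port 1234 and waits for the debugger to attach
    JAVA_DEBUG="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=0.0.0.0:1234"
    java ${JAVA_DEBUG} -jar ${app_jarfile}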

In general, my debugging cycle requires these steps:

  1. Find the IP of the device to use;
  2. Configure the device not to run the production executable, but be ready for debugging;
  3. Deploy the newly built executable (~20MB for my larger project);
  4. Run the executable with debug parameters;
  5. Configure and launch the debugger.

Steps 2 and 4 could be automated by simply changing the Dockerfile to call the executable with the debug parameters, instead of waiting for me to do it manually; though I kind of like the device being idle when not actively debugging, as it also prevents annoying restart cycles when something gets messed up.

For step 3, livepush seems to work okay most of the time, though it sometimes breaks; NFS could help there.

I’m fairly certain that step 5 could be made easier by setting up local port forwarding to the device in question. This would effectively move the re-configuration part outside of the IDE.
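
One way to do that forwarding could be the balena CLI’s tunnel command (mentioned later in the thread); something along these lines, using the JDWP port from the setup above (flags worth double-checking against the CLI docs):

    # Forward the device's debug port 1234 to localhost:1234, then point the
    # Eclipse remote debugger at localhost:1234
    balena tunnel <device-uuid> -p 1234:1234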

Now on to your suggestion.
I think that git clone on the device can be pretty tricky if you’re still fine-tuning your Dockerfile and need to rebuild your container/image, as it might cause you to pull from the repo over and over again.
It might be more efficient to clone to a folder on your development machine and have your device mount that instead; your local network will also probably be faster than your internet connection, making syncs faster.

Hope you can find some use for this post :slight_smile:


Super helpful! Getting a collection of different real-world workflow scenarios is much better than trying to imagine what they could be; there are always scenarios we cannot anticipate.

There is a lot of food for thought there, and I am going to need some time to think it all through. Port forwarding is interesting; I am aware of the balena CLI tunnel feature but hadn’t used it before.

Perhaps the ability to override the default CMD or ENTRYPOINT when using balena CLI push or deploy could be helpful, so that when pushing locally the container could start idle.

Lots to think about, hoping to hear some other workflows. And going to ping @rahul-thakoor who is working on development workflows and I think will be interested too.

Here is an interesting project which is definitely going to be worth a look: https://mutagen.io. It would be interesting to hear what people think if you get a chance to try it out.
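
As a starting point if anyone wants to experiment with it, a two-way sync could look something like this (untested here; it assumes SSH access to the device is already set up, and the names and paths are illustrative):

    # Continuously sync a local project folder with a folder on the device
    mutagen sync create --name=balena-dev \
        ~/projects/my-app \
        root@mydevice.local:/usr/src/app
    mutagen sync list    # check session status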