Using an M.2 SSD on a Jetson Xavier NX eMMC


We are currently running balenaOS on a Jetson Xavier NX for a couple of robots in the field. Everything works perfectly well with the Developer Kit and its SD card.

However, with the ongoing chip shortage there seems to be no stock left of Xavier NX Developer Kits, which means we have had to switch to the eMMC production module and a third-party carrier board. Here’s where the issue comes in: the eMMC only has a capacity of 16 GB, but our running image - which includes ROS, CUDA, etc. - is around 9 GB. This means we can’t really do any updates on the device without running out of space.

This is why we opted to get an NVMe SSD that we attach to our carrier board’s M.2 slot, so we’d have enough room to store our data. The end goal here is to have the cached images, in the /var/lib/docker/overlay2 folder, sit on the SSD - thus leaving only the active container(s) on the eMMC storage.
Now I’ve seen that it’s impossible to boot from anything other than the eMMC, so the direct option seems to be off the table. Here’s what I’ve tried so far:

Mounting the SSD from the Host OS
By remounting the filesystem I was able to mount the SSD in the Host OS:

mount -o remount,rw /
parted /dev/nvme0n1 mklabel gpt
parted -a opt /dev/nvme0n1 mkpart primary ext4 0% 100%
# make the filesystem on the partition, not the whole disk
mkfs.ext4 -L nvme /dev/nvme0n1p1
mkdir -p /mnt/nvme && mount -o defaults /dev/nvme0n1p1 /mnt/nvme

To make the changes persist through reboots, I then tried adding the following line to fstab:

/dev/nvme0n1p1       /mnt/nvme            ext4       defaults,sync  0  0

Doing a live reload of fstab using mount -a makes the drive show up, but after a reboot the entire device no longer shows up on balenaCloud. Am I doing something wrong here?

Changing the resinOS-flash file
Following the approach in Method for Mounting External Storage to Multiple Containers does not seem to work either - as soon as I make changes to the resinOS-flash.xml file (step 3 in the post) and use jetson-flash to flash the eMMC, the device again does not come online in balenaCloud.
The author said that it looks like a ‘more official solution is forthcoming’, and has suggested perhaps using a volume named resin-data-override to automatically add an external drive as a partition for the entire device. Is there any news on this?

Mounting the SSD in a container
Looking at Adding An NVMe Drive and Postgres Database Persistent Storage on NVMe SSD instead of SD Card, I managed to get the SSD mounted automatically in a bare-bones default container, using the script and Dockerfile posted there (thanks, @ts-cfield !):

Initialization shell script

#!/usr/bin/env bash

su - -c "mkdir -p /mnt/nvme" root
device=$(blkid | grep "LABEL=\"nvme\"" | cut -d : -f 1)
echo "Mounting device = ${device}"
su - -c "mount -t ext4 -o rw ${device} /mnt/nvme" root

# Hand off to the parent image's entrypoint / main command
exec "$@"


FROM balenalib/jetson-xavier-nx-devkit-emmc

# Include the script in the image
COPY init.sh /init.sh

# Needed to make the script executable
RUN chmod +x /init.sh

# The name of the initialization shell script can be anything
ENTRYPOINT ["/init.sh"]

# Change to the command used by the parent image
CMD ["sleep", "infinity"]

Then, mounting the same volume in the Host OS gives me access to files put there by the other running service. Nice. Next, I try to make a symlink between the stored images’ folder and the newly mounted drive:

root@ded122f:/var/lib/docker# ln -s /nvme overlay2

This works - /var/lib/docker/overlay2 now also gives me access to the test files I put on the SSD while ssh’d into the running container. After a power cycle, the symlink remains, but the drive itself is no longer mounted in the Host OS.

I’m at a loss as to what to do next: I’d like to get the Host OS’ /var/lib/docker/overlay2 folder to somehow be symlinked to the SSD, but that seems impossible without persistent configuration of the drive on the Host OS - which seems impossible as well.

The author seems to agree, though their use case is slightly different:
“I had another idea to move the /var/lib/docker/volumes folder to the NVMe and then all Named Volumes would be stored on the larger, more stable (non-SD card) “external” drive. Moving the /var/lib/docker/volumes folder appears to be very involved and possibly harmful to the OS. Using symbolic links can also cause errors. While this would make persisting storage on an external drive relatively straight-forward, it is basically a non-starter.”

So, my question is: is there a way to automatically and persistently get the NVMe drive mounted on the Host OS, such that I can move the cached images from the Host OS to this drive automatically?

Thank you,

Hey @PeterG did you end up solving this?

No success in using the SSD yet sadly, maybe there’s an update from the balena team?
In the meantime, we’ve put our efforts into reducing the image size using multi-stage builds, which seems to alleviate the problem somewhat.
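For anyone in the same boat, the multi-stage pattern we use boils down to something like this (a sketch only - stage names, packages, and paths are placeholders, not our actual build):

```Dockerfile
# Build stage: toolchain and build dependencies stay here
FROM balenalib/jetson-xavier-nx-devkit-emmc AS build
RUN install_packages build-essential cmake
COPY . /src
RUN cmake -S /src -B /build && cmake --build /build

# Runtime stage: copy only the built artifacts, leaving the toolchain behind
FROM balenalib/jetson-xavier-nx-devkit-emmc
COPY --from=build /build/app /usr/local/bin/app
CMD ["/usr/local/bin/app"]
```

The final image only contains the second stage, so compilers and build dependencies never reach the device.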

I’m also investigating using the Aetina AIE-CN11 Jetson Xavier NX as deployment hardware. I’m able to successfully flash balenaOS to the eMMC but need to utilise the NVMe for our larger Docker images.

Jetpack 4.6.1 is supposed to support booting from an NVMe SSD now, via boot-order settings within CBoot?

What are the latest options in terms of flashing the eMMC, then copying the rootfs over to the NVMe SSD and booting directly from there?

Hey everyone, it sounds like there are a few use cases at play here and I want to make sure we keep them separate:

1. Install and boot balenaOS from SSD

This is not well documented but can be done with devices like RPi4. It might also be possible with the Jetson devices you mentioned, and the forum post here may help:
Note that if you are using a “flasher” type image meant to write itself to eMMC when booted, you’ll want to unwrap the image first: GitHub - balena-os/balena-image-flasher-unwrap: Tool for unwrapping balena-image from a balena-image-flasher

2. Install cached layers on SSD, all other Docker data on eMMC

This is not currently possible as we use a single Docker data directory. You might be able to accomplish this with mount scripts and symlinks but it would likely break when you try to update the hostOS. I do not recommend trying this.

3. Images and containers on eMMC, application data on SSD

This is the most common use case and is best accomplished by running your service container as privileged and running a mount command before starting your application.
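As a sketch of what option 3 looks like in a compose file (service and script names here are hypothetical), the key part is `privileged: true` so the container is allowed to run `mount`:

```yaml
services:
  app:
    build: ./app
    privileged: true        # required so the container may run `mount`
    command: ["/start.sh"]  # hypothetical script: mounts the SSD, then exec's the app
```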

For the first option above I will ping some folks internally to see if they have any experience with this.


I am also looking for a solution for the 3rd approach, i.e. images and containers on eMMC, application data on SSD. Is there any documentation or solution available for that?


Hey @1297rohit

The balena docs briefly mention how to mount external storage media in a container:

There are also many examples in the balena forums and on GitHub. Here’s a small snippet from my Nextcloud project, but I encourage you to research what each of these commands is doing before implementing them yourself:

I hope this helps!

I can’t get access to the link to look at what you were referring to. I have a need to boot Balena from an SSD on a TX2 NX. Is there a different link?

@klutchell I tried to clone your repo into my balenaCloud fleet to see if I could access the SSD from inside the Docker container, but I was not able to.
I am attaching the output from the nextcloud Docker image from the balena dashboard.

I am also attaching the output of running lsblk from the nextcloud docker image
bash-5.1# lsblk
mmcblk0 179:0 0 14.7G 0 disk
├─mmcblk0p1 179:1 0 128K 0 part
├─mmcblk0p2 179:2 0 448K 0 part
├─mmcblk0p3 179:3 0 576K 0 part
├─mmcblk0p4 179:4 0 64K 0 part
├─mmcblk0p5 179:5 0 192K 0 part
├─mmcblk0p6 179:6 0 576K 0 part
├─mmcblk0p7 179:7 0 64K 0 part
├─mmcblk0p8 179:8 0 768K 0 part
├─mmcblk0p9 179:9 0 448K 0 part
├─mmcblk0p10 179:10 0 128K 0 part
├─mmcblk0p11 179:11 0 44.7M 0 part
├─mmcblk0p12 179:12 0 80M 0 part
├─mmcblk0p13 179:13 0 476M 0 part
├─mmcblk0p14 179:14 0 476M 0 part
├─mmcblk0p15 179:15 0 20M 0 part
└─mmcblk0p16 179:16 0 13.6G 0 part /var/www/html
mmcblk0boot0 179:32 0 4M 1 disk
mmcblk0boot1 179:64 0 4M 1 disk
zram0 252:0 0 1.9G 0 disk [SWAP]
nvme0n1 259:0 0 119.2G 0 disk
└─nvme0n1p1 259:1 0 119.2G 0 part

Here you can see that there is an nvme0n1 disk attached to the Nvidia Jetson device, but I am not able to use that disk to store data from inside the Docker container.

Please help with this issue.

@1297rohit It’s because this snippet of code is filtering for USB storage devices only; it would need to be adjusted to detect the subsystem of your SSD.

The reason for the filter is so that we don’t accidentally mount the root storage device; instead, we only want to detect “extra” or “external” storage media.

If you run something like this on your host we should be able to identify the subsystems of your block device.

lsblk -J -O | jq -r '.blockdevices[] | .name, .subsystems'

Then you can adjust the filter accordingly.
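For example, an adjusted filter might select on the `nvme` subsystem instead of `usb`. Here’s a minimal sketch run against a hand-written sample of `lsblk -J -O` output (the JSON below is illustrative, not from a real device):

```shell
# Sample lsblk JSON (abridged); on a real host you would pipe `lsblk -J -O` instead
json='{"blockdevices":[
  {"name":"mmcblk0","subsystems":"block:mmc:mmc_host:platform"},
  {"name":"nvme0n1","subsystems":"block:nvme:pci"}]}'

# Keep only devices whose subsystem chain mentions "nvme"
echo "$json" | jq -r '.blockdevices[] | select(.subsystems | contains("nvme")) | .name'
# → nvme0n1
```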

There also seems to be an opportunity here to list all block devices and specifically remove the one we determine to be the root block device from this list. This would be a nice improvement to the project but I don’t know when I will have time to look into it.

The change I made in this PR might be closer to what you need

but in your case you could also just hardcode /dev/nvme0n1p1 or the UUID and skip all the searching and looping. You know the device path, so just run the mount command in the container init script, as suggested in the docs:

mkdir -p /mnt/ssd
mount -o rw /dev/nvme0n1p1 /mnt/ssd

@klutchell Thanks for the help with these commands.
After adding these commands to my project I am getting an error when mounting:

mount: permission denied (are you root?)

Step 10/15 : RUN /bin/sh -c
 ---> Running in 0959fa7e86d0
/bin/sh: not found
Removing intermediate container 0959fa7e86d0
The command '/bin/sh -c /bin/sh -c' returned a non-zero code: 127

The mount script looks like this:

mkdir -p /mnt/ssd
mount -o rw /dev/nvme0n1p1 /mnt/ssd

Please help me with resolving this error. The docker-compose.yml already has privileged: true.


@1297rohit It looks like you are trying to execute your mount script as a build step, when it should be done at runtime when the application starts.

In your Dockerfile you’ll need something like this:

# copy the mount script into the build context
COPY mount.sh /mount.sh
# make sure it is executable
RUN chmod +x /mount.sh
# run the mount script on container start
CMD [ "/mount.sh" ]

Then at the end of your mount script you can use exec to start whatever your primary application is

mkdir -p /mnt/ssd
mount -o rw /dev/nvme0n1p1 /mnt/ssd
exec /path/to/my/app

Also note that your container will need to run with privileged: true if you are using docker-compose.

We have a small doc on writing Dockerfiles, but the official Docker guides could be useful as well!

I tried using configizer and adding the following rule to the config.json:

UDEVRULES[4-mount_mini_drive]='ACTION==\"add|change|remove\", RUN+=\"/bin/sh -c '/resin-data/'\"\n'

But that didn’t work out.

My goal is for the host OS to mount a drive (/dev/sda, ext4) with a label mini in /var/lib/docker/volumes/mini and use that as a volume to mount a container folder.

In the docker-compose.yml I would have something like:

    container_name: zeta
    image: balena/zeta
    build: services/zeta
    cpu_quota: 500000
    labels:
      io.balena.features.balena-socket: "1"
      io.balena.features.kernel-modules: "1"
      io.balena.features.supervisor-api: "1"
      io.balena.update.strategy: delete-then-download
    restart: unless-stopped
    network_mode: host
    privileged: true
    volumes:
      - mini:/var/lib/zeta
      - resin-data:/data

Should the udev rules have worked? What am I missing here? Any advice is appreciated.

Hi @yuriploc,

I’ve never heard of the UDEVRULES variable doing anything and couldn’t find any mention of it in the documentation.
Moreover, regarding the rule itself, I’m not sure it is well-formed (just ACTION without any SUBSYSTEM or KERNEL might just never be triggered).

There are some caveats with using UDEV in Balena:

  • It should run in a privileged container, with the UDEV environment variable set to 1,
  • The container should be based on a balenalib image,
  • The rule should be written in a .rules file in the container's /etc/udev/rules.d/ folder.

You can find some working examples here.
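For reference, a rule along these lines could do the label-based mount described above (a sketch only; the mount point and filesystem type are assumptions for illustration):

```
# /etc/udev/rules.d/99-mini.rules (inside the container)
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="mini", RUN+="/usr/bin/mount -t ext4 /dev/%k /mnt/mini"
```

Here `%k` expands to the kernel device name of the matched device (e.g. `sda1`).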

This should be enough if you only need to access your SSD data from a single container.

For sharing the data between multiple containers, the recommended approach is to use a dedicated container to mount the SSD drive and then share it with the other containers through NFS (a network share, but only accessible between the containers as long as you don’t expose the port to the outside world).

This approach is explained here and we already have a pre-existing balenaBlock that you can use directly, or fork to adapt to your needs.

All your containers needing to access the data will just have to add mount -t nfs SERVER_CONTAINER_NAME:/nfs /FOLDER/WHERE/YOU/WILL/ACCESS/YOUR/DATA to their startup commands, and it should work.

Let us know if it helps.