Data seems to be preserved between container instances

We’ve noticed that data we wouldn’t expect to persist remains on our Raspberry Pi devices after reboots and power cycles. Here’s some sample output when deploying a new image:

19.01.18 18:44:28 (-0600) Killing application 'registry2.resin.io/rosiedev/d7b5a42a5b68f85e41df72183c06ad353034651e'
19.01.18 18:44:39 (-0600) Killed application 'registry2.resin.io/rosiedev/d7b5a42a5b68f85e41df72183c06ad353034651e'
19.01.18 18:44:39 (-0600) Installing application 'registry2.resin.io/rosiedev/7c96704d27db621d9a340e858a64fffe4f203cd4'
19.01.18 18:44:47 (-0600) Installed application 'registry2.resin.io/rosiedev/7c96704d27db621d9a340e858a64fffe4f203cd4'
19.01.18 18:44:47 (-0600) Starting application 'registry2.resin.io/rosiedev/7c96704d27db621d9a340e858a64fffe4f203cd4'
19.01.18 18:44:49 (-0600) Started application 'registry2.resin.io/rosiedev/7c96704d27db621d9a340e858a64fffe4f203cd4'
19.01.18 18:44:49 (-0600) Starting script
19.01.18 18:44:51 (-0600) This is what is currently installed inside the docker container:
19.01.18 18:44:51 (-0600) framework-arduinosam

Then, following a shutdown and minute-long power cycle of the device, we see this:

19.01.18 18:48:24 (-0600) Killed application 'registry2.resin.io/rosiedev/7c96704d27db621d9a340e858a64fffe4f203cd4'
19.01.18 18:48:24 (-0600) Shutting down
19.01.18 18:49:00 (-0600) Applying config variable RESIN_SUPERVISOR_LOCAL_MODE = 0
19.01.18 18:49:00 (-0600) Applied config variable RESIN_SUPERVISOR_LOCAL_MODE = 0
19.01.18 18:49:00 (-0600) Starting application 'registry2.resin.io/rosiedev/7c96704d27db621d9a340e858a64fffe4f203cd4'
19.01.18 18:49:00 (-0600) Application is already running 'registry2.resin.io/rosiedev/7c96704d27db621d9a340e858a64fffe4f203cd4'
19.01.18 18:48:50 (-0600) Starting script
19.01.18 18:48:53 (-0600) This is what is currently installed inside the docker container:
19.01.18 18:48:53 (-0600) framework-arduinosam tool-bossac tool-scons toolchain-gccarmnoneeabi

Note the last two lines in both cases. We’re outputting these as part of the bash script we call in CMD at the end of our Dockerfile:

#!/bin/bash
echo "Starting script"

set -e

echo "This is what is currently installed inside the docker container:"
echo $(ls platformio/packages/)
# After this, we perform an install of the packages in the platformio/packages directory

So, given that these dependencies don’t live on our /data partition, we’re trying to figure out why they might persist across power cycles. They do not persist across image deployments. What might be causing this? Is there any information we could provide to better explain this problem?

Hello,
I’m not sure whether you are listing the contents of persistent storage in your examples above.
In general, if you want specific data or configuration to persist on the device through the update process, you need to store it in /data. This is a special folder on the device filesystem that is essentially a Docker data VOLUME, and it is maintained across updates. The folder is not mounted while your project is building on our build server, so you can’t access it from your Dockerfile; the /data volume only exists once the container is running on the deployed devices.
Is there a chance you are viewing the contents of your application container after the build has finished, and that is why you see the installed packages? Are there specific files that your application produces that persist across new containers?
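To make the /data behavior concrete, here is a minimal sketch of a start script that keeps runtime-installed packages on the persistent volume so they survive both reboots and redeployments. All paths and names here are illustrative, not from the original post; on a real device DATA_DIR would simply be /data, but it defaults to a temp directory below so the sketch runs anywhere:

```shell
#!/bin/bash
set -e

# On a real device DATA_DIR would be /data; defaulting to a temp dir
# here so the sketch can run anywhere. Names below are illustrative.
DATA_DIR="${DATA_DIR:-$(mktemp -d)}"
WORK_DIR="${WORK_DIR:-$(mktemp -d)}"   # stands in for the container rootfs

# Keep the package cache on the persistent volume...
PKG_CACHE="$DATA_DIR/platformio/packages"
mkdir -p "$PKG_CACHE"

# ...and symlink the path the toolchain expects to that cache, so
# anything installed there survives container recreation.
mkdir -p "$WORK_DIR/platformio"
ln -sfn "$PKG_CACHE" "$WORK_DIR/platformio/packages"

echo "packages now stored under: $(readlink "$WORK_DIR/platformio/packages")"
```

Anything the install step later writes into the expected path then actually lands on the volume, which is the only location the docs promise will outlive a new container.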

Hi Ilias!

I should’ve been a bit more clear - I’m intentionally listing the contents of my root directory (specifically /platformio/packages/) once the container is running on the device, but before I install packages that can only be installed at container runtime. So I don’t expect those files to persist (because, exactly as you said, files should only persist in /data), but they do seem to persist.

I’m sure it’s something I’m missing, I’m just not sure what that is. Is there a way for files to persist across new containers outside of the data volume?

Just to give a more concrete example of what we’re talking about:

The device is powered off. I power on the device, then connect via resin’s ssh console and run mpv, an application that wasn’t installed as part of the Dockerfile. As expected, the result is bash: mpv: command not found. So I then run apt-get update && apt-get install -y mpv to install it. mpv was installed to /usr/bin/mpv, which is not part of the /data volume.

I power cycle the device, ssh back into it via the resin console and run mpv successfully, meaning that it is still installed, even though I performed the install in the running container.

Not entirely sure what’s going on here! Any insight would be appreciated :grinning:

Hello,

So yes, if you install packages or create new files inside the application container, they will persist across reboots. However, they will not persist into new application containers, i.e. when you download a new version of your application. This is because on reboot the supervisor does not need to download the application container again, so it reuses the one already on the device.
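The distinction can be sketched in plain shell, simulating the container’s root filesystem and the /data volume with temp directories (everything here is illustrative): a reboot keeps both, while a redeployment wipes the container rootfs but keeps the volume.

```shell
#!/bin/bash
set -e

ROOT=$(mktemp -d)   # stands in for the container's root filesystem
DATA=$(mktemp -d)   # stands in for the persistent /data volume

echo "binary" > "$ROOT/mpv"       # e.g. something apt-get installed at runtime
echo "config" > "$DATA/settings"  # something deliberately kept in /data

# Reboot: the supervisor reuses the existing container, so both survive.
[ -e "$ROOT/mpv" ] && [ -e "$DATA/settings" ] && echo "after reboot: both present"

# New deployment: the container is recreated; only the volume is kept.
rm -rf "$ROOT"; ROOT=$(mktemp -d)
[ -e "$ROOT/mpv" ]      && echo "after deploy: mpv present"      || echo "after deploy: mpv gone"
[ -e "$DATA/settings" ] && echo "after deploy: settings present" || echo "after deploy: settings kept"
```

The runtime-installed binary disappears only when the container itself is recreated, which matches the behavior described above.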

Ah, I see that mentioned in the FAQ here. Is there a way to force container recreation on restart or power cycle?

I’d suggest interacting with the resin supervisor and asking it to restart the application container.
You can check out the supervisor api here: https://github.com/resin-io/resin-supervisor/blob/master/docs/API.md#post-v1restart

Let us know if that works for you.
Best,
ilias
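As a rough sketch of what that call might look like: the RESIN_SUPERVISOR_ADDRESS, RESIN_SUPERVISOR_API_KEY and RESIN_APP_ID variables are normally injected into the application container by the supervisor, so outside a device this sketch just prints a notice rather than guessing at values (check the linked API docs for the exact request shape).

```shell
#!/bin/bash

# Sketch of hitting the supervisor's restart endpoint described in the
# API docs linked above. Assumption from the discussion: restarting via
# the supervisor brings the container back up, discarding runtime
# changes made outside /data.
restart_app() {
  if [ -z "$RESIN_SUPERVISOR_ADDRESS" ]; then
    echo "not on a resin device; skipping supervisor call"
    return 0
  fi
  curl -X POST \
    --header "Content-Type: application/json" \
    --data "{\"appId\": $RESIN_APP_ID}" \
    "$RESIN_SUPERVISOR_ADDRESS/v1/restart?apikey=$RESIN_SUPERVISOR_API_KEY"
}

restart_app
```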

Thanks for the help :slightly_smiling_face: