All new devices are installed with the “No_Config” application.
When ready, a device is moved to the ‘Calibration’ application. The purpose of this step is to set up device-specific data that should persist for the entire lifespan of the device: a unique ID, calibration data, and other configuration data.
Once calibrated and ready, the device is swapped to ‘Nice_Software’. Boom… the persistent data is no more.
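(For context, the device-specific data lives in a named volume declared in the app's docker-compose.yml; this sketch uses illustrative service and volume names.)

```yaml
# Illustrative compose file; 'main' and 'device-data' are made-up names.
version: '2.1'
volumes:
  device-data:                  # the named volume that gets purged on app move
services:
  main:
    build: ./main
    volumes:
      - device-data:/data       # unique ID, calibration and config data live here
```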
Question:
Is it possible to avoid the automatic purge of data when moving a device from one application to another?
“Warning: For devices running resinOS version 2.12.0 and above, data in persistent storage (named volumes) is automatically purged when a device is moved to a new application.”
Can this automatic purge be avoided? I have not found any environment variables that control this behaviour.
Solution:
If no solution exists, it would be ideal to add an automatic-purge option on the application, which would apply to all devices in the given application.
For the above use case, automatic purge would only be enabled on “No_Config”; all other applications would not automatically purge data.
This could also be extended to allow a per-device override, meaning a device inherits the automatic-purge option from its application but can overwrite it.
At the moment the automatic purge behaviour is by design, and cannot be overridden by configuration variables.
One workaround that might work in your case is, instead of moving between applications, to use staged releases to move a device across different versions of the same application. Of course, that assumes that ‘No_Config’, ‘Calibration’, ‘Nice_Software’, and ‘Even_Nicer_Software’ are different releases of the same application.
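For staged releases, the first step is stopping the application from automatically tracking the latest release. Something along these lines should work, though the endpoint and the `should_track_latest_release` field reflect the v4-era API and are worth verifying against the current docs:

```bash
# Disable rolling updates for an application so its devices can be pinned
# to specific releases. APP_ID and API_TOKEN are placeholders.
curl -X PATCH "https://api.resin.io/v4/application($APP_ID)" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"should_track_latest_release": false}'
```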
Also, would it make sense to use a single application with multiple containers that handle the separate preparatory steps of the main application, instead of moving the device through separate apps?
Note that you can define ‘depends_on’ relationships in your multicontainer application, which might be helpful in this use case.
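For example (service names are hypothetical), a compose file like this starts the main service only after a one-shot configuration service:

```yaml
# Sketch of a multicontainer app with a configuration step.
# 'configure' and 'main' are illustrative names.
version: '2.1'
volumes:
  config-data:
services:
  configure:
    build: ./configure          # writes unique ID / calibration data to the volume
    volumes:
      - config-data:/data
  main:
    build: ./main
    depends_on:
      - configure               # 'main' starts only after 'configure' has started
    volumes:
      - config-data:/data       # reads the data written by 'configure'
```

Keep in mind that ‘depends_on’ only orders container start-up; it does not wait for ‘configure’ to finish, so ‘main’ should still check that the data it needs actually exists.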
Taking a moment to answer the general “where to report issues” question first: these forums are actively monitored as an input into our development pipeline. It’s slightly out of date, but there’s a great blog post at https://resin.io/blog/support-driven-development/ that goes into a little more detail.
With my apologies for chucking another idea onto this thread, and for giving the impression that resin.io is more than a single consciousness: what about simply executing several files? I.e. have a start.sh that runs config.whatever and then application.whatever. config.whatever could just skip merrily past if everything is already configured. However, if the device requires configuration (either because the configuration env var is absent, or perhaps because a reset jumper is set), it goes through the configuration and stores the result in a device variable via the API.
This would allow you to change your configuration script without complicated git fiddling, and to view the configuration via the web UI.
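A minimal sketch of that start.sh, assuming a CONFIGURED device variable and made-up script names:

```bash
#!/bin/bash
# start.sh: one-shot configuration followed by the main application.
# CONFIGURED, config.sh and application.sh are illustrative names.
set -e

if [ -z "$CONFIGURED" ]; then
    echo "Device not configured yet, running setup..."
    ./config.sh
    # On success, set CONFIGURED as a device environment variable via the
    # API (or dashboard) so this branch is skipped on subsequent starts.
else
    echo "Device already configured, skipping setup."
fi

exec ./application.sh
```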
I have managed to set up a solution using staged releases. This works and allows me to implement a use case similar to the one described.
Note: when rolling updates are disabled, a minor problem remains, which is keeping track of releases and devices. The issue is that the releases for an application become one big pile, since many versions of the application can (and will) be available at the same time. Hence, one must cherry-pick specific releases from the pile to ensure the proper software ends up on the right devices.
Cherry-picking is needed to ensure customer devices are not updated; customers want a specific version.
The originally proposed use case (multiple applications) does not have this issue, as you can have one or many applications which can all perform rolling updates.
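For reference, the cherry-picking above amounts to pinning each device to a release. A sketch with the v4-era API fields (verify the endpoint and the `should_be_running__release` field against the current docs before using):

```bash
# Pin one device to a specific release so it ignores newer pushes.
# DEVICE_ID, RELEASE_ID and API_TOKEN are placeholders.
curl -X PATCH "https://api.resin.io/v4/device($DEVICE_ID)" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"should_be_running__release\": $RELEASE_ID}"
```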
@aliasbits Another solution, although it may seem crazy, is to have a single master branch and runtime variables which enable features. This means that new features and behaviours (fixes) start out disabled, are enabled on specific devices for testing and validation, and are finally enabled across the larger fleet by removing the feature flag or enabling it in code.
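A trivial example of that pattern, with a made-up flag name read from a device (or app-wide) environment variable:

```bash
#!/bin/bash
# FEATURE_NEW_CALIBRATION is an illustrative device environment variable,
# set per device (or application-wide) through the dashboard or API.
if [ "$FEATURE_NEW_CALIBRATION" = "1" ]; then
    echo "New calibration routine enabled on this device"
    ./calibrate_v2.sh
else
    ./calibrate_v1.sh
fi
```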