Docker images are layered and reference one another hierarchically: an image can be a child of another image (the image specified in the “FROM” directive of the Dockerfile); see the documentation glossary pages on “parent image” and “image”. The parent image of your main container will probably be some kind of “base image” from our Docker repositories.
There is also a container running on your device called the “supervisor”, which is responsible for interacting with the platform, and it too may have parent images.
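To see this layering concretely, here is a quick sketch of host OS commands (the image name and tag are examples only; on a non-balena system, substitute `docker` for `balena-engine`):

```shell
# List the layers an image is built from, including those inherited
# from its parent ("FROM") image (example image name/tag):
balena-engine history balena/armv7hf-supervisor:v7.4.3

# Show the full layer digests recorded for the same image:
balena-engine inspect --format '{{json .RootFS.Layers}}' balena/armv7hf-supervisor:v7.4.3
```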
This all said, could you provide a link to the dashboard for the device you’ve granted access to? We can then inspect what might be going on.
@troyvsvs, for the purpose of sharing your device UUID or device web URL, I’ve sent you an e-mail to the address registered in your forum account. You may reply to that e-mail instead of replying here. Note that if the UUID of your device is shared publicly, it allows anyone to visit the device’s public URL (if it is enabled). Thanks!
Hi @troyvsvs, thanks for sharing the device UUID with our support team. On investigation, I found that the device had two balena supervisor images on it, an old one and a new one. Only the new image was running, which is correct, and the error message you were seeing was the supervisor failing to remove its old image. We recently ran an automated background cloud task that patched the balena supervisor on devices, and I believe this extra supervisor image was a leftover from that task – on every other device the extra image was deleted automatically, but for some reason that did not happen on this device. I have manually deleted the extra image and I expect you will not see that error message again. I’ll share my findings with some team members and I hope we will figure out what went wrong. Thanks for reporting it!
We are running devices via GSM and I’ve seen this message on a few of our devices as well. Even though it doesn’t seem to be causing any obvious problems for us right now, could you share your way to the solution if you can spare the time? In slightly more technical terms, what could one do to solve this, for example by SSHing into the host OS?
Where is the image located that has to be deleted? I am assuming it is a file findable by the SHA hash from the message? Should one delete other files as well (other layers, or something? Sorry, not the biggest Docker professional yet.) How do I make sure that it is indeed a stale supervisor image and not some different image?
Sadly giving support access is only an option for a few of our devices but not all, so we would like to do this ourselves if possible.
Additionally, could you shed some more light on how these cloud task patches work? Such patches could be a source of unforeseeable changes for us if the supervisor is patched without us knowing. First of all, there is the obvious problem of our devices having internet access only via GSM, so bandwidth is more precious than when connected via a flat-rate router, though I assume these are not big patches.
Furthermore, we would generally like to avoid patching something that might not be relevant to us; it could break more than it fixes, after all, and we are responsible for our devices.
So what exactly is being automatically patched, how often, and how important are these changes? If there is even the possibility of breaking something, how can we turn this functionality off and instead opt in to an update once we have identified that we need it? And how does this differ from the bigger balenaOS updates?
Thanks for your time and thanks for solving issues so quickly!
Hi @Tschebbischeff, I understand your concern and the need for clarification. I have asked our fleet operations team to address the matter and we will get back to you soon. As for the commands used to locate and delete the old/unused balena supervisor image on a host OS terminal, they were:
balena-engine images - lists the images
balena-engine ps - lists running containers
balena-engine rmi <imageName:Tag> - deletes the specified image
I had observed that there were two images named ‘balena/armv7hf-supervisor’ (with different tags), but only one balena supervisor container running off one of the images. The fix was then to delete the unused balena supervisor image.
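Put together, the steps described above look roughly like this in a host OS terminal (the image name and tag below are examples; check your own `balena-engine images` output before deleting anything):

```shell
# 1. List all images and look for duplicate supervisor entries:
balena-engine images

# 2. List running containers to see which supervisor image is actually in use:
balena-engine ps

# 3. Delete the supervisor image that no container is running from
#    (example name:tag - use the unused one from step 1):
balena-engine rmi balena/armv7hf-supervisor:v7.4.2
```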
This has not worked for me, because the “image has dependent child images”, so it cannot be deleted, even with force. Maybe the supervisor has to be stopped so that the image can be deleted.
What actually worked was simply upgrading to a newer release, then removing the image as suggested.
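For the “dependent child images” case, one possible sketch is to stop the supervisor first so its container no longer pins the image. The service and container names below are assumptions based on typical balenaOS installs and may differ on your OS version:

```shell
# Stop the supervisor service so its container releases the image
# (assumed service name):
systemctl stop resin-supervisor

# Remove the stopped supervisor container (assumed container name):
balena-engine rm -f resin_supervisor

# Now the stale image can be deleted (example name:tag):
balena-engine rmi balena/armv7hf-supervisor:v7.4.2

# Restart the supervisor:
systemctl start resin-supervisor
```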
That message shouldn’t cause any issues if you are using supervisor 7.0.0 or newer. You can remove the offending image as described in the example above, or use a more generic approach that works across all OS versions and cleans up the image.
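As a sketch of such a generic cleanup (assumption: every image you still need is referenced by a container – `prune --all` removes every image that no container uses, so review `balena-engine images` and `balena-engine ps -a` first):

```shell
# Remove all images not referenced by any container, without prompting:
balena-engine image prune --all --force
```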
The image is located inside the device’s balena Engine, so there is no file on disk to find; the command-line steps detailed above do what’s needed. This is the only issue we know of that can cause the error message you mentioned, which is why we think this is the solution.
Let us know if you run into any issues, and we can help further then.
Yes, it was a tiny patch: a small script run on the device that made some changes to the supervisor currently on the device.
The changes were to a part of the supervisor needed for our backend operations. They don’t affect user-facing behaviour, but they are relevant to the behaviour of the platform overall.
It was a special case and we do not run these events on a regular basis. The changes are important for the platform’s stability: they let supervisors report when they encounter issues with their own operation (for example, supervisors restarting excessively, or unable to operate as designed due to other issues we’ve sometimes seen). This information is relevant solely to the supervisor, and it allows preventive maintenance, such as notifying the relevant user that their device is misbehaving before they have even realized it, or catching issues with a released supervisor version so that we can release properly fixed new supervisors based on what was captured.
We apologize for the inconvenience caused. We are very mindful that many of our users are on very slow or very expensive networks, and we took that into account when issuing tasks such as these operational patches. They are not run automatically at the moment; our Fleet Operations team takes care of issues across the devices on the platform only when it is absolutely needed.
Hope this helps to explain a few more things, and please let us know if you have any further questions! Thanks a lot!