Request error in CLI and dashboard

I am receiving the following error message when executing commands with resin-cli and when using the resin.io dashboard. I first noticed the message via the CLI.

ResinRequestError: Request error: It is necessary that each device that should be running a build, should be running a build that belongs to an application that the device belongs to.

I provisioned a few new devices last night (Sydney, Australia time) and they have the “Factory Build” on them (as expected). I wasn’t expecting this to affect existing devices/apps, though. I tried to delete these new devices, but I received the error message above.

I have also noticed that the dashboard reports a few of my devices as online even though they have been powered down for 15 minutes.
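
For reference, the commands I was hitting this with were along these lines (paraphrased from memory, with the device UUID omitted; exact syntax may differ between resin-cli versions):

resin devices

resin device rm <device-uuid>

The first lists the devices across my applications and the second attempts to remove one of the newly provisioned devices; both returned the ResinRequestError above.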

UPDATE:

The problem seems to have resolved itself; I no longer get the error message above in the CLI or the dashboard. However, I pushed a commit to my resin app’s repo and it built fine, but the device log in the application is showing:

Failed to kill application 'registry2.resin.io/.../...' due to '[object Object]

Followed by:

Failed to update application 'registry2.resin.io/.../...' due to '[object Object]

(I replaced my app and device id in the errors with ‘…’)

I tried to reboot the device from the dashboard, but it gives me an error, Request error: [object Object]

The timings on this are consistent with an incident we experienced recently (http://status.resin.io/incidents/nlnssh6mqgf5). Could you try this again and let us know?


A physical reboot of the device still had the same issue. I re-imaged the device and it is working as expected.

Is there a fix for this issue that prevents the cloud services from hosing a device? I am worried about recovering deployed devices that I do not have physical access to.

Hi @jeff, sorry about the issues. As long as the device is connected to the network we can recover and fix pretty much anything (so your device won’t be “hosed”). In this case we should have asked you for the device link earlier and investigated what was going on there.

The original issue was fixed. In this case it might have been that the server side (our database) had outdated info on the device, and thus the supervisor running on the device got the wrong information (fallout from the incident). If that is the case, the device is definitely not hosed.

Just mentioning all this for context, and so that if there’s any other issue in the future (hopefully not!) our team can be of more help than we were this time. It’s surprisingly hard to hose a resinOS device in practice. The infrastructure involved is not 100% straightforward, though, so we need to explain better how things fit and work together (e.g. here, that the issue was likely on the database side of things).

Makes sense. I re-imaged to get back to development quickly.

If I come across this type of thing again I’ll try to leave the app/device as-is as long as possible so it can be debugged.