Newly flashed device stuck at CURRENT RELEASE: Factory build

Hi,

I have just flashed (and re-downloaded and re-flashed) an SD card. It boots fine, and the device is showing up on my dashboard.

But I can’t get it to download the current build.

It’s just stuck on factory build.

I have tried re-deploying a new release with no improvement…

Hi, do you see any errors or oddities in the device logs? Please also check the supervisor logs (open the HostOS terminal and run journalctl -n 100 -u balena-supervisor) and let us know of any errors or oddities there as well. It’s also worth going to the device Diagnostics page and running the device health checks and diagnostics to check those for errors.
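For reference, here is a minimal sketch of the commands you could run from the HostOS terminal to gather that information. The supervisor command is the one mentioned above; the balena.service unit and the balena ps call are the usual balenaOS/balenaEngine ones, so adjust if your version differs:

# Last 100 lines of supervisor logs
journalctl -n 100 -u balena-supervisor
# Last 100 lines of engine (balenaEngine) logs, in case a pull is failing at the engine level
journalctl -n 100 -u balena.service
# List every container the engine knows about, including stopped ones
balena ps -a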

@myarmolinsky
Diagnostics and health check passed… The HostOS terminal log shows:

root@186ab60:~# journalctl -n 100 -u balena-supervisor
May 11 10:09:16 186ab60 balena-supervisor[2353]: [warn]    Invalid firewall mode: 0. Reverting to state: off
May 11 10:09:16 186ab60 balena-supervisor[2353]: [info]    Applying firewall mode: off
May 11 10:09:16 186ab60 balena-supervisor[2353]: [success] Firewall mode applied
May 11 10:09:16 186ab60 balena-supervisor[2353]: [debug]   Starting api binder
May 11 10:09:17 186ab60 balena-supervisor[2353]: [debug]   Performing database cleanup for container log timestamps
May 11 10:09:17 186ab60 balena-supervisor[2353]: (node:1) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and us>
May 11 10:09:17 186ab60 balena-supervisor[2353]: (Use `node --trace-deprecation ...` to show where the warning was created)
May 11 10:09:17 186ab60 balena-supervisor[2353]: [info]    API Binder bound to: https://api.balena-cloud.com/v6/
May 11 10:09:17 186ab60 balena-supervisor[2353]: [event]   Event: Supervisor start {}
May 11 10:09:17 186ab60 balena-supervisor[2353]: [info]    Starting API server
May 11 10:09:17 186ab60 balena-supervisor[2353]: [info]    Supervisor API successfully started on port 48484
May 11 10:09:17 186ab60 balena-supervisor[2353]: [debug]   Ensuring device is provisioned
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug]   Connectivity check enabled: true
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug]   Starting periodic check for IP addresses
May 11 10:09:18 186ab60 balena-supervisor[2353]: [info]    Reporting initial state, supervisor version and API info
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug]   VPN status path exists.
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug]   Starting current state report
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug]   Starting target state poll
May 11 10:09:18 186ab60 balena-supervisor[2353]: [info]    VPN connection is active.
May 11 10:09:18 186ab60 balena-supervisor[2353]: [info]    Waiting for connectivity...
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug]   Skipping preloading
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug]   Spawning journalctl -a --follow -o json _SYSTEMD_UNIT=balena.service
May 11 10:09:18 186ab60 balena-supervisor[2353]: [info]    Applying target state
May 11 10:09:20 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 10:09:21 186ab60 balena-supervisor[2353]: [warn]    Failed to read docker configuration of network supervisor0: (HTTP code 404>
May 11 10:09:21 186ab60 balena-supervisor[2353]: [debug]   Creating supervisor0 network
May 11 10:09:22 186ab60 balena-supervisor[2353]: [debug]   Finished applying target state
May 11 10:09:22 186ab60 balena-supervisor[2353]: [success] Device state apply success
May 11 10:09:22 186ab60 balena-supervisor[2353]: [info]    Applying target state
May 11 10:09:22 186ab60 balena-supervisor[2353]: [debug]   Finished applying target state
May 11 10:09:22 186ab60 balena-supervisor[2353]: [success] Device state apply success
May 11 10:09:28 186ab60 balena-supervisor[2353]: [info]    Internet Connectivity: OK
May 11 10:13:59 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 33.736 ms
May 11 10:14:21 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 10:19:00 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 13.395 ms
May 11 10:19:12 186ab60 balena-supervisor[2353]: [debug]   Attempting container log timestamp flush...
May 11 10:19:12 186ab60 balena-supervisor[2353]: [debug]   Container log timestamp flush complete
May 11 10:19:22 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 10:24:00 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 10.626 ms
May 11 10:29:01 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 14.496 ms
May 11 10:29:12 186ab60 balena-supervisor[2353]: [debug]   Attempting container log timestamp flush...
May 11 10:29:12 186ab60 balena-supervisor[2353]: [debug]   Container log timestamp flush complete
May 11 10:34:02 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 14.030 ms
May 11 10:37:30 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 10:39:03 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 13.075 ms
May 11 10:39:12 186ab60 balena-supervisor[2353]: [debug]   Attempting container log timestamp flush...
May 11 10:39:12 186ab60 balena-supervisor[2353]: [debug]   Container log timestamp flush complete
May 11 10:42:30 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 10:44:04 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 10.293 ms
May 11 10:49:05 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 10.501 ms
May 11 10:49:12 186ab60 balena-supervisor[2353]: [debug]   Attempting container log timestamp flush...
May 11 10:49:12 186ab60 balena-supervisor[2353]: [debug]   Container log timestamp flush complete
May 11 10:54:06 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 15.315 ms
May 11 10:59:06 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 20.732 ms
May 11 10:59:07 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 10:59:12 186ab60 balena-supervisor[2353]: [debug]   Attempting container log timestamp flush...
May 11 10:59:12 186ab60 balena-supervisor[2353]: [debug]   Container log timestamp flush complete
May 11 11:04:07 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 11:04:07 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 12.119 ms
May 11 11:09:08 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 11:09:08 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 11.086 ms
May 11 11:09:12 186ab60 balena-supervisor[2353]: [debug]   Attempting container log timestamp flush...
May 11 11:09:12 186ab60 balena-supervisor[2353]: [debug]   Container log timestamp flush complete
May 11 11:14:08 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 11:14:09 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 8.854 ms
May 11 11:19:10 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 14.032 ms
May 11 11:19:12 186ab60 balena-supervisor[2353]: [debug]   Attempting container log timestamp flush...
May 11 11:19:12 186ab60 balena-supervisor[2353]: [debug]   Container log timestamp flush complete
May 11 11:24:11 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 15.619 ms
May 11 11:27:10 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 11:29:12 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 13.635 ms
May 11 11:29:12 186ab60 balena-supervisor[2353]: [debug]   Attempting container log timestamp flush...
May 11 11:29:12 186ab60 balena-supervisor[2353]: [debug]   Container log timestamp flush complete
May 11 11:32:10 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 11:34:13 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 8.245 ms
May 11 11:39:12 186ab60 balena-supervisor[2353]: [debug]   Attempting container log timestamp flush...
May 11 11:39:12 186ab60 balena-supervisor[2353]: [debug]   Container log timestamp flush complete
May 11 11:39:13 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 11:39:13 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 11.750 ms
May 11 11:44:14 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 11:44:14 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 8.787 ms
May 11 11:48:10 186ab60 balena-supervisor[2353]: [info]    VPN connection is not active.
May 11 11:48:19 186ab60 balena-supervisor[2353]: [info]    Waiting for connectivity...
May 11 11:49:12 186ab60 balena-supervisor[2353]: [debug]   Attempting container log timestamp flush...
May 11 11:49:12 186ab60 balena-supervisor[2353]: [debug]   Container log timestamp flush complete
May 11 11:49:15 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 9.980 ms
May 11 11:51:58 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 11:54:16 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 9.061 ms
May 11 11:56:58 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 11:59:12 186ab60 balena-supervisor[2353]: [debug]   Attempting container log timestamp flush...
May 11 11:59:12 186ab60 balena-supervisor[2353]: [debug]   Container log timestamp flush complete
May 11 11:59:17 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 7.006 ms
May 11 12:00:35 186ab60 balena-supervisor[2353]: [info]    VPN connection is active.
May 11 12:00:35 186ab60 balena-supervisor[2353]: [info]    VPN connection is active.
May 11 12:00:35 186ab60 balena-supervisor[2353]: [info]    VPN connection is active.
May 11 12:00:35 186ab60 balena-supervisor[2353]: [info]    VPN connection is active.
May 11 12:00:39 186ab60 balena-supervisor[2353]: [info]    Internet Connectivity: OK
May 11 12:01:59 186ab60 balena-supervisor[2353]: [info]    Reported current state to the cloud
May 11 12:02:37 186ab60 balena-supervisor[2353]: [api]     GET /v1/device 200 - 44.521 ms
May 11 12:02:38 186ab60 balena-supervisor[2353]: [api]     GET /v1/healthy 200 - 5.166 ms

Thank you for sharing what you found. I don’t see anything out of the ordinary :thinking:

  • Do you have other devices online and successfully pulling releases?
  • Is this the only device suffering this issue?
    • Is it on the same network as any of the other devices?

@myarmolinsky
I have 9 other devices (10 total), but this is the only device in this fleet…

I have tried moving it to another fleet - no luck.
Creating a new image using balenaOS 2.113.18 (same as the other devices) - no luck.

The other nine devices have also been set up at this location, so the network should be fine…

This error line appears at the start when restarting the device:

May 11 12:36:01 2e8277d balena-supervisor[2206]: Error response from daemon: No such container: resin_supervisor

Ah, that is indeed a noteworthy error; allow me to look at some internal notes and see if I can find something relevant to it.
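In the meantime, a quick check you could run from the HostOS terminal to see whether the engine has any supervisor container at all. This is just a hedged sketch; the grep pattern is a guess at the container naming, so adjust it if yours differs:

# Look for any container whose name mentions the supervisor
balena ps -a | grep -i supervisor
# Check the state of the supervisor service itself
systemctl status balena-supervisor --no-pager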

I found some notes from a similar issue another user experienced. Unfortunately there isn’t much more detail about the cause of this issue other than “most likely due to corruption”, and I cannot confidently say that this is the issue you are experiencing, as there are more errors listed in the notes than you have shown. In any case, the solution to that previous issue appeared to be re-downloading the supervisor container. Could you please try the following?:

# Stop the supervisor
systemctl stop balena-supervisor
# Remove the supervisor container
balena stop balena_supervisor. | xargs balena rm
# Get the supervisor image using balena images
# remove the image
balena rmi -f <image id>
# Do a system prune
balena system prune 
# Re-download the supervisor
update-balena-supervisor
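Step 3 above doesn’t spell out the lookup, so here is a minimal sketch of finding the image ID to pass to balena rmi. The grep pattern assumes the supervisor image name contains “supervisor” or the registry2 prefix, which may not match your setup:

# List all images and pick out the supervisor one
balena images
# Or narrow the list down (the pattern is an assumption about the image naming)
balena images | grep -iE 'supervisor|registry2'
# Then remove it by the IMAGE ID shown in the output:
# balena rmi -f <image id>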

@myarmolinsky
I tried that, please see the log:

root@2e8277d:~# systemctl stop balena-supervisor
root@2e8277d:~# balena stop balena_supervisor. | xargs balena rm
Error response from daemon: No such container: balena_supervisor.
"balena-engine rm" requires at least 1 argument.
See 'balena-engine rm --help'.

Usage:  balena-engine rm [OPTIONS] CONTAINER [CONTAINER...]

Remove one or more containers
root@2e8277d:~# balena system prune
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all dangling images
  - all dangling build cache

Are you sure you want to continue? [y/N] y
Deleted Containers:
27ed129e3e008dbc32f35a9cca61931e9ae5bdcbe89d0c139b0bda42aca2c128

Deleted Networks:
supervisor0

Total reclaimed space: 70B
root@2e8277d:~# update-balena-supervisor
Getting image name and version...
No supervisor configuration found from API.
Using preloaded values.
Set based on preloaded values image=registry2.balena-cloud.com/v2/61980ef3bcf727f847a4217aa453d8e2 and version=v14.9.4.
Getting image id...
Supervisor registry2.balena-cloud.com/v2/61980ef3bcf727f847a4217aa453d8e2 at version v14.9.4 already downloaded.
root@2e8277d:~# 

Changed the supervisor version to the newest and ran the update command again. It looks like a successful install, but something’s not right…

root@2e8277d:~# update-balena-supervisor
Getting image name and version...
Getting image id...
Error: No such object: registry2.balena-cloud.com/v2/3e7c8f0004e7053c30c67557f0910958
Stop supervisor...
Pulling supervisor registry2.balena-cloud.com/v2/3e7c8f0004e7053c30c67557f0910958 at version latest...
Using default tag: latest
latest: Pulling from v2/3e7c8f0004e7053c30c67557f0910958
547446be3368: Already exists 
dd0daafddb80: Already exists 
147fe00c8e99: Pull complete 
650791e9acde: Pull complete 
bbe36cd1dfbd: Pull complete 
f8fd5bb4306e: Pull complete 
861ba878b02b: Pull complete 
165f086c21b9: Pull complete 
c280f26dba4d: Pull complete 
1884beb9be9b: Pull complete 
f80620569712: Pull complete 
2ea1f28244a7: Pull complete 
Digest: sha256:9d315e5f09925407d76b64d922a49c20047cbaf9aba146fb2ac353623ffa8230
Status: Downloaded newer image for registry2.balena-cloud.com/v2/3e7c8f0004e7053c30c67557f0910958:latest
registry2.balena-cloud.com/v2/3e7c8f0004e7053c30c67557f0910958:latest
Start supervisor...
root@2e8277d:~# balena stop balena_supervisor. | xargs balena rm
Error response from daemon: No such container: balena_supervisor.
"balena-engine rm" requires at least 1 argument.
See 'balena-engine rm --help'.

Usage:  balena-engine rm [OPTIONS] CONTAINER [CONTAINER...]

Remove one or more containers
root@2e8277d:~# 

root@2e8277d:~# balena stop balena_supervisor. | xargs balena rm
Error response from daemon: No such container: balena_supervisor.

Hmm, perhaps this was a typo in the internal notes and the input was supposed to be balena stop balena-supervisor (not balena_supervisor).

Could you please try again with balena-supervisor instead of balena_supervisor? As mentioned in the output, only stopped containers should be removed by balena system prune, and it seems the previous command failed to stop the supervisor container.
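If that still fails, it may help to see exactly which container names the engine actually has before stopping anything. A small sketch; the --format template is standard docker syntax, which balena-engine should also accept:

# Print every container name and its status, running or not
balena ps -a --format '{{.Names}}\t{{.Status}}'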

Still no luck:

Error response from daemon: No such container: balena-supervisor

root@2e8277d:~# balena stop balena_supervisor. | xargs balena rm
Error response from daemon: No such container: balena_supervisor.

Same errors as above; again, please try replacing balena_supervisor with `balena-supervisor` :slight_smile: :crossed_fingers:

Still no luck:

Error response from daemon: No such container: balena-supervisor

Hmm, I see :thinking: Thank you for trying that
I’ll keep looking through our internal notes and let you know what else I find

Just to confirm that the following isn’t the issue: I see there’s a dot in the command I pasted (balena stop balena_supervisor.). Could you try it without the dot: balena stop balena-supervisor | xargs balena rm

It’s just odd that it doesn’t think the container exists, and I see the error specifically says: Error response from daemon: No such container: balena_supervisor. (note the . at the end, which I am not certain is part of the actual message)

@myarmolinsky
In my latest comment you can see the error without the .

I have tried with and without the .

Ah okay, apologies for asking despite that, just wanted to make sure. Thank you for trying it.

Hey, sorry for not responding further yesterday. Could you please confirm whether the device is still in the same state it started in after the attempts we have made to fix it so far?

@myarmolinsky
Yes, it’s the same. I will try another SD card later today…

Okay, please let us know how it goes. By the way, what was your reasoning for suspecting the SD card may be the issue?