Hi, do you see any errors or oddities in the device logs? Please also check the supervisor logs (open the HostOS terminal and run journalctl -n 100 -u balena-supervisor) and let us know about any errors or oddities there as well. It’s also worth going to the device’s Diagnostics page and running the health checks and diagnostics to see whether they report any errors.
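For reference, here is everything to run from the HostOS terminal (the status check is an optional extra, not required):
# Last 100 supervisor log lines; --no-pager prints straight to the terminal
journalctl -n 100 -u balena-supervisor --no-pager
# Optional: current status of the supervisor service
systemctl status balena-supervisor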
@myarmolinsky
Diagnostics and health check passed… The HostOS terminal log shows:
root@186ab60:~# journalctl -n 100 -u balena-supervisor
May 11 10:09:16 186ab60 balena-supervisor[2353]: [warn] Invalid firewall mode: 0. Reverting to state: off
May 11 10:09:16 186ab60 balena-supervisor[2353]: [info] Applying firewall mode: off
May 11 10:09:16 186ab60 balena-supervisor[2353]: [success] Firewall mode applied
May 11 10:09:16 186ab60 balena-supervisor[2353]: [debug] Starting api binder
May 11 10:09:17 186ab60 balena-supervisor[2353]: [debug] Performing database cleanup for container log timestamps
May 11 10:09:17 186ab60 balena-supervisor[2353]: (node:1) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and us>
May 11 10:09:17 186ab60 balena-supervisor[2353]: (Use `node --trace-deprecation ...` to show where the warning was created)
May 11 10:09:17 186ab60 balena-supervisor[2353]: [info] API Binder bound to: https://api.balena-cloud.com/v6/
May 11 10:09:17 186ab60 balena-supervisor[2353]: [event] Event: Supervisor start {}
May 11 10:09:17 186ab60 balena-supervisor[2353]: [info] Starting API server
May 11 10:09:17 186ab60 balena-supervisor[2353]: [info] Supervisor API successfully started on port 48484
May 11 10:09:17 186ab60 balena-supervisor[2353]: [debug] Ensuring device is provisioned
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug] Connectivity check enabled: true
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug] Starting periodic check for IP addresses
May 11 10:09:18 186ab60 balena-supervisor[2353]: [info] Reporting initial state, supervisor version and API info
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug] VPN status path exists.
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug] Starting current state report
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug] Starting target state poll
May 11 10:09:18 186ab60 balena-supervisor[2353]: [info] VPN connection is active.
May 11 10:09:18 186ab60 balena-supervisor[2353]: [info] Waiting for connectivity...
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug] Skipping preloading
May 11 10:09:18 186ab60 balena-supervisor[2353]: [debug] Spawning journalctl -a --follow -o json _SYSTEMD_UNIT=balena.service
May 11 10:09:18 186ab60 balena-supervisor[2353]: [info] Applying target state
May 11 10:09:20 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 10:09:21 186ab60 balena-supervisor[2353]: [warn] Failed to read docker configuration of network supervisor0: (HTTP code 404>
May 11 10:09:21 186ab60 balena-supervisor[2353]: [debug] Creating supervisor0 network
May 11 10:09:22 186ab60 balena-supervisor[2353]: [debug] Finished applying target state
May 11 10:09:22 186ab60 balena-supervisor[2353]: [success] Device state apply success
May 11 10:09:22 186ab60 balena-supervisor[2353]: [info] Applying target state
May 11 10:09:22 186ab60 balena-supervisor[2353]: [debug] Finished applying target state
May 11 10:09:22 186ab60 balena-supervisor[2353]: [success] Device state apply success
May 11 10:09:28 186ab60 balena-supervisor[2353]: [info] Internet Connectivity: OK
May 11 10:13:59 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 33.736 ms
May 11 10:14:21 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 10:19:00 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 13.395 ms
May 11 10:19:12 186ab60 balena-supervisor[2353]: [debug] Attempting container log timestamp flush...
May 11 10:19:12 186ab60 balena-supervisor[2353]: [debug] Container log timestamp flush complete
May 11 10:19:22 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 10:24:00 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 10.626 ms
May 11 10:29:01 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 14.496 ms
May 11 10:29:12 186ab60 balena-supervisor[2353]: [debug] Attempting container log timestamp flush...
May 11 10:29:12 186ab60 balena-supervisor[2353]: [debug] Container log timestamp flush complete
May 11 10:34:02 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 14.030 ms
May 11 10:37:30 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 10:39:03 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 13.075 ms
May 11 10:39:12 186ab60 balena-supervisor[2353]: [debug] Attempting container log timestamp flush...
May 11 10:39:12 186ab60 balena-supervisor[2353]: [debug] Container log timestamp flush complete
May 11 10:42:30 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 10:44:04 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 10.293 ms
May 11 10:49:05 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 10.501 ms
May 11 10:49:12 186ab60 balena-supervisor[2353]: [debug] Attempting container log timestamp flush...
May 11 10:49:12 186ab60 balena-supervisor[2353]: [debug] Container log timestamp flush complete
May 11 10:54:06 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 15.315 ms
May 11 10:59:06 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 20.732 ms
May 11 10:59:07 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 10:59:12 186ab60 balena-supervisor[2353]: [debug] Attempting container log timestamp flush...
May 11 10:59:12 186ab60 balena-supervisor[2353]: [debug] Container log timestamp flush complete
May 11 11:04:07 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 11:04:07 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 12.119 ms
May 11 11:09:08 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 11:09:08 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 11.086 ms
May 11 11:09:12 186ab60 balena-supervisor[2353]: [debug] Attempting container log timestamp flush...
May 11 11:09:12 186ab60 balena-supervisor[2353]: [debug] Container log timestamp flush complete
May 11 11:14:08 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 11:14:09 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 8.854 ms
May 11 11:19:10 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 14.032 ms
May 11 11:19:12 186ab60 balena-supervisor[2353]: [debug] Attempting container log timestamp flush...
May 11 11:19:12 186ab60 balena-supervisor[2353]: [debug] Container log timestamp flush complete
May 11 11:24:11 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 15.619 ms
May 11 11:27:10 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 11:29:12 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 13.635 ms
May 11 11:29:12 186ab60 balena-supervisor[2353]: [debug] Attempting container log timestamp flush...
May 11 11:29:12 186ab60 balena-supervisor[2353]: [debug] Container log timestamp flush complete
May 11 11:32:10 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 11:34:13 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 8.245 ms
May 11 11:39:12 186ab60 balena-supervisor[2353]: [debug] Attempting container log timestamp flush...
May 11 11:39:12 186ab60 balena-supervisor[2353]: [debug] Container log timestamp flush complete
May 11 11:39:13 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 11:39:13 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 11.750 ms
May 11 11:44:14 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 11:44:14 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 8.787 ms
May 11 11:48:10 186ab60 balena-supervisor[2353]: [info] VPN connection is not active.
May 11 11:48:19 186ab60 balena-supervisor[2353]: [info] Waiting for connectivity...
May 11 11:49:12 186ab60 balena-supervisor[2353]: [debug] Attempting container log timestamp flush...
May 11 11:49:12 186ab60 balena-supervisor[2353]: [debug] Container log timestamp flush complete
May 11 11:49:15 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 9.980 ms
May 11 11:51:58 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 11:54:16 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 9.061 ms
May 11 11:56:58 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 11:59:12 186ab60 balena-supervisor[2353]: [debug] Attempting container log timestamp flush...
May 11 11:59:12 186ab60 balena-supervisor[2353]: [debug] Container log timestamp flush complete
May 11 11:59:17 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 7.006 ms
May 11 12:00:35 186ab60 balena-supervisor[2353]: [info] VPN connection is active.
May 11 12:00:35 186ab60 balena-supervisor[2353]: [info] VPN connection is active.
May 11 12:00:35 186ab60 balena-supervisor[2353]: [info] VPN connection is active.
May 11 12:00:35 186ab60 balena-supervisor[2353]: [info] VPN connection is active.
May 11 12:00:39 186ab60 balena-supervisor[2353]: [info] Internet Connectivity: OK
May 11 12:01:59 186ab60 balena-supervisor[2353]: [info] Reported current state to the cloud
May 11 12:02:37 186ab60 balena-supervisor[2353]: [api] GET /v1/device 200 - 44.521 ms
May 11 12:02:38 186ab60 balena-supervisor[2353]: [api] GET /v1/healthy 200 - 5.166 ms
I found some notes from a similar issue another user experienced. Unfortunately there aren’t many more details about the cause of that issue beyond “most likely due to corruption”, and I cannot confidently say it is the same issue you are experiencing, as the notes list more errors than you have shown. In any case, the solution to that previous issue appeared to be re-downloading the supervisor container. Could you please try the following?
# Stop the supervisor
systemctl stop balena-supervisor
# Remove the supervisor container
balena stop balena_supervisor. | xargs balena rm
# Get the supervisor image using balena images
# remove the image
balena rmi -f <image id>
# Do a system prune
balena system prune
# Re-download the supervisor
update-balena-supervisor
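I believe update-balena-supervisor should restart the service on its own, but just in case, here’s a sketch of how to start it manually afterwards and confirm it came back up (the grep is just for convenience):
# Start the supervisor service again
systemctl start balena-supervisor
# Confirm the service is active and the container is running
systemctl status balena-supervisor
balena ps | grep supervisor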
root@2e8277d:~# systemctl stop balena-supervisor
root@2e8277d:~# balena stop balena_supervisor. | xargs balena rm
Error response from daemon: No such container: balena_supervisor.
"balena-engine rm" requires at least 1 argument.
See 'balena-engine rm --help'.
Usage: balena-engine rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
root@2e8277d:~# balena system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N] y
Deleted Containers:
27ed129e3e008dbc32f35a9cca61931e9ae5bdcbe89d0c139b0bda42aca2c128
Deleted Networks:
supervisor0
Total reclaimed space: 70B
root@2e8277d:~# update-balena-supervisor
Getting image name and version...
No supervisor configuration found from API.
Using preloaded values.
Set based on preloaded values image=registry2.balena-cloud.com/v2/61980ef3bcf727f847a4217aa453d8e2 and version=v14.9.4.
Getting image id...
Supervisor registry2.balena-cloud.com/v2/61980ef3bcf727f847a4217aa453d8e2 at version v14.9.4 already downloaded.
root@2e8277d:~#
root@2e8277d:~# balena stop balena_supervisor. | xargs balena rm
Error response from daemon: No such container: balena_supervisor.
Hmm, perhaps this was a typo in the internal notes and the input was supposed to be balena stop balena-supervisor (not balena_supervisor).
Could you please try again with balena-supervisor instead of balena_supervisor? As the output mentions, balena system prune only removes stopped containers, and it seems the previous command failed to stop the supervisor container.
Just to confirm the following isn’t the issue: I see there’s a trailing dot in the command I pasted (balena stop balena_supervisor.). Could you try it without the dot: balena stop balena-supervisor | xargs balena rm
It’s just odd that the engine doesn’t think the container exists. The error specifically says: Error response from daemon: No such container: balena_supervisor. (note the . at the end, which I am not certain is part of the actual message).
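To take the guesswork out of the name entirely, one option is to ask the engine for the exact names of every container it knows about, for example:
# List all containers (running or stopped) by their exact names
balena ps -a --format '{{.Names}}'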
Hey, sorry for going quiet yesterday. Could you please confirm whether the device is still in the same state it started in, after the attempts we have made to fix it so far?