I have the latest supervisor version. This is my balena stats output:
CONTAINER ID   NAME                                  CPU %    MEM USAGE / LIMIT     MEM %    NET I/O   BLOCK I/O        PIDS
c166ad4646c2   lights_2_1_localrelease               31.98%   142.9MiB / 3.754GiB   3.72%    0B / 0B   21.7MB / 0B      30
d3a2af7176be   session-supervisor_6_1_localrelease   0.19%    61.42MiB / 3.754GiB   1.60%    0B / 0B   1.89MB / 0B      33
0a95f5dc2fb5   access_1_1_localrelease               0.16%    55.87MiB / 3.754GiB   1.45%    0B / 0B   1.6MB / 0B       34
c03488ca7dcf   balena_supervisor                     2.19%    2.737GiB / 3.754GiB   72.90%   0B / 0B   55.1MB / 9.9MB   12
What can I do to prevent this?
Hi @tommyzat,
That’s interesting! Can you send us some more info about the device? Which device type, OS version, and supervisor version are you running? How long has the device been on? Is it connected to Balena Cloud, Open Balena, or running locally?
If you are connected to the Cloud, you can see most of this information in the Cloud dashboard. Alternatively, running cat /etc/os-release on the host OS of the device will get us a bunch of helpful info to start with, too.
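If it is easier, the few commands below, run from the host OS terminal, gather most of that in one go. This is only a rough sketch: the balena_supervisor container name and the --format template assume a standard balenaOS install.
# Run on the host OS of the device
cat /etc/os-release                                                 # OS name and version
uptime                                                              # how long the device has been up
balena ps --filter name=balena_supervisor --format '{{.Image}}'     # supervisor image tag, which includes the version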
Hi @tommyzat,
As requested by my colleague earlier, please do share the supervisor version and the contents of /etc/os-release. Please share the following information as well:
- How long had the supervisor process been running when you noticed the high memory numbers?
- How much memory was reported as consumed when the container started? This number will let us figure out the rate of leakage.
- The important memory number for any process is its RSS. The output of the following command (run on the host OS) should dump that info:
balena exec -it balena_supervisor top -mbn1 | grep -E 'PID|node'
The RSS column is in kibibytes (unless suffixed by an m). See the sketch after this list for a simple way to log this value over time.
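If you would rather not watch this manually, a loop like the one below (run on the host OS) can log the supervisor's RSS over time so we can estimate the leak rate. This is only a sketch: the /tmp/supervisor-rss.log path and the five-minute interval are arbitrary choices, and the -it flags are dropped because a TTY is not needed for batch-mode top inside a loop.
# Sample the supervisor's memory usage every 5 minutes
while true; do
  date -u >> /tmp/supervisor-rss.log
  balena exec balena_supervisor top -mbn1 | grep -E 'PID|node' >> /tmp/supervisor-rss.log
  sleep 300
done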
Answers to the above questions will help us move in the right direction.
Thanks and regards,
Pranav