VPN service takes a lot of RAM

Hi,

I am setting up a balena test environment where I am connecting a few devices, and I noticed that the vpn and api services are taking quite a lot of RAM.
In particular, the RAM footprint of the vpn container seems to increase over time…

This is how much memory it is using after 27h of uptime:

Any suggestions? Thanks

I was looking at a VPN machine in a staging environment as an (arguable) basis for comparison, and (coincidentally?) the VPN container was taking around 350MB of RAM, which is not too different from the 387MB you’ve shared in the screenshot. The VPN container I looked at had been up for 6 days. I wonder if it could be a case of some memory caches in the VPN implementation that grow up to a certain limit and then start discarding older entries, so the usage stops growing. If you keep it running for longer, say 2 or 3 days, it just might stabilise.

Hi,

I just checked the stats…

vpn_1 0.02% 800.3MiB / 3.852GiB 20.29% 104MB / 92.7MB 80.3MB / 135kB 49

It really keeps growing and growing… (800.3MiB / 3.852GiB)

The machine had been up for 3 days and it’s on the latest version (8.15.1 if I recall correctly).

Some more info: I currently have 2 or 3 devices connected on average and use the connect proxy quite a bit (I wouldn’t say heavily, though).

Thank you for your support.
Federico

Hi,

Any news on this issue?

Best,
Federico

What version of the supervisor are you seeing this on?

Do you mean the supervisor on the IoT device?
It should be 9.9.0

What board are you running on then?

All my devices are at 9.15.7, or 9.15 if they have a production image.

I was going to check this out and confirm, but I can’t replicate.

Maybe try a scheduled reboot from within a container?
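For example, here is a rough sketch of what I mean, assuming the host is a balenaOS device with the supervisor API exposed to the container (via the io.balena.features.supervisor-api label); the variable names and endpoint are from the supervisor API docs as I remember them, so double-check for your supervisor version:

    # From a container with the supervisor API enabled, the supervisor injects
    # BALENA_SUPERVISOR_ADDRESS and BALENA_SUPERVISOR_API_KEY. A device reboot
    # can then be requested like this (wire it into cron or a sleep loop):
    curl -X POST -H "Content-Type: application/json" \
        "$BALENA_SUPERVISOR_ADDRESS/v1/reboot?apikey=$BALENA_SUPERVISOR_API_KEY"

If the host isn’t a balenaOS device, the equivalent would just be a scheduled restart of the vpn container itself.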

Good luck!

It is running on a Siemens IOT2000, if that makes any difference…

I set a limit on the available RAM of the vpn container, so it restarts when the RAM is full. I don’t really like this though…
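For reference, something along these lines (the container name is taken from the stats output above; the limit value is just an example):

    # Cap the vpn container's memory so an OOM kill plus the restart policy
    # effectively recycles it instead of letting it grow unbounded.
    docker update --memory 512m --memory-swap 512m vpn_1
    docker update --restart unless-stopped vpn_1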

Hi,
I am asking our VPN expert to take a look. He will get back to you as soon as possible.
Kind regards,
Theodor

Hi,

While the VPN service is quite memory-hungry, I’m not aware of any memory leaks in the stack currently. In production we do observe a relatively fast growth of memory usage after a deploy, but that flattens out and doesn’t appear to continue growing.

I will set up an open instance here and leave it running for a few days, ideally with some automated traffic, and see if I can track down any memory issues. I’ll get back to you if I discover anything relevant.

Thanks.