Node-RED Dashboard memory leak

This may not be specifically related to Node-RED, but I am using the resin-node-red and resin-node-red-electron projects for a couple of applications and would like to raise an issue I have been having with memory leakage. My suspicion is that it is coming from the dashboard, and I am following this GitHub issue in an attempt to debug it. My project is located here.

[screenshot: resin-node-red-dashboard-memoryleak]

It could be that I am not saving my data in the right place; I will be looking into that next. I will be doing more debugging and testing to isolate the issue and will update this thread as I make progress.

I might have my answer: after reworking the chart parameters, memory usage seems to have stabilized. [screenshot: resin-node-red-dashboard-memoryleak-chart-rework]

Hi, glad to hear that you have seen some improvement. Do you have any details on how you reworked the chart parameters / what you changed in your application to get that improvement? Does the stability hold? It does look like all of this is on the Node-RED side, but let us know if anything on the resin side would help debug the problem.

Here's another picture of the memory usage for the dashboard charts. [screenshot: 2017-11-11_02-00-07]

What I suspect is going on is that my various sensor graphs, some reading every second, are filling up memory, and I am tweaking my requirements to compensate. Specifically, on the dashboard chart node I am limiting the number of data points per time interval, and that has shown some improvement. In the picture above you can see the small drops in memory when the hour ticks over.
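For the once-a-second sensors, one thing worth trying alongside the chart node's own point limit is thinning the stream before it ever reaches the dashboard. A rough sketch of a function node that forwards at most one reading per minute (the one-minute interval is just an example value, not necessarily what I run):

// Node-RED function node: downsample a fast sensor stream before it reaches a ui_chart node.
// Only one message per interval gets through, so the chart accumulates far fewer points.
const interval = 60 * 1000;                    // example: at most one point per minute
const lastSent = context.get('lastSent') || 0; // timestamp of the last forwarded reading
const now = Date.now();

if (now - lastSent >= interval) {
    context.set('lastSent', now);
    return msg;                                // forward this reading to the chart
}
return null;                                   // silently drop the rest

Dropping the intermediate readings like this keeps the chart's internal buffer small without having to touch the sensor flows themselves.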

I have turned off the incoming data and will watch the memory usage with only the memory usage graph running, and continue to observe the behavior.
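For reference, that kind of memory graph does not need much: a function node fired by an Inject node every few seconds is enough. A minimal sketch, assuming Node's os module is exposed to function nodes through functionGlobalContext in settings.js (my actual flow may look a little different):

// Function node, triggered by an Inject node every few seconds.
// Assumes settings.js contains: functionGlobalContext: { os: require('os') }
const os = global.get('os');

const total = os.totalmem();              // bytes of system RAM
const used = total - os.freemem();        // bytes currently in use

msg.payload = Math.round((used / total) * 1000) / 10;  // percent used, one decimal place
msg.topic = 'memory used (%)';
return msg;                               // wire the output straight into a ui_chart node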

I have formulated a question related to the resin.io part of this issue. How can you use resin to monitor and manage the memory in my application containers? Can you point me to the memory management components in ResinOS?

I brought another Pi 3 online to run just the memory usage flow, and it is using 90% of the available memory.

@imrehg memory usage has been completely stabilized after the latest resinOS update to 2.7.5.

Glad to hear it, and thanks for digging into this at such a low level. What version were you using previously? Our guess is that either the balena changes or the Supervisor improvements are the reason for the stability. Glad that it shows improvements on the resinOS side :slight_smile:

My projects were on 2.3.0 previously.

I think both Docker in resinOS and the resin supervisor were updated since then, so indeed, glad to see that there are actual practical improvements (here is the resinOS changelog and the resin-supervisor changelog, just for reference).

Thanks for the feedback!

Hmm, I think I have the same problem.
I thought it was node-red-dashboard, but I removed it from my project and the problem persists, so I think it is still somewhere in Node-RED itself.
Within a couple of hours the memory fills up. Is there a way to find out where the memory goes and why it fills up?
I have already tried adding the parameter max_old_space_size=128 in start.sh, like this:

# Start app
# --max_old_space_size=128 caps the V8 heap at ~128 MB; --trace-gc logs each garbage collection pass
node-red --max_old_space_size=128 --trace-gc --settings /usr/src/app/settings.js

Because before that I got an error like:

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed

Does anyone have an idea how to stop the app restarting after 4 hours when the memory is full?
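One thing I might try to find out where the memory actually goes is taking heap snapshots and comparing them. A rough sketch, assuming the heapdump npm module is added to the project's package.json and exposed to function nodes via functionGlobalContext in settings.js (the module choice and the /data path are just my assumptions, not something already in this project):

// Function node, wired to an Inject node so a snapshot can be triggered on demand.
// Assumes settings.js contains: functionGlobalContext: { heapdump: require('heapdump') }
const heapdump = global.get('heapdump');

// /data is the usual persistent, writable path on a resin device; any writable directory works.
const file = '/data/heap-' + Date.now() + '.heapsnapshot';

heapdump.writeSnapshot(file, function (err, filename) {
    if (err) {
        node.error('heap snapshot failed: ' + err);
    } else {
        node.warn('heap snapshot written to ' + filename);
    }
});
return null;

Loading two snapshots taken a few hours apart into the Chrome DevTools Memory tab should show which objects keep growing.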

What version of ResinOS and of that project are you running, @RDB, and have you applied the tweaks that kanr mentioned?

I just updated to the latest Resin OS, 2.12.7+rev1; when I noticed the problem I was on Resin OS 2.12.5+rev1.

@kanr what node did you use to check the system memory usage? I just added the node-red-contrib-gc node to see if that gives me some insight.

I have added the node-red-contrib-gc and node-red-contrib-os nodes to my project and am logging their output to a chart.

Both of the systems are running the same application, built with Node-RED.
They are both reading and writing Modbus devices over serial. The one with stable memory consumption has 3 Modbus devices linked.

The device with the memory leak has 5 Modbus devices linked.

Does anyone have an idea how to determine in which function or node the leak occurs?

I have changed some of the flows I had running and removed the delay nodes that had the drop-messages checkbox enabled. This has solved my problem. I think too many messages were being dropped and the GC couldn't clean them up in time.