For long-running Beamcoder operations, memory increases until the process hits OOM. The JS heap does not seem to be affected, only the RSS size.
Confirmed with Node v14.16, v15, and v16.13.2
You can reproduce with a simple demux loop that does nothing:
```js
// leaktest.js - leave running to see the leak
const beamcoder = require('beamcoder');

async function run() {
  const demux = await beamcoder.demuxer({ url: '<long running stream here rtsp/rtmp/etc>' });
  while (true) {
    const packet = await demux.read();
  }
}
run();
// end leaktest.js
```
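To make the growth visible without an external monitor, the read loop can log `process.memoryUsage()` periodically (a minimal sketch; the 10-second interval and MB formatting are just for illustration):

```js
// memwatch.js - same loop as above, with periodic memory logging
const beamcoder = require('beamcoder');

async function run() {
  const demux = await beamcoder.demuxer({ url: '<long running stream here rtsp/rtmp/etc>' });
  let packets = 0;
  setInterval(() => {
    const { rss, heapUsed } = process.memoryUsage();
    // RSS climbs steadily while heapUsed stays roughly flat
    console.log(`packets=${packets} rss=${(rss / 1e6).toFixed(1)}MB heapUsed=${(heapUsed / 1e6).toFixed(1)}MB`);
  }, 10000);
  while (true) {
    await demux.read();
    packets++;
  }
}
run();
```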
To troubleshoot, I first changed line 244 in demux.cc to skip the libav read and replace it with just a packet alloc:
```cpp
// ret = av_read_frame(fmtCtx, c->packet);
ret = av_packet_alloc();
```
With that change the memory leak goes away, so some packet data/struct allocated after this point is not being freed correctly.
In further testing, the leak also appears in decode/encode/filter/mux operations, so there is some common denominator.
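For example, adding a decode step to the loop above shows the same RSS growth (a sketch assuming the usual `beamcoder.decoder({ demuxer, stream_index })` / `decoder.decode(packet)` calls, with stream 0 assumed to be the video stream):

```js
// decodeleak.js - same pattern with a decode step; RSS still grows
const beamcoder = require('beamcoder');

async function run() {
  const demux = await beamcoder.demuxer({ url: '<long running stream here rtsp/rtmp/etc>' });
  // stream_index 0 assumed to be the video stream here
  const decoder = beamcoder.decoder({ demuxer: demux, stream_index: 0 });
  while (true) {
    const packet = await demux.read();
    if (packet.stream_index !== 0) continue;
    await decoder.decode(packet); // resulting frames are discarded immediately
  }
}
run();
```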
Note that packet data size does not appear to affect the leak size.
Processing 10 streams concurrently through a demux -> decode -> filter -> encode -> mux pipeline uses many gigabytes of RAM after 12 hours or so.
As far as I can tell, the leak is somewhere between 200 and 900 bytes per packet/frame processed.
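A rough way to arrive at that figure is to divide the RSS growth by the number of packets read (a sketch; the 100000-packet reporting interval is arbitrary, and the result is noisy because RSS also includes allocator overhead and fragmentation):

```js
// leakrate.js - rough estimate of RSS growth per packet
const beamcoder = require('beamcoder');

async function run() {
  const demux = await beamcoder.demuxer({ url: '<long running stream here rtsp/rtmp/etc>' });
  const rssStart = process.memoryUsage().rss;
  let packets = 0;
  while (true) {
    await demux.read();
    if (++packets % 100000 === 0) {
      const grown = process.memoryUsage().rss - rssStart;
      console.log(`~${(grown / packets).toFixed(0)} bytes of RSS growth per packet after ${packets} packets`);
    }
  }
}
run();
```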
Could this have something to do with a malloc() happening inside a different thread than the release?
I'm not experienced enough with the multi-threading setup under Node to trace this properly.
Things I've tried:
- Using a worker thread for each packet read operation (how it works currently). Result: leak.
- Converting demuxer.readFrame() into a synchronous function (instead of using `napi_create_async_work` to send work to the thread pool). Result: no leak.
- Creating one long-lived thread with `napi_create_async_work`, then passing packets from that thread to JS using `napi_call_threadsafe_function()`. Result: no leak.
Of course, the above fixes only apply to the demuxer, and I'm not sure how to implement similar changes in the other objects like the filterer or muxer. They also don't make it clear to me where the actual issue is.
@scriptorian do you have any insight into this?