
Source: The memory leak seems still exist #4096

Closed
tonyye2018 opened this issue Jun 18, 2024 · 4 comments · Fixed by #4097
Assignees
winlinvip
Labels
TransByAI Translated by AI/GPT.

Comments


tonyye2018 commented Jun 18, 2024

The memory issue with SRS still appears to persist. I push streams to SRS over RTMP and play them with WebRTC. After playing for a while I close the streams, then push and play again, and close again, repeating the cycle.
Version: 6.0 develop branch, 2024-06-18.
These are my test results (all values in KB). SRS memory at startup: 19032.
datetime   play 16 streams (KB)   close all streams (KB)
13:58      77624                  53152
14:03      79229                  79220
14:06      95260                  95260
14:13      96260                  96260
14:18      97256                  97256
14:28      98428                  98428
14:35      99500                  99500
14:42      100768                 100768
14:53      101780                 101900
14:59      104476                 104476
15:33      107744                 107744
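A sketch of how samples like the ones above can be collected, assuming Linux with ps available and the server running as a process named srs; the 5-minute interval and output format are illustrative, not part of SRS:

#!/usr/bin/env python3
# Sample the resident memory (RSS, in KB) of a running SRS process.
import subprocess
import time

def srs_rss_kb():
    # `ps -C srs -o rss=` prints the RSS (KB) of each process named 'srs'.
    out = subprocess.check_output(["ps", "-C", "srs", "-o", "rss="])
    return sum(int(line) for line in out.split())

while True:
    print(time.strftime("%H:%M"), srs_rss_kb(), flush=True)
    time.sleep(300)  # one sample every 5 minutes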


winlinvip added the TransByAI (Translated by AI/GPT) label Jun 18, 2024
winlinvip (Member)

What's your SRS config?

winlinvip changed the title from "The memory leak seems still exist" to "Source: The memory leak seems still exist" Jun 19, 2024
winlinvip self-assigned this Jun 19, 2024
tonyye2018 (Author) commented Jun 20, 2024

I use rtmp2rtc.conf, or srs.conf with rtmp_to_rtc enabled.

http_server {
    enabled         on;
    listen          8080;
    dir             ./objs/nginx/html;
}

http_api {
    enabled         on;
    listen          1985;
}
stats {
    network         0;
}
rtc_server {
    enabled on;
    listen 8000; # UDP port
    # @see https://ossrs.net/lts/zh-cn/docs/v4/doc/webrtc#config-candidate
    candidate $CANDIDATE;
}

vhost __defaultVhost__ {
    rtc {
        enabled     on;
        # @see https://ossrs.net/lts/zh-cn/docs/v4/doc/webrtc#rtmp-to-rtc
        rtmp_to_rtc on;
        # @see https://ossrs.net/lts/zh-cn/docs/v4/doc/webrtc#rtc-to-rtmp
        rtc_to_rtmp on;
    }
    http_remux {
        enabled     on;
        mount       [vhost]/[app]/[stream].flv;
    }
}
Or, with srs.conf:
listen              1935;
max_connections     1000;
srs_log_tank        file;
#srs_log_file        ./objs/srs.log;
daemon              on;
http_api {
    enabled         on;
    listen          1985;
}
http_server {
    enabled         on;
    listen          8080;
    dir             ./objs/nginx/html;
}
rtc_server {
    enabled on;
    listen 8000; # UDP port
    # @see https://ossrs.net/lts/zh-cn/docs/v4/doc/webrtc#config-candidate
    candidate $CANDIDATE;
}
vhost __defaultVhost__ {
    hls {
        enabled         on;
    }
    http_remux {
        enabled     on;
        mount       [vhost]/[app]/[stream].flv;
    }
    rtc {
        enabled     on;
        # @see https://ossrs.net/lts/zh-cn/docs/v4/doc/webrtc#rtmp-to-rtc
        rtmp_to_rtc on;
        # @see https://ossrs.net/lts/zh-cn/docs/v4/doc/webrtc#rtc-to-rtmp
        rtc_to_rtmp off;
    }

    play {
        gop_cache_max_frames 2500;
    }
}
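For completeness, the push/close cycle described in this issue can be driven with a small loop. This is a minimal sketch, assuming ffmpeg is installed and a local input.flv exists; the host, stream names, and timings are illustrative. WebRTC playback still has to be started separately, e.g. from the demo player pages served on port 8080.

#!/usr/bin/env python3
# Repeatedly push 16 RTMP streams to SRS, stop them all, and repeat.
import subprocess
import time

SRS = "rtmp://127.0.0.1/live"

while True:
    procs = [
        subprocess.Popen([
            "ffmpeg", "-re", "-stream_loop", "-1", "-i", "input.flv",
            "-c", "copy", "-f", "flv", f"{SRS}/stream{i}",
        ])
        for i in range(16)
    ]
    time.sleep(300)      # let the 16 streams play for a while
    for p in procs:
        p.terminate()    # close all streams
        p.wait()
    time.sleep(60)       # idle, then start the next cycle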

tonyye2018 (Author)

I am using the build with the newest code you merged this morning, and I get the same result. In srs.log I found "free rtc source id=[4t2o11ll]", so the RTC source does seem to have been freed, but the memory is still growing.

winlinvip (Member) commented Jun 21, 2024

Memory growth does not necessarily indicate a leak; it could be due to system caching, among other things. Run the test for over an hour; if the growth still persists, then it is likely a leak.
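One way to make that check concrete, assuming the 5-minute RSS samples from the sketch earlier were written to a log file (the file name and 5% threshold are illustrative): compare the latest reading against the reading taken one hour into the run, since sustained growth past that point suggests a leak rather than caching.

#!/usr/bin/env python3
# Decide whether RSS kept growing after the first hour of samples.
rss = []
with open("srs_rss.log") as f:  # lines of 'HH:MM rss_kb'
    for line in f:
        rss.append(int(line.split()[1]))

# 12 samples at 5-minute intervals cover the first hour of the run.
if len(rss) > 12 and rss[-1] > rss[12] * 1.05:
    print("RSS still rising after the first hour: likely a leak")
else:
    print("RSS plateaued: growth was probably caches or allocator overhead")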

