
Memory leak with TriggerClientEvent #3114

Open
niCe86 opened this issue Jan 30, 2025 · 5 comments
Labels
bug, triage (Needs a preliminary assessment to determine the urgency and required action)

Comments


niCe86 commented Jan 30, 2025

What happened?

When sending tables through TriggerClientEvent, memory leaks. It is probably more noticeable with larger tables.

I'm using TriggerClientEvent a lot, and my server's RAM usage gradually increases from about 500 MB up to 10-15 GB by the end of the day (FXServer is restarted daily).

Expected result

When sending a table through TriggerClientEvent, the allocated memory should be released afterwards.

Reproduction steps

collectgarbage('stop')               -- pause the GC so the counts below reflect only our allocations
print(1, collectgarbage('count'))

-- build a large table to send
local data = {}
for i = 1, 100000 do
    data[i] = i
end

print(2, collectgarbage('count'))

-- send the same table repeatedly; playerid is the server ID of a connected player
for i = 1, 100 do
    TriggerClientEvent("Test", playerid, data)
end

print(3, collectgarbage('count'))

-- drop the reference and force a full collection
data = nil
collectgarbage('collect')

print(4, collectgarbage('count'))

Importance

Unknown

Area(s)

FXServer

Specific version(s)

Tested on FiveM artifacts 12180 and 12651.
I have confirmation from IllidanS4 that this does not occur on an older artifact from 2022.

Additional information

A full garbage collection clears all the Lua garbage (the count drops), but the process's RAM usage does not decrease.

Example of my results:
[ script:wtls] 1 354979.76171875
[ script:wtls] 2 359075.85546875
[ script:wtls] 3 395079.3125
[ script:wtls] 4 350609.63769531
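
(For reference, collectgarbage('count') reports the Lua heap in kilobytes, so the four values above correspond to roughly 347 MB, 351 MB, 386 MB and 342 MB: the ~35 MB allocated while sending drops back below the starting value after the collect, yet the process's RAM stays elevated.)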

Example of actual RAM usage:

  • 560 MB RAM at the beginning
  • 629 MB RAM after execution of the code above
  • 580 MB RAM after a few seconds and another manual garbage collection (just to be sure)
  • 568 MB RAM after resource restart

You can easily repeat the test script and RAM usage will gradually increase. I suspect that this memory leak might somehow degrade CPU performance as well.
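
As one way to repeat the test on demand, here is a minimal sketch (the leaktest command name is hypothetical; RegisterCommand and TriggerClientEvent are standard FXServer natives, and the event name "Test" matches the repro above):

-- Hypothetical server-side command: run `leaktest <serverId>` from the server console
-- to send the large table 100 times and print the Lua heap afterwards.
RegisterCommand('leaktest', function(_, args)
    local playerId = tonumber(args[1])
    if not playerId then
        print('usage: leaktest <serverId>')
        return
    end

    -- build the large table, as in the repro
    local data = {}
    for i = 1, 100000 do
        data[i] = i
    end

    -- send it repeatedly to the given player
    for i = 1, 100 do
        TriggerClientEvent('Test', playerId, data)
    end

    -- release the Lua-side reference and report the Lua heap size
    data = nil
    collectgarbage('collect')
    print(('Lua heap after collect: %.2f KB'):format(collectgarbage('count')))
end, true)

Each invocation should return the Lua heap to roughly its previous size, while the process's resident memory can be watched externally (e.g. in Task Manager or htop) to see whether it keeps growing.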

niCe86 added the bug and triage labels on Jan 30, 2025

d22tny commented Feb 3, 2025

Adding some data here.

  • Server start: 800 MB with ~60 players
  • Server full (~1250 players): 22 GB
  • 6 hours later: 40 GB
  • Morning (~70 players): 11.7 GB

So there was a leak of about 10 GB over 9 hours.


joaoconti commented Feb 3, 2025

I'm facing the same problem. It reaches 19 GB; as soon as I open the server, it starts increasing by 2 to 4 MB per second as players join.

20 minutes after the first post, it's already at 4 GB.

I kicked all the players from the server and started stopping each resource one by one to see what was causing the memory consumption. I stopped all the resources, but it was still using 5 GB.

niCe86 (Author) commented Feb 3, 2025

Are you both using TriggerClientEvent frequently, or transferring large data with it?

@joaoconti

@niCe86 I use it frequently.


Yum1x commented Feb 4, 2025

That's normal behavior when you stop the GC like you did in the provided repro; you should call collectgarbage("restart") to restore it.
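
A minimal sketch of that suggestion, assuming the repro above (the only change is restarting the collector after the measurement so automatic collection resumes):

collectgarbage('stop')               -- pause automatic collection for the measurement
print(1, collectgarbage('count'))

-- ... build the table and run the TriggerClientEvent loop as in the repro ...

data = nil
collectgarbage('collect')            -- one full manual cycle
print(4, collectgarbage('count'))

collectgarbage('restart')            -- resume automatic (incremental) collection

Note that in the repro the Lua-side counts already drop after collectgarbage('collect'), so restarting mainly matters for long-running code that would otherwise never collect automatically.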
