Memory usage difference between v1.2.0 and v1.1.45 #16703
Comments
Thanks for recreating. The main suspicious thing here is the 2,000+ request and timeout objects: something is keeping the requests and the timeouts alive, and it's most likely related to the timeouts. Can you paste the code with the timeouts?
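For illustration (not from the thread): a pending setTimeout keeps everything its callback closes over reachable until the timer fires or is cleared, so an uncleared per-request timer can pin the request object in memory. A minimal hypothetical sketch:

// The timer's callback captures `req`, so the request cannot be
// garbage-collected while the timer is still pending.
function handle(req) {
  const timeoutID = setTimeout(() => {
    console.log(req.url); // closure keeps `req` reachable
  }, 60_000);
  // If clearTimeout(timeoutID) never runs (e.g. no abort event fires),
  // the timer and the captured request stay alive for the full 60 s.
}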
I think this is probably the bit you want. This is an attempt to recreate the pre-Bun code that set timeouts longer than the ALB timeout to prevent some issues. The timeout can't be set over 255, so someone suggested this. The code between those two runs was the same except for the Bun version; they were compiled into an executable using the respective version.

server.timeout(req, 60);
const timeoutID = setTimeout(
  () => {
    server.timeout(req, 255);
  },
  55 * 1000,
);
req.signal.onabort = () => {
  clearTimeout(timeoutID);
};
If you do server.timeout(req, 0) it should disable the timeout entirely.
So I would delete all of that code and replace it with a timeout of 0, the idea being that my server would never initiate a connection close, but the load balancer would close it on its end, causing the connection to end? I will give that a try in the testing envs over the weekend and then in prod on Monday or Tuesday.
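A minimal sketch of what that replacement could look like, assuming Bun's server.timeout(req, seconds) API (used in the snippet above) treats 0 as "no timeout"; the handler body is a placeholder:

Bun.serve({
  idleTimeout: 0, // disable the server-wide idle timeout
  fetch(req, server) {
    server.timeout(req, 0); // per-request: never time out; let the ALB close the connection
    return new Response("ok"); // placeholder response
  },
});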
According to AWS, memory usage doubled from 200 to 400 MB between these two, but Bun reports a heap size of 28,119,669 in the first and 27,039,942 in the second. This is with the timeout code removed and idleTimeout set to 0. Not sure if those heap sizes are expected, but they seemed odd to me. The memory usage is not crazy here since these instances aren't getting many requests, but these instances are odd when compared to the others I have running. You can see the dates/times (UTC) in the filenames. Tomorrow I'll try running
2025-01-24T20_47_01.460Z-01949a11-6f53-7000-8cad-d42a5a1b0155-stats.json
Here's the output:
{"rss":658104320,"heapTotal":12456960,"heapUsed":29612614,"external":5337464,"arrayBuffers":84061,"time":"19:47:03.129"}
{"rss":684953600,"heapTotal":12721152,"heapUsed":33391327,"external":5370810,"arrayBuffers":85782,"time":"20:47:06.217"}
{"rss":730443776,"heapTotal":12853248,"heapUsed":33291671,"external":5365750,"arrayBuffers":79334,"time":"21:47:04.093"}
{"rss":750948352,"heapTotal":12727296,"heapUsed":34562297,"external":5357768,"arrayBuffers":77770,"time":"22:47:07.173"}
{"rss":771829760,"heapTotal":12631040,"heapUsed":18735609,"external":5376551,"arrayBuffers":88560,"time":"23:47:05.261"}
{"rss":800010240,"heapTotal":12566528,"heapUsed":29387576,"external":5387551,"arrayBuffers":89896,"time":"00:47:04.162"}
{"rss":829104128,"heapTotal":12472320,"heapUsed":30541938,"external":5425375,"arrayBuffers":83850,"time":"01:47:07.806"}
{"rss":862580736,"heapTotal":12428288,"heapUsed":27439153,"external":5424976,"arrayBuffers":80288,"time":"02:47:05.531"}
{"rss":870420480,"heapTotal":12852224,"heapUsed":17954457,"external":5405946,"arrayBuffers":79018,"time":"03:47:08.721"}
{"rss":901648384,"heapTotal":12684288,"heapUsed":20220778,"external":5446117,"arrayBuffers":103359,"time":"04:47:07.304"}
{"rss":944623616,"heapTotal":12745728,"heapUsed":23914580,"external":5411371,"arrayBuffers":84705,"time":"05:47:05.278"}
{"rss":964210688,"heapTotal":12645376,"heapUsed":34757512,"external":5427839,"arrayBuffers":77741,"time":"06:47:08.830"} |
@Gobd could you try in canary ( |
I have been able to confirm that 5819fe4 is the commit that introduced the issue I am having. I have no idea what in there did it; the service having this issue is a basic API that uses the latest versions of the NPM packages redis (client only, for basic get/set), mysql2, hono, pino, fast-jwt, amqplib, and amqp-connection-manager, plus Bun fetch to create its responses. These tests were done with Canary still isn't working, but Here is the commit above on the left, and the one before it on the right. There are funny bumps because this has
What version of Bun is running?
1.2.0+b0c5a7655
What platform is your computer?
No response
What steps can reproduce the bug?
Unsure
What is the expected behavior?
Same memory usage as v1.1.45.
What do you see instead?
The left of the screenshot is v1.2, the right is v1.1.45; there is a big difference in memory usage. 100% would be 2 GB. The heap snapshot doesn't seem to reflect the memory usage. I'll need to send the heap snapshot directly as it contains private info.
2025-01-24T13_09_37.266Z-0194956d-929c-7000-bf05-4300e47ff5f4-stats.json
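For reference, capturing that kind of data in Bun can look like the following sketch, using the documented bun:jsc heapStats() and Bun.generateHeapSnapshot() APIs (the output filename is arbitrary):

import { heapStats } from "bun:jsc";

// Print the current JS heap size (the kind of "heap size" figure quoted above).
console.log(heapStats().heapSize);

// Write a heap snapshot to disk so it can be shared and inspected.
const snapshot = Bun.generateHeapSnapshot();
await Bun.write("dump.heapsnapshot", JSON.stringify(snapshot, null, 2));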
Additional information
No response