Memory usage #3782
Hi @marspark! It looks like you missed a step or two when you created your issue. Please edit your comment (use the pencil icon at the top-right corner of the comment box) and fix the following:
As soon as those items are rectified, post a new comment (e.g. “Ok, fixed!”) below and we'll take a look. Thanks! If you feel this message is in error, or you want to debate the merits of my existence (sniffle), please contact [email protected].
@marspark I've spent a lot of time testing v0.12.1, v0.12.2, and v0.12.3 for memory leaks (with the help of @particlebanana, @sgress454, and others), and so far we have never been able to reproduce a memory leak. Of course it's always possible we're missing something, so I'm curious to see what you're running across. That said, we have tested the exact scenario you're describing, the only difference being that we tested on Mac OS instead of Ubuntu. So please bear with me-- I don't mean to cause any offense, I just need to get to the bottom of it as efficiently as possible.

First off, I saw you posted in the other issue: #2779 (comment). If you're still seeing the problem after following the suggestions there, then the next step is to make sure we're on the same page about what a memory leak is. When you start looking at this kind of stuff, it can sometimes be really difficult to know what is and isn't a leak. (At least I know it used to confuse me quite a bit!) To be clear: a Node process will grow its memory usage until it decides it's time to run the garbage collector. Just because memory is going up continually does not mean there is a memory leak-- it means that the garbage collector has not run yet.

To show what I mean, take a look at the notes I wrote up on how to analyze memory usage here: https://github.com/balderdashy/waterline/issues/1323#issuecomment-214025388

Next, take a look at the graphs @particlebanana posted here: https://github.com/balderdashy/waterline/issues/1323#issuecomment-214554282

For more technical background and notes on the garbage collector in Node.js, it's also worth reading up on how V8 handles garbage collection.

So if, after reading the stuff above, it turns out that we weren't on the same page-- no problem! On the other hand, if you're confident that there's a memory leak here, then please continue on below.

Testing for memory leaks

The only surefire way to diagnose a memory leak is to run your Node process with the --expose-gc flag and work through the steps below.

1. Expose gc endpoint
Expose a development-only endpoint that, when called, will run the garbage collector. For example: https://github.com/balderdashy/sails-hook-dev/blob/master/index.js#L190-L203

2. Monitor
Start up a program that monitors your Node process's memory. I recommend NodeSource.

3. Lift
Lift your app with all recommended production settings (see the deployment and scaling docs on sailsjs.org). But also make sure you tell Node.js to allow your code programmatic use of the garbage collector. For example, I like to do this by running: NODE_ENV=production node --expose-gc app.js

4. Do stuff or run load tests (round 1)
Now perform the behavior that you suspect will cause a memory leak. In this example, that would be sending requests with your load-testing tool for about 30 minutes.

5. Force garbage collector to run (round 1)
Now hit your endpoint that runs the garbage collector. You'll see the graphs drop dramatically after a moment. Take note of the lowest point in the graph (the "valley"). This is the amount of memory that the process is using and that could not be reclaimed by the garbage collector-- this is the memory where the stuff you're actually using lives.

6. Do stuff or run load tests (round 2)
Now do exactly the same thing we did in step 4 again, for another 30 minutes.

7. Force garbage collector to run (round 2)
Now do exactly the same thing we did in step 5 again. After a moment, if you notice that the "valley" in the graph is significantly higher than it was in step 5, there might be something going on (there seems to be some amount of natural variation in what the garbage collector can actually reclaim-- in some cases this second "valley" is actually lower).
If you're reporting a suspected memory leak, please be sure to take a second screenshot of the graphs and memory usage in GB at this point. Finally, if after running through the steps above, it seems likely that there is a memory leak (i.e. the second "valley" was significantly higher than the first), then please repeat steps 6 and 7 one more time to be sure. If there is a memory leak, you'd expect the second "valley" to be significantly higher than the first, and the third "valley" to be significantly higher than the second. If you notice that this is the case, then please let me know ASAP in this issue. Thanks for the help!
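For step 1 above, the linked sails-hook-dev code shows one real implementation; a minimal sketch of the same idea (hypothetical controller and route names, and assuming the process was started with `--expose-gc` so that `global.gc()` exists) might look like this:

```js
// api/controllers/DevController.js (sketch -- not the actual sails-hook-dev
// implementation linked above)
module.exports = {

  // Force a garbage collection and report heap usage before/after.
  gc: function (req, res) {
    // Keep this endpoint out of production unless explicitly allowed.
    if (process.env.NODE_ENV === 'production' && !process.env.ALLOW_DEV_GC) {
      return res.notFound();
    }
    // `global.gc` only exists when Node was started with `--expose-gc`.
    if (typeof global.gc !== 'function') {
      return res.serverError(new Error('Start Node with --expose-gc to enable this endpoint.'));
    }
    var before = process.memoryUsage().heapUsed;
    global.gc();
    var after = process.memoryUsage().heapUsed;
    return res.json({ heapUsedBefore: before, heapUsedAfter: after });
  }
};
```

Assuming blueprint action routes (or an explicit route in config/routes.js), hitting `GET /dev/gc` after each load-test round gives you the "valley" numbers described in steps 5 and 7.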
@mikermcneil Hi Mike, thank you for your reply. The problem I'm encountering is not a memory leak; it's just that the garbage collector won't run as long as the Sails server stays online. I was using Sails 0.9.7 on my other server and it triggered the garbage collector correctly, so I'm wondering if there's anything I can do to make Sails 0.12.3 behave the same way. Thank you!
@marspark Ah ok, I think I understand the mixup now. Sails doesn't decide when to run the garbage collector-- your Node process runs the garbage collector whenever it wants. In fact, Sails couldn't control that even if it wanted to (you'd have to start Node with the --expose-gc flag and trigger the garbage collector programmatically yourself). The rest of the Sails team and I keep an eye on the amount of memory a Sails process uses, and we do our best to keep the memory footprint of Sails as small as possible. So far, I've never heard of anyone experiencing deployment problems relating to memory usage, so I'm curious what issue you're running into. I could see how deploying a Sails app on a server with a very small amount of RAM (e.g. 64MB) might cause issues, though-- is that what's going on here? Thanks!
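To make that division of responsibility concrete, here is a rough illustration (plain Node, nothing Sails-specific, hypothetical file name): forcing a collection is only possible when the process itself was started with `--expose-gc`; otherwise V8 alone decides when to collect.

```js
// snippet.js -- run with `node --expose-gc snippet.js` vs. plain `node snippet.js`.
function heapMB() {
  return (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(1) + ' MB';
}

console.log('heap before:', heapMB());

if (typeof global.gc === 'function') {
  global.gc();  // only available when Node was started with --expose-gc
  console.log('heap after forced gc:', heapMB());
} else {
  console.log('global.gc is not available; V8 will run the collector on its own schedule.');
}
```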
@mikermcneil It can be easily reproduced by the steps I've described in my 1st post:
TestController:
The results:
Other information:
Hi @marspark, these issues can be a real pain to get to the bottom of. We can all empathize. The reality is that we have several large-scale Sails deployments in production at the moment (including the Sails website and Treeline.io) that we monitor pretty closely, and they've been running for a long time without gobbling up memory indefinitely or crashing servers. I don't dispute that you're seeing something happen. Rather, I'm suggesting that either:
In almost every case like this we've seen so far, it eventually comes down to, "ah, it turns out it wasn't really consuming all those resources after all, it just looked like it because of X", where X is most often a misunderstanding of the load testing tool (how to use it, or how to interpret the results). I really wish we could be of more help-- believe me, I know how annoying this can be!
@marspark,@sailsbot,@mikermcneil,@sgress454: Hello, I'm a repo bot-- nice to meet you! It has been 30 days since there have been any updates or new comments on this page. If this issue has been resolved, feel free to disregard the rest of this message and simply close the issue if possible. On the other hand, if you are still waiting on a patch, please post a comment to keep the thread alive (with any new information you can provide). If no further activity occurs on this thread within the next 3 days, the issue will automatically be closed. Thanks so much for your help!
@marspark @sgress454 Hey y'all, for future reference, found a great link on this subject that does a better job explaining it than I did above:
Full article: https://auth0.com/blog/four-types-of-leaks-in-your-javascript-code-and-how-to-get-rid-of-them/
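As a quick taste of what that article covers, one of the four leak types it describes is the accidental global: assigning to an undeclared variable inside a function attaches the value to the global object, so it is never collected. A tiny, purely illustrative example:

```js
// Leaky: no var/let/const, so `cache` becomes a property of the global object
// and survives every garbage collection. (In strict mode this would throw a
// ReferenceError instead of silently leaking.)
function rememberResult(data) {
  cache = data;
}

// Fixed: the variable is properly scoped and can be reclaimed once nothing
// references it anymore.
function rememberResultFixed(data) {
  var cache = data;
  return cache;
}
```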
@marspark,@sailsbot,@mikermcneil,@sgress454: Hello, I'm a repo bot-- nice to meet you! It has been 30 days since there have been any updates or new comments on this page. If this issue has been resolved, feel free to disregard the rest of this message and simply close the issue if possible. On the other hand, if you are still waiting on a patch, please post a comment to keep the thread alive (with any new information you can provide). If no further activity occurs on this thread within the next 3 days, the issue will automatically be closed. Thanks so much for your help!
Sails version: v0.12.3
Node version: v4.4.7
NPM version: v2.15.8
Operating system: Ubuntu 14.04
Hi guys,
To keep it short: I've set up a Sails project with no frontend and a basic controller, and hit it with loadtest. The memory keeps increasing with no sign of stabilizing. I've read the previous posts and have disabled grunt, session, sockets, pubsub, etc.
Setup:
Sails version: v0.12.3
Node version: v4.4.7
NPM version: v2.15.8
Reproduce:
create new sails project
edit .sailsrc:
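(The exact .sailsrc contents aren't shown in this thread; based on the description above -- grunt, session, sockets, and pubsub disabled -- it was presumably something along these lines:)

```json
{
  "hooks": {
    "grunt": false,
    "session": false,
    "sockets": false,
    "pubsub": false
  }
}
```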
add controllers/TestController.js:
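(The controller code isn't preserved here either; a basic action like the one described probably resembled the following sketch, with an illustrative action name:)

```js
// api/controllers/TestController.js (sketch -- not the original code from the issue)
module.exports = {
  index: function (req, res) {
    // Respond immediately, with no database or view work involved.
    return res.ok();
  }
};
```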
run load test:
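(The original command isn't shown; with the loadtest npm package it would be along the lines of the following, where the concurrency, request rate, and /test route are assumptions rather than the original values:)

```sh
# illustrative only -- assumes the TestController action is reachable at /test
npm install -g loadtest
loadtest -c 10 --rps 200 http://localhost:1337/test
```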
Thank you for your help.
Mars