
Memory usage #3782

Closed
marspark opened this issue Jul 8, 2016 · 9 comments

Comments

@marspark

marspark commented Jul 8, 2016

Sails version: v0.12.3
Node version: v4.4.7
NPM version: v2.15.8
Operating system: Ubuntu v14.4


Hi guys,

To keep it short: I've set up a no-frontend Sails project with a basic controller and hit it with loadtest, and the memory keeps increasing with no sign of stabilizing. I've read previous posts and have disabled grunt, session, sockets, pubsub, etc.

Reproduce:
create new sails project

sails new Project --no-frontend

edit .sailsrc:

{
  "generators": {
    "modules": {}
  },

  "hooks": {
    "session": false,
    "sockets": false,
    "pubsub": false,
    "grunt": false,
    "i18n": false
  }
}

added controllers/TestController.js:

module.exports = {
  hi: function (req, res) {
    return res.send("Hi there!");
  }
};

run load test:

var loadtest = require('loadtest');

var host = 'http://localhost:1337';

var options = {
    url: host + '/test/hi',
    requestsPerSecond: 50
};

loadtest.loadTest(options, function (error, result) {
    if (error) {
        return console.error('Got an error: %s', error);
    }
    console.log('Tests run successfully:', result);
});

Thank you for your help.
Mars

@sailsbot

sailsbot commented Jul 8, 2016

Hi @marspark! It looks like you missed a step or two when you created your issue. Please edit your comment (use the pencil icon at the top-right corner of the comment box) and fix the following:

  • Verify "I am experiencing a concrete technical issue (aka a bug) with Sails (ideas and feature proposals should follow the guide for proposing features and enhancements (http://bit.ly/sails-feature-guide), which involves making a pull request). If you're not 100% certain whether it's a bug or not, that's okay--you may continue. The worst that can happen is that the issue will be closed and we'll point you in the right direction."
  • Verify "I am not asking a question about how to use Sails or about whether or not Sails has a certain feature (please refer to the documentation(http://sailsjs.org), or post on http://stackoverflow.com, our Google Group (http://bit.ly/sails-google-group) or our live chat (https://gitter.im/balderdashy/sails)."
  • Verify "I have already searched for related issues, and found none open (if you found a related closed issue, please link to it in your post)."
  • Verify "My issue title is concise, on-topic and polite ("jst.js being removed from layout.ejs on lift" is good; "templates dont work" or "why is sails dumb" are not so good)."
  • Verify "I have tried all the following (if relevant) and my issue remains:"
  • Verify "I can provide steps to reproduce this issue that others can follow."

As soon as those items are rectified, post a new comment (e.g. “Ok, fixed!”) below and we'll take a look. Thanks!

If you feel this message is in error, or you want to debate the merits of my existence (sniffle), please contact [email protected].

@marspark marspark changed the title Memory increase Memory increase non-stop with basic sails application (session & socked turned off) Jul 17, 2016
@mikermcneil
Member

mikermcneil commented Aug 9, 2016

@marspark I've spent a lot of time testing v0.12.1, v0.12.2, and v0.12.3 for memory leaks (with the help of @particlebanana, @sgress454, and others), and so far, we have never been able to reproduce a memory leak. Of course it's always possible we're missing something, so I'm curious to see what you're running across. That said, we have tested the exact scenario you're describing, the only difference being that we tested on Mac OS instead of Ubuntu. So please bear with me-- I don't mean to cause any offense, just need to get to the bottom of it as efficiently as possible.

First off, I saw you posted in the other issue: #2779 (comment)
In that issue, I posted a comment with some best practices on how to report a memory leak. So you've probably already seen them, but just to be sure: it's not clear to me from looking at this issue if you are running Sails with recommended production settings-- are you?

If you are, then the next step is to make sure we're on the same page about what a memory leak is:

When you start looking at this kind of stuff, it can sometimes be really difficult to know what is and isn't a leak. (At least I know it used to confuse me quite a bit!) To be clear: a Node process will grow its memory usage until it decides it's time to run the garbage collector. Just because memory is going up continually does not mean there is a memory leak-- it means that the garbage collector has not run yet.
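
If it helps to see this directly, here's a rough, generic sketch (not Sails-specific; the interval and units are arbitrary) that logs process.memoryUsage() so you can watch heapUsed climb and then drop whenever the collector runs:

// Sketch: log heap usage every 10 seconds. Expect a sawtooth pattern --
// heapUsed climbs between collections and drops whenever the GC runs.
setInterval(function () {
  var mem = process.memoryUsage();
  console.log('[%s] heapUsed: %s MB, rss: %s MB',
    new Date().toISOString(),
    (mem.heapUsed / 1048576).toFixed(1),
    (mem.rss / 1048576).toFixed(1));
}, 10000);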

To show what I mean, take a look at the notes I wrote up on how to analyze memory usage here: https://github.com/balderdashy/waterline/issues/1323#issuecomment-214025388

Next, take a look at the graphs @particlebanana posted here: https://github.com/balderdashy/waterline/issues/1323#issuecomment-214554282

For more technical background and notes on the garbage collector in Node.js, you might also check out:
nodejs/node-v0.x-archive#5345

So if, after reading the stuff above, it turns out that we weren't on the same page-- no problem! On the other hand, if you're confident that there's a memory leak here, then please continue on below.

Testing for memory leaks

The only surefire way to diagnose a memory leak is to run your Node process with the --expose-gc flag enabled, to manually run the garbage collector, and then to compare the "valleys" of the chart. Here's a step-by-step:

1. Expose gc endpoint

Expose a development-only endpoint that, when called, will run the garbage collector. For example: https://github.com/balderdashy/sails-hook-dev/blob/master/index.js#L190-L203
(you can also just install sails-hook-dev in your project).
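
If you'd rather sketch the endpoint by hand than install the hook, a hypothetical development-only action could look something like this (global.gc only exists when Node is started with --expose-gc; the controller name and response shape are just placeholders):

// api/controllers/DevController.js (hypothetical; mount in development only)
module.exports = {

  gc: function (req, res) {
    // global.gc is only defined when the process was started with --expose-gc.
    if (typeof global.gc !== 'function') {
      return res.serverError('Start Node with --expose-gc to enable this endpoint.');
    }
    global.gc();
    return res.json({ heapUsedAfterGc: process.memoryUsage().heapUsed });
  }

};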

2. Monitor

Start up a program that monitors your Node process's memory. I recommend NodeSource.

3. Lift

Lift your app with all recommended production settings (see deployment and scaling docs on sailsjs.org). But also make sure you tell Node.js to allow your code programmatic use of the garbage collector. For example, I like to do this by running:

NODE_ENV=production node --expose-gc app.js

4. Do stuff or run load tests (round 1)

Now perform the behavior that you suspect will cause a memory leak. In this example, that would be sending GET requests to /. After about 30 minutes of this, if you take a look at the graphs, they might look slopey and bad, or they might look flat and good. But that doesn't matter at all-- we have no idea about whether there are memory leaks at this point.
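
For reference, using the loadtest package from the original post, one round might look roughly like this (the URL, rate, and duration are only illustrative):

var loadtest = require('loadtest');

// Roughly 30 minutes of GET requests against the route you suspect is leaking.
loadtest.loadTest({
  url: 'http://localhost:1337/',
  requestsPerSecond: 50,
  maxSeconds: 30 * 60
}, function (err) {
  if (err) { return console.error('Load test error:', err); }
  console.log('Round finished -- now force a GC and note the valley.');
});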

5. Force garbage collector to run (round 1)

Now hit your endpoint that runs the garbage collector. You'll see the graphs drop dramatically after a moment. Take note of the lowest point in the graph (the "valley"). This is the amount of memory that the process is using, and that could not be reclaimed using the garbage collector. This is the memory where stuff you're actually using lives-- e.g. the require() cache, and local variables that are still in use. If you're reporting a suspected memory leak, please be sure to take a screenshot of the graphs and memory usage in GB at this point.
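
If you'd rather record a number than eyeball the graph, a hypothetical helper along these lines captures the valley right after a forced collection:

// Hypothetical helper: force a collection (requires --expose-gc) and record
// the post-GC heap size in MB -- that's the "valley" to compare between rounds.
var valleys = [];

function recordValley(label) {
  if (typeof global.gc === 'function') { global.gc(); }
  var heapUsedMb = process.memoryUsage().heapUsed / 1048576;
  valleys.push({ label: label, heapUsedMb: heapUsedMb });
  console.log('Valley after %s: %s MB', label, heapUsedMb.toFixed(1));
}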

6. Do stuff or run load tests (round 2)

Now do exactly the same thing we did in step 4 again, for another 30 minutes.

7. Force garbage collector to run (round 2)

Now do exactly the same thing we did in step 5 again. After a moment, if you notice that the "valley" in the graph is significantly higher than it was in step 5, there might be something going on (there seems to be some amount of natural variation in what the garbage collector can actually reclaim-- in some cases this second "valley" is actually lower). If you're reporting a suspected memory leak, please be sure to take a second screenshot of the graphs and memory usage in GB at this point.

Finally, if after running through the steps above, it seems likely that there is a memory leak (i.e. the second "valley" was significantly higher than the first), then please repeat steps 6 and 7 one more time to be sure. If there is a memory leak, you'd expect the second "valley" to be significantly higher than the first, and the third "valley" to be significantly higher than the second. If you notice that this is the case, then please let me know ASAP in this issue.
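
Continuing the hypothetical recordValley sketch from step 5, the pattern that suggests a leak is a consistent rise from each valley to the next, not a single bump:

// A leak is suspected when every recorded valley is noticeably higher than
// the previous one (round 1 < round 2 < round 3).
function valleysKeepRising(valleys) {
  return valleys.every(function (v, i) {
    return i === 0 || v.heapUsedMb > valleys[i - 1].heapUsedMb;
  });
}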

Thanks for the help!

@marspark
Author

@mikermcneil Hi Mike, thank you for your reply.

The problem I'm encountering is not a memory leak; it's just that the garbage collector won't run as long as the Sails server stays online.

I was using Sails 0.9.7 on my other server and it triggers the garbage collector correctly, so I'm wondering if there's anything I can do to make Sails 0.12.3 behave the same way.

Thank you!
Mars

@mikermcneil
Member

@marspark Ah ok, I think I understand the mixup now. Sails doesn't decide when to run the garbage collector-- your Node process runs the garbage collector whenever it wants. In fact, Sails couldn't control that even if it wanted to (you'd have to enable the --expose-gc flag for framework/app-level code to be able to call the garbage collector, and the only way I can think of to prevent it from running would be to run while(true){}-- which would also completely block all incoming requests. Obviously, you wouldn't want to do that!)

The rest of the Sails team and I keep an eye on the amount of memory a Sails process uses, and we do our best to keep the memory footprint of Sails as small as possible. So far, I've never heard of anyone experiencing deployment problems relating to memory usage, so I'm curious what issue you're running into. I could see how maybe deploying a Sails app on a server with a very small amount of RAM (e.g. 64MB) could cause issues though-- is that what's going on here?

Thanks!

@mikermcneil mikermcneil changed the title Memory increase non-stop with basic sails application (session & socked turned off) Memory usage Aug 11, 2016
@marspark
Author

marspark commented Aug 12, 2016

@mikermcneil It can be easily reproduced by the steps I've described in my 1st post:

  1. create a fresh no-frontend project
  2. disable session/socket in .sailsrc
  3. create a basic controller (from the documentation section in the sails website)
  4. hit it with loadtest (or just refresh localhost:1337/test/hi over like 100 times manually)

TestController:

module.exports = {
  hi: function (req, res) {
    return res.send("Hi there!");
  }
};

The results:

  1. Memory never drops as long as the server is up (even after the load test has been stopped for a few hours)
  2. It can stack up to multi-GBs and eventually crash the server

Other information:

  1. sails v0.9.7 can trigger garbage collection as expected

@sgress454
Member

Hi @marspark, these issues can be a real pain to get to the bottom of. We can all empathize. The reality is that we have several large-scale Sails deployments in production at the moment (including the Sails website and Treeline.io) that we monitor pretty closely, and they've been running for a long time without gobbling up memory indefinitely or crashing servers. I don't dispute that you're seeing something happen. Rather, I'm suggesting that either:

  1. What you think you're seeing as a never-ending increase of memory isn't one. I don't know anything about the loadtest package; it might be awesome. I'm just saying that the tooling around these sorts of metrics is often tricky and misleading, and can sometimes work differently between platforms and Node versions.
  2. If there really is a problem of never-ending memory consumption, it involves something particular to your configuration. It's still possible that something in Sails is contributing to it, but it can't just be Sails, because if all it took to make a Sails app consume all of your server's resources was to hit a "hello there" endpoint 100 times, this forum would be flooded by folks with crashed servers.

In almost every case like this we've seen so far, it eventually comes down to, "ah, it turns out it wasn't really consuming all those resources after all, it just looked like it because of X" where X is most often a misunderstanding of the load testing tool (how to use it, or how to interpret the results).

I really wish we could be of more help, believe me I know how annoying this can be!

@sailsbot

@marspark,@sailsbot,@mikermcneil,@sgress454: Hello, I'm a repo bot-- nice to meet you!

It has been 30 days since there have been any updates or new comments on this page. If this issue has been resolved, feel free to disregard the rest of this message and simply close the issue if possible. On the other hand, if you are still waiting on a patch, please post a comment to keep the thread alive (with any new information you can provide).

If no further activity occurs on this thread within the next 3 days, the issue will automatically be closed.

Thanks so much for your help!

@sailsbot sailsbot added the "waiting to close" label Sep 12, 2016
@mikermcneil
Member

@marspark @sgress454 Hey y'all, for future reference, found a great link on this subject that does a better job explaining it than I did above:

[Chart from the article linked below: a sawtooth memory graph in which each post-GC drop settles higher than the previous one.]

The key in this case lies in the fact that after each drop in memory use, the size of the heap remains bigger than in the previous drop. In other words, although the garbage collector is succeeding in collecting a lot of memory, some of it is periodically being leaked.

Full article: https://auth0.com/blog/four-types-of-leaks-in-your-javascript-code-and-how-to-get-rid-of-them/

@sailsbot sailsbot removed the "waiting to close" label Sep 13, 2016
@sailsbot sailsbot added the "waiting to close" label Oct 14, 2016
@sailsbot

@marspark,@sailsbot,@mikermcneil,@sgress454: Hello, I'm a repo bot-- nice to meet you!

It has been 30 days since there have been any updates or new comments on this page. If this issue has been resolved, feel free to disregard the rest of this message and simply close the issue if possible. On the other hand, if you are still waiting on a patch, please post a comment to keep the thread alive (with any new information you can provide).

If no further activity occurs on this thread within the next 3 days, the issue will automatically be closed.

Thanks so much for your help!

@sailsbot sailsbot removed the "waiting to close" label Oct 18, 2016
@balderdashy balderdashy locked and limited conversation to collaborators Feb 2, 2017