"Unrecognized version NaN of Express detected; not instrumenting" #44
It's a real issue! We were relying on there being a version number set on the Express module's exports for Express < 3.0, and apparently that's not a safe assumption! There will be a fix for this in the next beta. Thanks for the report!
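For context, here's a minimal sketch of the kind of version check involved, with a fallback for copies of Express that don't set `exports.version`. The `detectExpressVersion` helper and the fallback path are illustrative, not node-newrelic's actual code:

```coffee
# Illustrative only -- not the agent's real implementation.
detectExpressVersion = (express) ->
  # Express < 3.0 set a version string on its exports; when it's
  # missing, parsing it yields NaN and instrumentation is skipped.
  version = express.version
  unless version?
    # Fall back to the module's own package.json.
    try
      version = require('express/package.json').version
    catch error
      version = null
  version
```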
Can you compress newrelic_agent.log and send it to me at [email protected]? This may or may not be an issue in practice -- are you seeing the new Express-based route names that the README talks about for version 0.11? I'm looking at the relevant bits of Express 2.5.11, and the error you're seeing shouldn't be happening, as 2.5.11 does set an explicit version on its exports.
We're running on Heroku, where you can't access the filesystem of running dynos, so we have …
Yes, I think so. We're load testing a route that's registered as:

```coffee
app.get '/:id/:rel', resources.listConnections
```

and our second-most common web transaction on rpm.newrelic.com is … (screenshot attached). That said, we do see:

```coffee
app.all '*', (req, res, _) ->
  # return 404 with a proper JSON error body
```
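(A minimal completion of that catch-all, assuming Express 2.x's `res.json(body, status)` signature; the error body's shape is illustrative:)

```coffee
app.all '*', (req, res, _) ->
  # Return a 404 with a JSON error body; Express 2.x accepts a
  # status code as res.json's second argument.
  res.json { error: 'Not Found' }, 404
```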
No, but our setup does use vhosting with two instances of Express, if that matters. (Related: issue #32, for that leading to double-counting requests.) Our folder structure splits the two apps into their own directories, and the vhost glue looks like this:

```coffee
api = require '../api/app'
express = require 'express'
www = require './app'
# Assumed here: `settings` holds deployment config such as
# API_HOSTNAME and PORT; the actual path may differ.
settings = require './settings'

app = express.createServer()
app.use express.vhost settings.API_HOSTNAME, api
app.use express.vhost '*', www
app.listen settings.PORT
```

Does that help?
Up the log level to `trace`.
You're not supposed to see that. ;) Figuring out how to present that to customers from the New Relic side is one of our outstanding to-dos before general release. I chose …

```coffee
app.all '*', (req, res, _) ->
  # return 404 with a proper JSON error body
```

You wouldn't see …
Thanks for reminding me of this; I'll take another look at it now that the request-naming logic has been added. Are …
Log file sent. Lemme know if you didn't get it.
We're load testing our REST API, which does not use a static middleware, no.
We have them running the same version of Express right now, and in practice we'd keep it that way. The reason for having both installed is partly (a) idealism and partly (b) pragmatism.

Idealism: they're written to be independent npm modules, and npm is all about local, isolated dependencies, so shouldn't they have their own instances of Express? We only vhost to work around Heroku's limitation that there can only be one "web" process (i.e. one with access to an external port) within a Heroku app/repo.

Pragmatism: in practice, Express (2.x at least; we haven't investigated this area of 3.x) modifies native prototypes. But that's also why we split them: in case Express keeps any state within a single module instance, we don't want our two apps sharing it.

That was a long-winded explanation, but hopefully it explains why we do this. =)
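To illustrate that concern: when two copies of a library each patch a shared native prototype, the last patch loaded wins for every consumer in the process. A toy example (the `sendJSON` patch is hypothetical, not something Express actually adds):

```coffee
http = require 'http'

# Copy A of a library patches a prototype shared process-wide...
http.ServerResponse::sendJSON = (obj) ->
  @setHeader 'Content-Type', 'application/json'
  @end JSON.stringify obj

# ...and copy B, loaded later, silently replaces it for everyone,
# including code written against copy A's behavior.
http.ServerResponse::sendJSON = (obj) ->
  @end JSON.stringify(obj) + '\n'
```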
All that said, though, if you do feel that vhosting across Express instances is an unsupported use case, I'd be happy to reconsider and take a shot at consolidating/sharing the Express dependency between the two. I'm just not sure I can prioritize that for a week or two, so we'd unfortunately have to live without New Relic for the time being.
Another data point for the transaction name issue: just load tested a …
I suggest this with the utmost humility, but all three of Nodetime, StrongOps, and New Relic do weird things to your Node process in order to extract metrics. If you're running more than one of us at a time, you may want to reconsider that. 😁 All three modules should get along, but I think it's pretty safe to say that running more than one at the same time is unsupported. There's a lot going on in the logs you sent.
As for the metric naming issues, I'll have to set up a test app using vhost and see what's going on. The function I'm wrapping to get the route names should be pretty robust, but the internals of Express and Connect are pretty complicated, and it'll take some time to parse out what's happening. Given that New Relic for Node is still officially unsupported, I'll have to poke at this in my own time, but I am curious as to what's going on here.
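As a rough illustration of the wrapping technique (the names `wrapRouteRegistration` and `recordRouteName` are invented for this sketch; the agent's real hooks live deeper in Express/Connect's internals):

```coffee
# Illustrative sketch: intercept route registration so each handler
# reports its route pattern when invoked.
wrapRouteRegistration = (app, recordRouteName) ->
  originalGet = app.get.bind app
  app.get = (path, handlers...) ->
    # Express overloads app.get(key) as a config getter; pass through.
    return originalGet path if handlers.length is 0
    wrapped = handlers.map (fn) ->
      (req, res, next) ->
        recordRouteName path
        fn req, res, next
    originalGet path, wrapped...
```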
I can tell you that my preliminary results suggest that if we can solve the double-counting issue, we can solve the metric naming problems. 🎉
I think I have a solution for both the double-counting and the weird metric naming issues, but it will require careful testing before I can put it out in a new build, because it makes a fundamental change to how the module turns web requests into New Relic transactions. It still doesn't address the original issue on this ticket, which I haven't been able to reproduce in my dev environment.
Good to know, but unfortunate! Nodetime and New Relic both have their strengths, and while Nodetime is further ahead right now in some ways, we like to follow along with New Relic's progress, as we think it's promising. =) We just added Nodefly to try out as well. I'd be curious to know how many other people out there use both Nodetime and New Relic...
Yep. The snippet I included above is just wrapped in the cluster logic. If the issue is just readability of logs, I'm happy to turn that down to 1 process next time you need logs. =)
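(Presumably something like the following; a hypothetical reconstruction of that wrapper, where `'./server'` is an invented module name standing in for the vhost snippet above:)

```coffee
cluster = require 'cluster'
os = require 'os'

if cluster.isMaster
  # Fork one worker per CPU the dyno reports.
  cluster.fork() for i in [1..os.cpus().length]
else
  # Each worker runs the vhost server from the earlier snippet.
  require './server'
```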
Interesting! Can you clarify what you mean? All three files — cluster, www, and api — are calling `require 'newrelic'`.

Great to hear of the fix you're working on. Thanks for the help!
I don't know but could find out. Not that many, is the quick answer.
No, it's easy enough to sort through them all (big ups to Bunyan for providing such a great interface for filtering logs); I was just curious as to why I was seeing multiple processes in the logs. I'd check with @zeke and see if having …

I've published v0.11.1 to npm; at the very least it closes #32. Let me know if you're still seeing the issue with Express after this, and I'll see if I can dig in further. I'm not sure the double-counting and the log message are connected. I am curious to see what this does to your transaction volume and naming, though!
Thanks! Unfortunately, I just tried it, and it throws this error on startup:
This is certainly coming from the … On that note, the release notes for that change raise a question:

Just to clarify, was this affecting us? Or was it unrelated? If it was us, what are we doing wrong that necessitated this? (Using multiple instrumentation libraries?)
I am: …

And that line comes before the … I can get you detailed trace logs again if that helps. Lemme know if you'd like 'em.
Good suggestion to sanity-check this, but FWIW we do see 4 cores on Heroku; otherwise our cluster module wouldn't be spinning up workers. Even if that's wrong, we're on 2X dynos, which may make a difference? https://discussion.heroku.com/t/recommended-use-of-nodes-cluster-module/96/3

I've shot off an email to Heroku support (which should reach @zeke) to confirm. Thanks for the suggestion!
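For reference, that sanity check is just Node's `os.cpus()`; this standalone snippet is runnable anywhere:

```coffee
os = require 'os'
# Prints however many cores the platform exposes to this process.
console.log "cores visible: #{os.cpus().length}"
```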
See #46, for which we'll be landing a fix (which we will then publish on npm). If you wanted to file a bug on Connect and/or Express pointing out that …
It's not unrelated, but this is on New Relic to fix. That note was targeted at people who are crawling their Express application's middleware stack looking for specific middleware by name. The number of people trying to do crazy metaprogramming shenanigans in Node should be kept to a minimum.
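For anyone unfamiliar with the pattern being warned about, it looks roughly like this: crawl the Connect/Express middleware stack and match handlers by function name, which breaks as soon as anything (an APM agent included) wraps those functions. `findMiddleware` is an invented helper:

```coffee
# Fragile by design: matches middleware by function name, which any
# wrapper (like an instrumentation agent) can change.
findMiddleware = (app, name) ->
  layer for layer in app.stack when layer.handle.name is name
```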
Thanks for the confirmation. I'll keep poking at this. I'm pretty sure that you can ignore the error for now, but I'm going to confirm that nothing is evading the instrumentation hooks before I close the issue. I'll leave a note here when 0.11.2 goes out. Thanks for testing 0.11.1, and sorry about the bug!
The startup crash is addressed by 8a317f5. I'll keep looking at the startup errors.
Great, thanks. Upgraded to 0.11.2, and it works again for us. We still see this NaN version error, FYI. I can't tell you yet if the transaction naming or double-counting are improved/fixed, because we were hammering on development pretty hard just before this, so I'll get back to you on that tomorrow or Monday. Notably, though, I see an interesting error I've never seen before:
This matches the time around which we returned our only 500 response in a round of testing I just did. I can try to repro this later and send you trace logs again if you'd like. Thanks again for working on all of this!
I think this is probably (probably!) something to do with the cluster module and can be safely ignored forever, but it's still on my to-do list to investigate this more thoroughly and see whether I can eliminate the logged error if it's spurious.
Hey, Aseem! This ticket is about a billion years old at this point, but I was wondering if this continued to be an issue for you. We haven't yet gotten around to tackling vhost support in Express (mostly because you're the only person who ran into this issue), but we did nerf those chatty error messages, so I thought it was worth checking in with you. Let us know! Thanks!
I still get notified about this thread. Hi guys. Happy new year. |
Hi, @zeke! This issue is going to take a nap in the warm, soft dirt now. Aseem, feel free to open a new issue if you run into this again. We will get to vhost support one of these days.
Sorry for the delay @othiym23! This got lost in the busy-ness of day-to-day work. We are planning to upgrade our node-newrelic soon and will let you know! |
We upgraded node-newrelic from v0.9.20 to v0.10.3 and saw this. Just upgraded again to v0.11.0 now and the error remains.

Beautified:

```
Unrecognized version NaN of Express detected; not instrumenting
```

We're still running Express 2.5.11, unfortunately. We'd obviously like to upgrade to Express 3, but it's a major upgrade for us and we can't prioritize it just yet.

The interesting thing is that data still seems to be getting sent to rpm.newrelic.com. So is this really a non-issue?