Reject invalid metrics/mappings early instead of breaking /metrics
#186
Comments
Is there anything logged?
Thank you for reporting! This is definitely an issue; we should reject this line immediately and not poison the Prometheus client.
I tried recreating it with a single stat line and it didn't happen. However, with all stats being published together, it fails randomly. Bottom line: the stats caused the crash. (Isolating the offending line from the tcpdump was difficult.)
What is the error message returned with the 500?
Error message: server returned HTTP status 500 Internal Server Error
However, the statsd-exporter Docker container seems to keep running without exiting.
@matthiasr I shall clone the repo, run the exporter in debug mode, and collect logs for you if you need further information about the bug. Let me know.
Yes, please do! Any additional information helps. What version of the exporter are you running? What version is running in the Docker container? When you check out the repo, what happens with the latest code?
Here's the exact scenario: I publish two stats from my Node.js application. A tcpdump of the stat lines sent to statsd-exporter is as follows:
Log extract of statsd-exporter:
Mapping entry for the above two stats:
The issue occurs when the two types of stats end up mapped to the same metric name.
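As a purely hypothetical illustration (the stat names below are invented, not taken from the report), the kind of conflict described here is a counter and a timer that a wildcard mapping would collapse into a single Prometheus metric name:

```
adapter.smtp.errors:1|c
adapter.smtp.latency:320|ms
```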
That's not valid in the Prometheus metrics model – for a given metric name, there can only be one metric type. I would recommend a mapping that gives the two stats distinct metric names.
It is unfortunate that we only detect this at scrape time. I am not sure how easy it would be to detect this early though. In any case, we will not be able to support this mapping for these inputs.
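As a sketch only (reusing the invented stat names from the illustration above, not the mapping that was actually recommended in the thread), a statsd_exporter mapping that keeps the two stats under distinct Prometheus metric names might look like this:

```yaml
mappings:
  - match: "adapter.*.errors"
    name: "adapter_errors_total"
    labels:
      adapter: "$1"
  - match: "adapter.*.latency"
    name: "adapter_latency_seconds"
    labels:
      adapter: "$1"
```

With each stat type under its own name, the exporter can hand both a counter and a timer to the Prometheus client without violating the one-type-per-name rule.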
I'm going to rename the issue and add a high-level description at the top, in case anyone wants to pick this up.
Thanks. And yes, it's clearly a violation; I didn't realize it until I debugged the code. (What I have done is not a sensible aggregation; I agree with your point.) Also, thanks for the suggestion.
We've run into this issue also; in our case it's caused by the metrics generated by […], and I've come up with a repro case.

The prom client library already checks for metric name overlap when a metric is registered, but when generating the output for the metrics endpoint it goes through a path that only detects the conflict at scrape time. We would be glad to do the work, but some direction on the preferred way to fix this would be appreciated. An easy fix would be to duplicate that registration-time check in the scrape path.
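To make the two code paths concrete, here is a minimal, self-contained sketch using prometheus/client_golang (the collector and metric name are invented; this is not the exporter's actual code): registering a custom collector succeeds because only the descriptors reported by Describe are checked, but Gather fails once the collector emits the same name with two different types, and promhttp surfaces that failure as a 500 on /metrics.

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// conflictingCollector emits the same metric name once as a counter and once
// as a gauge, mimicking two statsd samples of different types mapped to one
// Prometheus metric name. All names here are invented for illustration.
type conflictingCollector struct {
	desc *prometheus.Desc
}

func (c *conflictingCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.desc
}

func (c *conflictingCollector) Collect(ch chan<- prometheus.Metric) {
	// Same descriptor, two different value types; nothing stops this here.
	ch <- prometheus.MustNewConstMetric(c.desc, prometheus.CounterValue, 1)
	ch <- prometheus.MustNewConstMetric(c.desc, prometheus.GaugeValue, 2)
}

func main() {
	reg := prometheus.NewRegistry()
	c := &conflictingCollector{
		desc: prometheus.NewDesc("adapter_errors", "example metric", nil, nil),
	}

	// Registration succeeds: only the descriptors from Describe are checked.
	if err := reg.Register(c); err != nil {
		fmt.Println("register failed:", err)
	}

	// Gathering fails with an inconsistency error; promhttp turns such an
	// error into an HTTP 500 on /metrics.
	if _, err := reg.Gather(); err != nil {
		fmt.Println("gather failed:", err)
	}
}
```

In the exporter's case the conflicting samples come from the mapped statsd lines, which is why the failure only shows up when Prometheus scrapes /metrics.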
I recently tried deploying binaries built from the master branch and ran into a new issue when a metric has been emitted with conflicting types.
@matthiasr I think this is safe to close out at this point. We haven't seen this since 0.10.x was released.
When a user submits an invalid metric (such as mapping a counter and a gauge to the same metric name), we do not log anything, but from that point on (until the exporter is restarted) /metrics will return a 500. We ought to not accept the second (then invalid) sample at all, and log a descriptive error message instead. This way users do not lose visibility into their other metrics.
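A rough sketch of that early rejection, with invented names and no claim about how the exporter actually implements it: track the type each metric name was first seen with, then log and drop any later sample whose type disagrees.

```go
package main

import (
	"fmt"
	"log"
)

// metricType stands in for the exporter's notion of a metric's type.
type metricType string

// typeTracker remembers which type each metric name was first seen with.
type typeTracker map[string]metricType

// accept reports whether a sample may be handed to the Prometheus client.
// A sample whose type conflicts with an earlier one is logged and dropped,
// so a single bad line cannot break /metrics for everything else.
func (t typeTracker) accept(name string, mt metricType) bool {
	if prev, ok := t[name]; ok && prev != mt {
		log.Printf("dropping sample for %q: already registered as %s, got %s", name, prev, mt)
		return false
	}
	t[name] = mt
	return true
}

func main() {
	tracker := typeTracker{}
	fmt.Println(tracker.accept("adapter_errors", "counter")) // true
	fmt.Println(tracker.accept("adapter_errors", "gauge"))   // false, logged and dropped
}
```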
Original issue:

The statsd exporter returns an internal error (shown in Prometheus) when malformed statsd lines are sent. However, it does not show any error in the log either.
Steps to recreate:
```js
import { StatsD } from 'hot-shots';

let client = new StatsD(this.initConfig);
client.increment('adapter.' + adapter + '.errors'); // note the buggy invocation
```
On the Prometheus targets page, the exporter shows as down. No logs are shown.