Metrics of type "counter" with labels of differing values are not shipped to Newrelic #39
Hi @ranimufid,

On Issue 1: Do you know if these counters are changing over time? Counters are stored in New Relic as the variation in the metric between two different runs of the integration. For example, if we get
I'm working on confirming whether this is the intended behaviour.

On Issue 2: NaNs and infinities are not sent to New Relic. This integration uses the go-telemetry-sdk, and the Telemetry SDK specs state the following:
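To make the delta behaviour concrete, here is a minimal Go sketch of the idea (this is not nri-prometheus code, and the sample values are made up): the integration only reports the variation of a cumulative counter between two consecutive runs.

```go
package main

import "fmt"

// counterDelta returns the variation of a cumulative counter between two
// consecutive integration runs, which is what ends up stored in New Relic.
func counterDelta(previous, current float64) (float64, bool) {
	// If the counter was reset (e.g. the target restarted), the current value
	// is lower than the previous one and the delta cannot be derived from
	// these two samples alone.
	if current < previous {
		return 0, false
	}
	return current - previous, true
}

func main() {
	// Hypothetical scraped values of a counter on two consecutive runs.
	delta, ok := counterDelta(1200, 1260)
	fmt.Println(delta, ok) // 60 true; a counter that never changes reports a delta of 0
}
```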
Thanks for your response @douglascamata! Noted on Issue 2.

Issue 1
The problem is also that the metric doesn't show up in New Relic at all. I can't find any instance of
Interesting! I cannot reproduce it locally using the exact same configuration as you. Do you mind checking for ingest errors using the following query:
I get the following message on executing the query you shared:
In my initial attempt, I ran nri-prometheus as a Kubernetes deployment. I've now started it as a Docker container that is scraping some remote endpoints. Sadly I still observe the same behaviour 😢 This is my new config:
When attempting to reproduce the issue locally on your end, did you supply your setup with the same metrics I shared?
Is there anything I can provide that would help in troubleshooting this issue?
I'm trying it locally with exactly these two metrics, giving them the

```yaml
############################################
# Prometheus exporter for K8s that serves
# metrics from a plain text file
############################################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: from-file-prometheus-exporter
  labels:
    app: from-file-prometheus-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: from-file-prometheus-exporter
  template:
    metadata:
      labels:
        app: from-file-prometheus-exporter
        prometheus.io/scrape: "true"
    spec:
      containers:
        - name: from-file-prometheus-exporter
          image: python:alpine3.9
          env:
            - name: METRICS_FILE_URL
              value: "<URL_TO_DOWNLOAD_FILE>" # You can use a private gist URL.
          ports:
            - name: metrics
              containerPort: 8080
          command: ["/bin/sh", "-c"]
          # A URL is used instead of a ConfigMap because the latter is limited to 1MB.
          args: ["wget $METRICS_FILE_URL -O /etc/from-file-prometheus-exporter/metrics; python -m http.server -b 0.0.0.0 -d /etc/from-file-prometheus-exporter/ 8080"]
          volumeMounts:
            - mountPath: /etc/from-file-prometheus-exporter/
              name: metrics-dir
          readinessProbe:
            httpGet:
              path: /
              port: metrics
            initialDelaySeconds: 10
            periodSeconds: 15
      volumes:
        - name: metrics-dir
          emptyDir: {}
```
My static file looks like this:
And my config:

```yaml
cluster_name: "zdcamata-pomi"
scrape_duration: "20s"
scrape_timeout: "1m"
verbose: true
scrape_enabled_label: "prometheus.io/scrape"
require_scrape_enabled_label_for_nodes: true
transformations:
  - description: "General processing rules"
    add_attributes:
      - metric_prefix: ""
        attributes:
          my_extra_attr: "my-value"
    rename_attributes:
      - metric_prefix: ""
        attributes:
          container_name: "containerName"
          pod_name: "podName"
          namespace: "namespaceName"
          node: "nodeName"
          container: "containerName"
          pod: "podName"
          deployment: "deploymentName"
    ignore_metrics:
      - prefixes:
          - go_
          - http_
          - process_
```

Can you try this, please, and tell me if it works? Also have a look at the logs to see if there is something weird -- it will be in verbose mode. Signing off until tomorrow's working hours in CET. 👋
Ah, something else: enabling verbose logs in your current setup and sending them over might help. Remember to redact any information you might not want to share.
Hey @douglascamata. I set up the static metric exporter like you said and pointed my nri-prometheus Docker container to curl that static endpoint. Here are my observations:

Take 1

Take 2

From the provided file, the following are the metrics I'd like to see in New Relic, but which don't get shipped:
@ranimufid thanks for the update! I'll have a look at this and get back to you soon. Our latest release, v1.3.0, was exactly the one where we upgraded the Go Telemetry SDK for the NaN/Infinity support. There could be something there 🕵
Aaaand I'm back! I double-checked our (New Relic's) specs on quantized metric types (histograms and percentiles, mapping to Prometheus histograms and summaries), checked the code, and spoke to some colleagues. We are not sending any
We are aware that the support for these metric types is underwhelming. Please note that they are in a WIP state and will improve in the future.
Thanks for your speedy response @douglascamata! May I ask if you have a rough plan for when you will be incorporating these metrics into nri-prometheus? That aside, are you aware of any alternatives for getting these metrics shipped to New Relic?
Unfortunately our roadmap for FY20 (fiscal year 20, starting in April) has not been decided yet, so I don't even have a rough plan to share, sorry. 😞 My recommendation for getting these counts shipped right now very likely isn't practical: it would involve using the Go Telemetry SDK directly to parse the Prometheus metrics at a lower level and send them to New Relic.
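For reference, the sketch below shows roughly what that lower-level approach could look like with the Go Telemetry SDK (github.com/newrelic/newrelic-telemetry-sdk-go). The metric name, the "topic" attribute, the values and the API key are placeholders, and the actual parsing of the Prometheus exposition text is left out; this is only meant to illustrate the idea, not nri-prometheus's implementation.

```go
package main

import (
	"context"
	"time"

	"github.com/newrelic/newrelic-telemetry-sdk-go/telemetry"
)

func main() {
	// Create a harvester with an insert key (placeholder value).
	h, err := telemetry.NewHarvester(telemetry.ConfigAPIKey("NEW_RELIC_INSERT_KEY"))
	if err != nil {
		panic(err)
	}

	// Record one delta ("count") data point per label combination; the
	// metric name and the "topic" attribute below are purely illustrative.
	h.RecordMetric(telemetry.Count{
		Name:  "pulsar_in_messages_total",
		Value: 60, // delta observed since the previous report
		Attributes: map[string]interface{}{
			"topic": "t1",
		},
		Timestamp: time.Now(),
		Interval:  30 * time.Second,
	})

	// Flush the buffered data points to New Relic.
	h.HarvestNow(context.Background())
}
```

Each label combination becomes its own Count data point, which is the bookkeeping nri-prometheus would otherwise handle for you.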
@ranimufid this should be improved by #54, which is in the |
Hey @douglascamata, thanks for getting back to me on this! I've updated my nri-prometheus deployment to use the latest
^ doesn't return any records. I am, however, now able to see these metrics, so it seems we're on the right track somehow.
Background
I am currently attempting to scrape Pulsar component Prometheus metrics and push them into New Relic.
nri-prometheus image version:
newrelic/nri-prometheus:1.3.0
config map:
Issue 1 🚨
It seems that nri-prometheus is unable to process metrics with labels of differing values for type counter:

Does not arrive at New Relic 👎

Arrives at New Relic 👍
Issue 2 🚨
We also see tonnes of error messages in the logs for metrics of type summary with NaN values.

Example

And the corresponding metric values:
Help in understanding both of the above behaviours would be greatly appreciated :)!