This repository has been archived by the owner on Feb 22, 2022. It is now read-only.
[stable/prometheus] support scraping containers by default #22899
Labels: lifecycle/stale (denotes an issue or PR that has remained open with no activity and has become stale)

Comments
pohly added a commit to pohly/pmem-CSI that referenced this issue on Jun 22, 2020:

The Prometheus integration uses the approach from helm/charts#22899:
- HTTP for metrics endpoints
- container ports tell Prometheus which containers to scrape and how

CSI call counts are the same as in the sidecars. This enables correlating statistics and ensures that node-local operations are also captured; kubelet doesn't seem to be instrumented.

TODO:
- documentation, including a Prometheus+Grafana example
- decide whether this should be enabled by default and whether it should be configurable (also in the operator)
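To illustrate the convention described in that commit message, a hypothetical pod spec fragment might look like the following; the port name "metrics", the annotation, and all container names, images, and port numbers are illustrative assumptions, not taken from pmem-CSI:

```yaml
# Hypothetical pod: it opts in to scraping via an annotation, and each
# container that serves metrics exposes them on a port with the agreed name.
apiVersion: v1
kind: Pod
metadata:
  name: example-driver
  annotations:
    prometheus.io/scrape: "true"
spec:
  containers:
    - name: driver
      image: example.org/driver:1.0     # placeholder image
      ports:
        - name: metrics                 # fixed port name Prometheus keys on
          containerPort: 10010
    - name: sidecar
      image: example.org/sidecar:1.0    # placeholder image
      ports:
        - name: metrics
          containerPort: 10011
```

Prometheus can then discover one scrape target per container by selecting on that port name (see the job sketch further below).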
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

The stale bot added the lifecycle/stale label on Jul 25, 2020.
This issue is being automatically closed due to inactivity.
pohly added another commit to pohly/pmem-CSI that referenced this issue on Aug 21, 2020, with the same commit message as the Jun 22 commit.
pohly added five commits to pohly/pmem-CSI that referenced this issue on Aug 26 and Aug 27, 2020, all with an updated commit message:

The Prometheus integration uses the approach from helm/charts#22899:
- HTTP for metrics endpoints
- container ports tell Prometheus which containers to scrape and how

CSI call counts are the same as in the sidecars. This enables correlating statistics and ensures that node-local operations are also captured; kubelet doesn't seem to be instrumented. Internal communication is instrumented the same way.

PMEM usage statistics are recorded by querying the device manager each time the metrics data gets scraped.

Metrics support is enabled unconditionally in the operator and in all pre-generated deployment files, and uses plain HTTP for the sake of simplicity. The rationale is that the data itself is not sensitive and should always be readily available if desired.
pohly added four more commits to pohly/pmem-CSI that referenced this issue on Sep 10, Sep 11, and Sep 18, 2020, with essentially the same commit message, now noting that the active device manager is queried for the PMEM usage statistics.
Is your feature request related to a problem? Please describe.
As described in prometheus/prometheus#3756, it is sometimes useful to scrape not just one port per pod, but one port per container.
The upstream discussion has been going on for a while and it is not clear how it will be solved. Upstream has disabled the example config for pods (the one with the prometheus.io/scrape annotation), but the Helm chart still has it and documents it as the way to get pods discovered on Kubernetes.
Describe the solution you'd like
A possible solution would be to scrape each container which has a container port with a fixed name and where the pod has suitable annotations. Here's an implementation of that approach:
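A minimal sketch of such a scrape job, assuming the fixed container port name is "metrics" and the familiar prometheus.io/scrape pod annotation (the job name, port name, and target labels here are illustrative, not the exact configuration proposed in this issue):

```yaml
# With role "pod", kubernetes_sd_configs already produces one target per
# declared container port, so keeping only ports with the agreed name yields
# exactly one scrape target per instrumented container.
- job_name: kubernetes-pods-containers
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Only scrape pods that opt in via the usual annotation.
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # Only scrape container ports with the fixed name.
    - source_labels: [__meta_kubernetes_pod_container_port_name]
      action: keep
      regex: metrics
    # Optional per-pod override of the metrics path (shared by all containers).
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # Attach identifying labels to the resulting series.
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      target_label: kubernetes_pod_name
    - source_labels: [__meta_kubernetes_pod_container_name]
      target_label: kubernetes_pod_container_name
```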
This solution is limited to using the same path for all containers, but IMHO that is acceptable.
As it stands above, one can add that job with -f: I think it would be useful to add it to the default values.yaml, including a "slow" variant.
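For example, a custom values file (the file name custom-values.yaml and the use of the chart's extraScrapeConfigs value are assumptions for this sketch) could carry the job:

```yaml
# custom-values.yaml: extraScrapeConfigs is appended to the generated
# Prometheus configuration by the chart; the job body is the sketch shown
# above, abbreviated here to the two "keep" rules.
extraScrapeConfigs: |
  - job_name: kubernetes-pods-containers
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        action: keep
        regex: metrics
```

It could then be passed at install time with something like helm install stable/prometheus -f custom-values.yaml.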
Describe alternatives you've considered
I tried the approach of merging metrics data with https://github.com/rebuy-de/exporter-merger, but ran into issues when different containers report the same metrics.
Users could be instructed to reconfigure Prometheus manually instead of changing the defaults, but given that nothing in the job above is specific to an application, that seems sub-optimal.