Collector cannot export metrics telemetry in an ipv6-only environment #10011
Comments
@lpetrazickisupgrade I am curious whether the issue is with the collector serving the metrics or with the Prometheus receiver scraping. Can you reproduce the issue without a Prometheus receiver trying to scrape?
Most likely, though, this is a bug from switching to the OTel Go SDK instead of OpenCensus. /cc @codeboten
@TylerHelmuth Thanks for taking a look! I think the OpenTelemetry Collector process is crashing at startup while parsing the config. The pod is in a CrashLoopBackOff; it doesn't get far enough in the startup sequence to respond to network requests. I've included the only log message it emits. I think the regression may have been introduced by this PR: https://github.com/open-telemetry/opentelemetry-collector/pull/9632/files, because the otlp exporter reuses the grpc client config: https://github.com/open-telemetry/opentelemetry-collector/blame/v0.98.0/exporter/otlpexporter/config.go#L25
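For context, the "too many colons in address" message quoted in the report below is the standard Go net-package error when an IPv6 literal is concatenated with a port without brackets. A minimal sketch that reproduces it outside the collector (illustrative only; this is not the collector's actual startup code):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Bare IPv6 literal, e.g. what a downward-API env var like MY_POD_IP would contain.
	podIP := "dead:beef:dead:beef:dead::beef"

	// Naive concatenation: the colons inside the IPv6 literal are indistinguishable
	// from the host:port separator, so the listener cannot be created.
	_, err := net.Listen("tcp", podIP+":8888")
	fmt.Println(err)
	// listen tcp: address dead:beef:dead:beef:dead::beef:8888: too many colons in address

	// SplitHostPort fails on the same string for the same reason.
	_, _, err = net.SplitHostPort(podIP + ":8888")
	fmt.Println(err)
	// address dead:beef:dead:beef:dead::beef:8888: too many colons in address
}
```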
…10343)

#### Description
Fixes the bug where the latest version of otel-collector fails to start with an IPv6 metrics endpoint in service telemetry. The problem began to occur after #9037 was merged, with the feature gate flag enabled. It is probably an implementation omission, since the enabled codepath, originally added by #7871, is marked as WIP. You can reproduce the issue with the config below and the environment variable `MY_POD_IP=::1`:

```yaml
service:
  telemetry:
    logs:
      encoding: json
    metrics:
      address: '[${env:MY_POD_IP}]:8888'
```

#### Link to tracking issue
Fixes #10011

---------

Co-authored-by: Tyler Helmuth <[email protected]>
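The config above works around the problem by bracketing the env var by hand. In Go code, the usual way to build such an address safely is net.JoinHostPort, which adds the brackets automatically for IPv6 hosts. A minimal sketch of that approach, using a hypothetical listenAddr helper (an illustration of the technique, not the code from the linked PR):

```go
package main

import (
	"fmt"
	"net"
)

// listenAddr is a hypothetical helper: it builds a valid "host:port" string
// for both IPv4 and IPv6 hosts by letting net.JoinHostPort add brackets when needed.
func listenAddr(host, port string) string {
	return net.JoinHostPort(host, port)
}

func main() {
	fmt.Println(listenAddr("0.0.0.0", "8888")) // 0.0.0.0:8888
	fmt.Println(listenAddr("::1", "8888"))     // [::1]:8888

	// The bracketed form round-trips through SplitHostPort without error.
	host, port, err := net.SplitHostPort(listenAddr("dead:beef:dead:beef:dead::beef", "8888"))
	fmt.Println(host, port, err) // dead:beef:dead:beef:dead::beef 8888 <nil>
}
```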
Describe the bug
Steps to reproduce
```
listen tcp: address dead:beef:dead:beef:dead::beef:8888: too many colons in address
```
What did you expect to see?
Metrics on port 8888
What did you see instead?
The collector pod crashes at startup and enters a CrashLoopBackOff, logging the error shown above.
What version did you use?
v0.98.0
What config did you use?
Environment
helm.sh/chart: opentelemetry-collector-0.87.2
Image: opentelemetry-collector-contrib:0.98.0
Kubernetes: v1.29.1-eks-b9c9ed7
Additional context
This is a regression; v0.79.0 did not have this issue.