[collector] Update collector to use pod ip #603
Conversation
/cc @povilasv
I'm not seeing where the `MY_POD_IP` environment variable is being set. I'd expect to see a mapping in the pod spec to create an environment variable using the downward API.
@Aneurysm9 the env var is being set here. The chart was already using it for the Prometheus config.
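For reference, a minimal sketch of the pattern being discussed, assuming the `MY_POD_IP` name from the chart and the default OTLP gRPC port 4317; the chart's actual templates and the exact environment-variable substitution syntax may differ by version:

```yaml
# Pod spec fragment: expose the pod's IP to the container via the downward API.
env:
  - name: MY_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
---
# Collector config fragment: bind the OTLP receiver to the pod IP instead of 0.0.0.0.
# The ${MY_POD_IP} substitution syntax varies across collector versions.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: ${MY_POD_IP}:4317
```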
I like the change :) Question: would this change potentially break users? Like host network mode, hosts with multiple IPs, or something like that? How would a user go back to 0.0.0.0 if they wanted to?
@povilasv probably, yeah. Networking in k8s is complicated enough that this will probably affect at least some users. I'll add an entry in the UPGRADING.md doc.
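For anyone who does need the old behaviour back, a hedged sketch of a `values.yaml` override, assuming the chart merges a top-level `config` block into the rendered collector configuration (key names and defaults may differ between chart versions):

```yaml
# values.yaml override: bind the default OTLP receivers back to all interfaces.
# The `config` key and receiver layout are assumptions; check the chart's values.yaml.
config:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
```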
LGTM, maybe also add some text to helm's NOTES.txt so that folks can see this during upgrade?
LGTM
I recently ran into an issue that I think is related to this. If the recommendation for OpenTelemetry in Kubernetes is to run the collector as a daemonset, then how do services talk to the daemonset pods in this configuration? Do you recommend a Service object for that? Usually, these two things together conflict.
@cep21 if you have an issue, please create an issue rather than commenting on a merged PR. Typically services talk to the daemonset via a local node port. For example:
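A sketch of that pattern, with illustrative names: the daemonset is assumed to expose 4317 as a `hostPort`, and the application pod discovers its node's IP through the downward API and points the standard `OTEL_EXPORTER_OTLP_ENDPOINT` variable at it:

```yaml
# Application pod spec fragment: send telemetry to the collector on the same node.
# HOST_IP is an illustrative name; $(HOST_IP) expands because it is defined earlier
# in the same env list. The collector daemonset must publish 4317 as a hostPort.
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://$(HOST_IP):4317
```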
I believe this PR prevents the default configuration from being reached via `kubectl port-forward`, since port-forward connects to localhost inside the pod and the collector no longer listens there. I hit an error doing this with a Deployment based install.

I don't think this behavior is wrong, but it was surprising to a novice and bit me today when trying to debug some ingest pipelines; I thought I'd just port-forward to the collector. I think it would be nice if a note about this caveat was added to the excellent UPGRADING.md. I did end up playing around with 2 options:

Option 1: Revert back to `0.0.0.0`
@fernferret thanks for sharing with us. I think using the pod IP is the most appropriate default value, but I like the idea of updating our docs to include instructions on how to configure the chart for local development port-forwarding.
@TylerHelmuth Awesome, thanks for the reply. If you'd like me to open a PR for the documentation, I'm willing to write something up. Do you know if there are any practical performance differences with running the 2 collectors vs just one in a production setup? In my head the answer is "no". I'd imagine that once I get comfortable I wouldn't need/want this in production.
@fernferret feel free to open a PR to improve the docs.
* Update collector to use pod ip
* bump chart
* Update UPGRADING.md
* Add endpoint note in NOTES.txt
* bump chart
* fix bump
The collector's security guidelines are being updated to no longer exclude containerized environments from the 0.0.0.0 guidelines. This PR brings the charts into compliance.
Related issue: open-telemetry/opentelemetry-collector#6938