Investigate alternative notion of managed resources than app.kubernetes.io/instance label #1482
For more context. See:
Essentially, this issue is for us to find an alternative way to detect what is considered a "managed" resource of an application, one that would not involve label injection. The issue that we currently face with the `app.kubernetes.io/instance` label is that the value Argo CD injects can conflict with the value set by the user's manifests or by tools such as Helm.
Since v0.11, Argo CD no longer requires that the resource association be driven by a label. Previously we depended on a label so that we could perform efficient Kubernetes queries for resource discovery. Now, however, our live state metadata cache enables us to efficiently discover all resources associated with an application without the use of a label. We simply need to inject the application name somewhere in the resource object. This could be, for example, a label, an annotation, or an ownerReference.
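For illustration, this is roughly what the current label-based injection looks like on a managed resource (the Deployment and application names here are placeholders, not from the issue):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook
  labels:
    # Injected by Argo CD today; the value is the Application name.
    # Any alternative mechanism (annotation, ownerReference) would need to
    # carry this same piece of information: "which Application owns me?"
    app.kubernetes.io/instance: guestbook
```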
My ideal solution would be to inject an ownerReference into all managed resources that Argo CD deploys. However, I do not know whether this is a valid thing to do when the "owner" is a resource in an entirely different cluster. I posed this question to sig-api-machinery and am still waiting for a response.
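A sketch of what such an ownerReference might look like (field names follow the Kubernetes `ObjectMeta.ownerReferences` schema; the Application name and UID are placeholders), which also illustrates the open question, since Kubernetes assumes owners live in the same cluster as their dependents:

```yaml
metadata:
  ownerReferences:
    - apiVersion: argoproj.io/v1alpha1
      kind: Application
      name: guestbook                                # placeholder app name
      uid: 00000000-0000-0000-0000-000000000000      # placeholder UID
      # Caveat: the Kubernetes garbage collector deletes dependents whose
      # owner cannot be found locally. If the owning Application lives in a
      # different cluster, the dependent could be garbage-collected.
```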
Note we might wish to use
@jessesuen did you get a response to your query in sig-api-machinery?
Document work-around.
Hi,
Argo CD uses the `app.kubernetes.io/instance` label to keep track of each app's resources, but this can conflict with the values that Helm sets. In this case, Argo CD set the instance label to "nginx-ingress", which caused the ServiceMonitor labels not to match, leading to issues with scraping metrics. See argoproj/argo-cd#1482
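To illustrate the kind of mismatch described above (the resource names and label values here are hypothetical, not taken from the report): a Helm chart labels its metrics Service with the release name and renders a ServiceMonitor that selects on that same label, so once Argo CD overwrites the label with the Application name, the selector matches nothing:

```yaml
# Metrics Service after Argo CD has overwritten the instance label
# with the Application name:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-metrics
  labels:
    app.kubernetes.io/instance: nginx-ingress   # was the Helm release name
---
# ServiceMonitor still selects on the value Helm would have set,
# so Prometheus finds no endpoints to scrape:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx-ingress
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: nginx-ingress-release
```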