Fabric8 ConfigMap vs Secrets priority/order #1338

Closed
tomjankes opened this issue May 9, 2023 · 4 comments · Fixed by #1340

@tomjankes

We used the same property name in a ConfigMap and in a Secret when using spring-cloud-kubernetes-fabric8-config.
The current behaviour seems to be that the property from the Secret overrides the one stored in the ConfigMap.

Looking at the code, there seems to be a notion of ordering the PropertySourceLocators (see the sketch after this list):

  • Fabric8ConfigMapPropertySourceLocator is annotated with @Order(0)
  • Fabric8SecretsPropertySourceLocator is annotated with @Order(1)
    and AnnotationAwareOrderComparator is used to sort those locators.
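
As a hedged illustration (not project code), the sketch below shows how AnnotationAwareOrderComparator sorts objects whose classes carry @Order; the classes ConfigMapLocator and SecretsLocator are placeholders of my own that stand in for the two locators above, and the lower order value sorts first.

```java
// Minimal sketch, not project code: placeholder classes stand in for the two
// Fabric8 locators to show how AnnotationAwareOrderComparator applies @Order.
import java.util.ArrayList;
import java.util.List;

import org.springframework.core.annotation.AnnotationAwareOrderComparator;
import org.springframework.core.annotation.Order;

public class LocatorOrderSketch {

    @Order(0)
    static class ConfigMapLocator { }   // stands in for Fabric8ConfigMapPropertySourceLocator

    @Order(1)
    static class SecretsLocator { }     // stands in for Fabric8SecretsPropertySourceLocator

    public static void main(String[] args) {
        List<Object> locators = new ArrayList<>(List.of(new SecretsLocator(), new ConfigMapLocator()));
        AnnotationAwareOrderComparator.sort(locators);
        // Prints ConfigMapLocator, then SecretsLocator: the lower @Order value
        // sorts first, so the relative order of these two locators is well defined.
        locators.forEach(l -> System.out.println(l.getClass().getSimpleName()));
    }
}
```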

But this does not seem to be the case when spring.cloud.kubernetes.config.fail-fast is enabled and the retryable versions of those beans are in use (see the sketch after this list):

  • RetryableFabric8ConfigMapPropertySourceLocator is annotated with @Order(1)
  • RetryableFabric8SecretsPropertySourceLocator is not annotated (so I assume the @Order(1) from the parent class applies)
    It seems that the order of ConfigMap and Secrets is not defined when fail-fast/retries are used?
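
For contrast, here is a similarly hedged sketch of the retryable case: if both retryable locators end up with the same effective @Order(1) (the inherited value is an assumption, as noted above), AnnotationAwareOrderComparator's stable sort leaves them in whatever order they were put into the list, so the ConfigMap-vs-Secrets priority is no longer fixed by the annotation alone. The class names are again placeholders of my own.

```java
// Minimal sketch, not project code: both placeholder classes carry the same
// @Order value to mimic the assumed retryable situation described above.
import java.util.ArrayList;
import java.util.List;

import org.springframework.core.annotation.AnnotationAwareOrderComparator;
import org.springframework.core.annotation.Order;

public class RetryableLocatorOrderSketch {

    @Order(1)
    static class RetryableConfigMapLocator { }  // stands in for RetryableFabric8ConfigMapPropertySourceLocator

    @Order(1)
    static class RetryableSecretsLocator { }    // stands in for RetryableFabric8SecretsPropertySourceLocator

    public static void main(String[] args) {
        List<Object> locators = new ArrayList<>(List.of(new RetryableSecretsLocator(), new RetryableConfigMapLocator()));
        AnnotationAwareOrderComparator.sort(locators);
        // Both elements compare as equal, so the stable sort keeps insertion order:
        // SecretsLocator prints first here, ConfigMapLocator first if the list is
        // built the other way around, i.e. the relative priority is not defined by @Order.
        locators.forEach(l -> System.out.println(l.getClass().getSimpleName()));
    }
}
```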

Can we rely on the order in spring-cloud-kubernetes-fabric8-config? Or should we avoid overriding ConfigMap properties with Secrets?

Describe the solution you'd like
Clear information on whether a ConfigMap or a Secret takes priority should be added to the documentation (or, alternatively, a note that the priority is not defined and should not be relied upon). Alternatively, just an answer in this issue would be very helpful.

@wind57 (Contributor) commented May 9, 2023

It seems awkward to me to store the same thing in both a ConfigMap and a Secret, to be honest. If you abstract yourself away from spring-cloud-kubernetes, would you do it in general? What's your use case for such a decision? Maybe there is one indeed and I am missing it...

We can definitely align the retryable implementations too, but this is not documented at the moment. I guess no one thought of such a possibility.

If we decide to align, we need to:

  • align fabric8 retryable and non-retryable implementations
  • align k8s native retryable and non-retryable implementations
  • document the order
  • write some tests to cover this
  • write two integration tests to cover this, one for fabric8 and one for k8s

I'll wait for Ryan to chime in, to see whether he is OK with such changes or finds them necessary.

@tomjankes (Author)

Thank you for your answer. I cannot give a specific use case - it is just something we encountered in production, since development teams can manage their own ConfigMaps and Secrets.

If it can be made consistent, so that enabling retries does not change the behaviour, that would be great. If the behaviour is not specified and cannot be relied upon, I'll take that answer and we will take measures to prevent such setups in the future.

@wind57 (Contributor) commented May 10, 2023

@ryanjbaxter tagging you for visibility here and your opinion on it. Thank you.

@ryanjbaxter (Contributor)

@wind57 I agree with what you outlined; we should be consistent.

ryanjbaxter linked a pull request (#1340) on May 11, 2023 that will close this issue
ryanjbaxter added this to the 2.1.6 milestone on May 11, 2023
github-project-automation moved this to Done in 2022.0.3 and in 2021.0.8 on May 11, 2023