Fix Mismatch Between Service Selector and Pod Labels when using Helm Aliases in Kibana #493
Conversation
Since this is a community-submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?
Just to add: I've been using this branch to test and push a version of Kibana which uses this alias, and it works well. Am I missing anything from the PR? The CLA is signed.
jenkins test this please
I've updated the tests to fix the lint issues - running the lint-python target now returns no issues.
jenkins test this please
Is the failure a bug in Jenkins? I can't quite see what has failed there; it looks as though GKE is complaining about an older version of K8s.
It seems that GKE has dropped support for 1.13, so we need to drop tests on this version.
OK, pulled in the master branch 👍
jenkins test this please
LGTM, Thanks for this PR 👍
${CHART}/tests/*.py
${CHART}/examples/*/test/goss.yaml
When using aliases to deploy Kibana twice in requirements.yaml, both instances of Kibana will deploy successfully; however, the aliased instance will be inaccessible because the selector on the service is configured to use the {{ .Chart.Name }} value, which is different for the aliased chart. This PR allows Kibana to be deployed with an alias in requirements.yaml.
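For illustration, a parent chart's requirements.yaml along these lines reproduces the setup described above; the version, repository, and alias name are placeholders rather than values taken from this PR:

```yaml
# Hypothetical parent chart pulling Kibana in twice, once under an alias.
# Version, repository, and alias are illustrative placeholders.
dependencies:
  - name: kibana
    version: "7.6.1"
    repository: "https://helm.elastic.co"
  - name: kibana
    version: "7.6.1"
    repository: "https://helm.elastic.co"
    alias: kibana-secondary   # second Kibana instance, deployed under an alias
```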
I have added a test which proves this change is a noop when deploying in its default state (i.e. without an alias).
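A rough sketch of the rendered output, assuming (not taken from the chart's actual templates) that the Service selector is keyed on {{ .Chart.Name }} while the pod labels are not: under an alias, .Chart.Name renders as the alias, so the selector no longer matches the pods; without an alias both sides render as "kibana", which is the default-state noop the added test asserts.

```yaml
# Illustrative rendering only - the label keys and values here are assumptions,
# not the kibana chart's real templates.
#
# Service for the aliased instance: the selector comes from {{ .Chart.Name }},
# which renders as the alias.
apiVersion: v1
kind: Service
metadata:
  name: myrelease-kibana-secondary
spec:
  selector:
    app: kibana-secondary          # {{ .Chart.Name }} == "kibana-secondary"
---
# Pods for the same instance carry a label that does not follow the alias,
# so the Service above selects no pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myrelease-kibana-secondary
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana                # does not match "kibana-secondary"
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.6.1   # placeholder tag
# Without an alias, both sides render as "kibana" and the selector matches,
# so the default output is unchanged.
```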