Fails to discover master and form a cluster #341
These are the events captured during the deployment via $ kubectl get events --sort-by=.metadata.creationTimestamp
---
3m2s Normal SuccessfulCreate statefulset/elasticsearch-master create Claim elasticsearch-master-elasticsearch-master-0 Pod elasticsearch-master-0 in StatefulSet elasticsearch-master success
3m2s Normal SuccessfulCreate statefulset/elasticsearch-master create Pod elasticsearch-master-0 in StatefulSet elasticsearch-master successful
3m2s Normal NoPods poddisruptionbudget/elasticsearch-master-pdb No matching pods found
3m1s Normal ProvisioningSucceeded persistentvolumeclaim/elasticsearch-master-elasticsearch-master-1 Successfully provisioned volume pvc-2d94baf3-f503-11e9-ad08-0ec4353e481e using kubernetes.io/aws-ebs
3m1s Normal ProvisioningSucceeded persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2 Successfully provisioned volume pvc-2d9b1dc2-f503-11e9-ad08-0ec4353e481e using kubernetes.io/aws-ebs
3m1s Normal SuccessfulCreate statefulset/elasticsearch-master create Pod elasticsearch-master-1 in StatefulSet elasticsearch-master successful
2m47s Warning FailedScheduling pod/elasticsearch-master-0 pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
3m1s Normal SuccessfulCreate statefulset/elasticsearch-master create Claim elasticsearch-master-elasticsearch-master-2 Pod elasticsearch-master-2 in StatefulSet elasticsearch-master success
3m1s Normal SuccessfulCreate statefulset/elasticsearch-master create Pod elasticsearch-master-2 in StatefulSet elasticsearch-master successful
3m1s Normal SuccessfulCreate statefulset/elasticsearch-master create Claim elasticsearch-master-elasticsearch-master-1 Pod elasticsearch-master-1 in StatefulSet elasticsearch-master success
3m1s Warning FailedScheduling pod/elasticsearch-master-1 pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
3m1s Warning FailedScheduling pod/elasticsearch-master-2 pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
2m58s Warning FailedAttachVolume pod/elasticsearch-master-2 AttachVolume.Attach failed for volume "pvc-2d9b1dc2-f503-11e9-ad08-0ec4353e481e" : "Error attaching EBS volume \"vol-0517939ba50d54333\"" to instance "i-0d551a72d7465bf04" since volume is in "creating" state
3m Normal Scheduled pod/elasticsearch-master-1 Successfully assigned default/elasticsearch-master-1 to ip-172-20-54-204.us-west-2.compute.internal
2m58s Warning FailedAttachVolume pod/elasticsearch-master-1 AttachVolume.Attach failed for volume "pvc-2d94baf3-f503-11e9-ad08-0ec4353e481e" : "Error attaching EBS volume \"vol-0f9cde9034c310eeb\"" to instance "i-0f5ef1528c98a14dc" since volume is in "creating" state
3m Normal Scheduled pod/elasticsearch-master-2 Successfully assigned default/elasticsearch-master-2 to ip-172-20-91-45.us-west-2.compute.internal
2m54s Normal SuccessfulAttachVolume pod/elasticsearch-master-2 AttachVolume.Attach succeeded for volume "pvc-2d9b1dc2-f503-11e9-ad08-0ec4353e481e"
2m54s Normal SuccessfulAttachVolume pod/elasticsearch-master-1 AttachVolume.Attach succeeded for volume "pvc-2d94baf3-f503-11e9-ad08-0ec4353e481e"
2m46s Normal ProvisioningSucceeded persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0 Successfully provisioned volume pvc-2d90c274-f503-11e9-ad08-0ec4353e481e using kubernetes.io/aws-ebs
2m45s Normal Scheduled pod/elasticsearch-master-0 Successfully assigned default/elasticsearch-master-0 to ip-172-20-124-210.us-west-2.compute.internal
2m43s Warning FailedAttachVolume pod/elasticsearch-master-0 AttachVolume.Attach failed for volume "pvc-2d90c274-f503-11e9-ad08-0ec4353e481e" : "Error attaching EBS volume \"vol-05ed42b8e9d0aa92a\"" to instance "i-08d5acff318c6059a" since volume is in "creating" state
2m42s Normal Started pod/elasticsearch-master-2 Started container configure-sysctl
2m42s Normal Started pod/elasticsearch-master-1 Started container configure-sysctl
2m42s Normal Created pod/elasticsearch-master-2 Created container configure-sysctl
2m42s Normal Created pod/elasticsearch-master-1 Created container configure-sysctl
2m42s Normal Pulled pod/elasticsearch-master-1 Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:7.3.0" already present on machine
2m42s Normal Created pod/elasticsearch-master-2 Created container elasticsearch
2m42s Normal Pulled pod/elasticsearch-master-2 Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:7.3.0" already present on machine
2m42s Normal Pulled pod/elasticsearch-master-2 Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:7.3.0" already present on machine
2m41s Normal Started pod/elasticsearch-master-2 Started container elasticsearch
2m41s Normal Created pod/elasticsearch-master-1 Created container elasticsearch
2m41s Normal Pulled pod/elasticsearch-master-1 Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:7.3.0" already present on machine
2m41s Normal Started pod/elasticsearch-master-1 Started container elasticsearch
2m39s Normal SuccessfulAttachVolume pod/elasticsearch-master-0 AttachVolume.Attach succeeded for volume "pvc-2d90c274-f503-11e9-ad08-0ec4353e481e"
9s Warning Unhealthy pod/elasticsearch-master-2 Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )
2m27s Normal Started pod/elasticsearch-master-0 Started container configure-sysctl
2m27s Normal Pulled pod/elasticsearch-master-0 Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:7.3.0" already present on machine
2m27s Normal Created pod/elasticsearch-master-0 Created container configure-sysctl
2m26s Normal Created pod/elasticsearch-master-0 Created container elasticsearch
2m26s Normal Started pod/elasticsearch-master-0 Started container elasticsearch
2m26s Normal Pulled pod/elasticsearch-master-0 Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:7.3.0" already present on machine
2s Warning Unhealthy pod/elasticsearch-master-1 Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )
2s Warning Unhealthy pod/elasticsearch-master-0 Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )
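
To reproduce by hand the check that the readiness probe is running, something like the following should work (a sketch: it uses the pod and namespace names from the events above, and assumes curl is available in the container, which it is in the official images):

kubectl exec -n default elasticsearch-master-0 -- \
  curl -s "http://localhost:9200/_cluster/health?wait_for_status=green&timeout=1s"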
Believe it or not, but the problem turned out to be in the recent helm version (v2.15.0). Downgrading helm and tiller fixed it:

# Uninstall tiller and helm client from server
kubectl delete all -l app=helm -n kube-system
kubectl -n kube-system delete serviceaccount/tiller
kubectl delete clusterrolebinding/tiller
brew uninstall kubernetes-helm
# Install older version of helm and tiller
brew install https://raw.githubusercontent.com/Homebrew/homebrew-core/0a17b8e50963de12e8ab3de22e53fccddbe8a226/Formula/kubernetes-helm.rb
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --wait --service-account=tiller --history-max 200
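
A quick way to verify the downgrade took effect (assuming the recreated tiller deployment carries the same app=helm label used in the cleanup above):

# Client and server (tiller) versions should now match the pinned release
helm version
kubectl -n kube-system get pods -l app=helm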
@jmlrt Thank you for confirming. It would be really helpful if this issue were disclosed somewhere; it would save us a great chunk of dev time for sure. Maybe a short warning sentence in the README? Something that would indicate the chart's dependency on a particular helm version. My apologies if this is already mentioned somewhere. I am still learning the ins and outs of helm.
Well, Elastic Helm Charts should in theory be compatible with every Helm v2 release (disclaimer: we certainly won't be compatible with Helm v3 with the current code), as we don't have code specific to any particular release. However, Helm 2.15.0 brought a lot of changes, including at least one breaking change affecting charts. If you take a look at these issues, you'll see that it didn't impact only the Elastic charts but many other charts as well.

Overall, I'd advise always testing Helm version upgrades in a sandbox environment first (this is good practice for almost every software, but especially for Helm). In addition, I can add a mention of the Helm version that we currently test with to the README.

Oh, and by the way: meanwhile #338 has been merged, so you should now be able to use Helm 2.15.1 with the Elastic charts, provided your other charts have no issues.
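
One way to do that sandbox test without touching a live cluster is to render the chart with each Helm client and diff the results (a sketch: the elastic repo alias, the output file names, and the default chart values are assumptions):

# Fetch the chart locally, since helm v2 template renders from a local path
helm repo add elastic https://helm.elastic.co
helm fetch elastic/elasticsearch --untar
# Render with the current client, then repeat with the candidate version and compare
helm template elasticsearch --name elasticsearch > rendered-2.14.yaml
diff rendered-2.14.yaml rendered-2.15.yaml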
@jmlrt Understood. Thank you for sharing your thoughts. I think another good practice may be to just stick with the helm version that is currently used in your tests.
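
For Homebrew users, one way to do that (an illustration, not an official recommendation) is to pin the formula so that brew upgrade leaves it alone:

brew pin kubernetes-helm
# later, to allow upgrades again:
brew unpin kubernetes-helm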
Chart version: 7.3.0

Kubernetes version: 1.14.6

Kubernetes provider: E.g. GKE (Google Kubernetes Engine)
AWS

Helm Version: v2.15.0

helm get release output
e.g. helm get elasticsearch (replace elasticsearch with the name of your helm release)

Describe the bug:
We've been deploying the Elasticsearch 7.3.0 helm chart to a freshly built kops k8s cluster without issues for many weeks. However, I tried to rebuild one environment yesterday, and it now always fails at the Elasticsearch deployment step with the following error (see below), and for the life of me I can't figure out why.
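
For anyone hitting the same "fails to discover master" symptom, a couple of diagnostic commands can show what each node sees (a sketch using the pod names from the events above):

# Master election and discovery messages from the first master-eligible pod
kubectl logs -n default elasticsearch-master-0 | grep -i master
# Which nodes have actually joined the cluster
kubectl exec -n default elasticsearch-master-0 -- curl -s "localhost:9200/_cat/nodes?v"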
Steps to reproduce:
Expected behavior:
Successful deployment of the Elasticsearch helm chart, as before.
Provide logs and/or server output (if relevant):