kubelet name mismatch when using NodeAuthorization #7172
Comments
FYI, I have made a local build of kops to validate this fix -- if I change the hostname evaluation to create the "private DNS name", the kubelet will start.
I have gone ahead and created 2 PRs: (1) #7185, which replaces the existing `@aws` behavior, and (2) #7184. Both aren't needed -- but I imagine #7184 is the one we'll want to go with?
I have updated both PRs to get the privateDNSName from the AWS API -- after talking with their support, this seems to be the only reliable way to get the private DNS name.
Personally I'm in favor of replacing the existing `@aws` behavior.
If the cluster's VPC includes DHCP options, the local-hostname includes the DHCP zone instead of the private DNS name from AWS (which is what k8s uses regardless of flags). This patch simply makes the hostnameOverride implementation match by using the AWS API to get the private DNS name. Related to kubernetes#7172
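For reference, the approach the comments and patch above describe -- asking the EC2 API for the instance's PrivateDnsName instead of trusting the metadata local-hostname -- looks roughly like the sketch below. This is a minimal illustration using the aws-sdk-go v1 packages and a DescribeInstances call (my assumption about the call used; it is not the actual kops/nodeup code from the PRs):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// privateDNSName returns the PrivateDnsName of the instance this code runs on,
// as reported by the EC2 API -- the name the kubelet's AWS cloud provider
// registers the Node object under, regardless of the VPC's DHCP option set.
func privateDNSName(sess *session.Session) (string, error) {
	// Discover our own instance ID and region from the metadata service.
	doc, err := ec2metadata.New(sess).GetInstanceIdentityDocument()
	if err != nil {
		return "", fmt.Errorf("reading instance identity document: %v", err)
	}

	svc := ec2.New(sess, aws.NewConfig().WithRegion(doc.Region))
	out, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{
		InstanceIds: []*string{aws.String(doc.InstanceID)},
	})
	if err != nil {
		return "", fmt.Errorf("DescribeInstances %s: %v", doc.InstanceID, err)
	}
	if len(out.Reservations) == 0 || len(out.Reservations[0].Instances) == 0 {
		return "", fmt.Errorf("instance %s not found", doc.InstanceID)
	}
	return aws.StringValue(out.Reservations[0].Instances[0].PrivateDnsName), nil
}

func main() {
	sess := session.Must(session.NewSession())
	name, err := privateDNSName(sess)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(name) // e.g. ip-10-0-0-12.eu-west-1.compute.internal
}
```

Note that this requires the node's instance profile to allow ec2:DescribeInstances, whereas the local-hostname lookup only needs the metadata service.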
1. What kops version are you running? The command `kops version` will display this information.
2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
I enabled `NodeAuthorization` on an existing cluster (as described in this post). One thing to note is that we have a domain configured in our DHCP options on the VPC (as such the node gets its domain name from DHCP). I then terminated a single node in the cluster to cycle it onto the new configuration, and afterwards I see errors in the logs such as:
5. What happened after the commands executed?
At this point the node will continually fail to join the cluster.
6. What did you expect to happen?
I expected the node to be able to join the cluster :)
7. Please provide your cluster manifest. Execute `kops get --name my.example.com -o yaml` to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
8. Please run the commands with most verbose logging by adding the `-v 10` flag. Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else do we need to know?
Upon further investigation I found that kops is doing the NodeAuthorization checks using the name it determines, while the kubelet is trying to connect with the name it determines -- and unfortunately these don't always match. Looking at the k8s cloud provider docs (https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#node-name), they clearly state that the kubelet will use the "private DNS name of the AWS instance as the name of the Kubernetes Node object". When we look at kops we can see that the name is actually the local-hostname. In most cases these two match, but if you use private DNS (via the DHCP options on the VPC) then these names don't match -- and in this situation it is impossible to make them work.

With that, I believe the "correct" solution is to add a mechanism to kops that determines the name the same way the kubelet does. To do this I see 2 options: (1) change `@aws` to this behavior, or (2) add another option such as `@aws-privatedns` which follows this new behavior. Thoughts?
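To check whether a node is affected, one rough diagnostic is to look at the local-hostname reported by the instance metadata service (what the `@aws` hostnameOverride resolves from) and see whether it still carries the AWS-assigned suffix or a DHCP-option domain. A minimal sketch, assuming the aws-sdk-go v1 metadata client and the default `ec2.internal` / `<region>.compute.internal` suffixes for AWS-assigned private DNS names:

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	sess := session.Must(session.NewSession())
	meta := ec2metadata.New(sess)

	// local-hostname is what the @aws hostnameOverride resolves from. With a
	// custom domain-name in the VPC DHCP option set it looks like
	// ip-10-0-0-12.corp.example.com rather than the AWS-assigned
	// ip-10-0-0-12.<region>.compute.internal that the kubelet's AWS cloud
	// provider registers the Node object under.
	localHostname, err := meta.GetMetadata("local-hostname")
	if err != nil {
		log.Fatalf("reading local-hostname from metadata: %v", err)
	}
	fmt.Println("metadata local-hostname:", localHostname)

	// Rough heuristic (assumption): AWS-assigned private DNS names end in
	// ec2.internal (us-east-1) or <region>.compute.internal (other regions).
	if !strings.HasSuffix(localHostname, ".compute.internal") &&
		!strings.HasSuffix(localHostname, ".ec2.internal") {
		fmt.Println("local-hostname carries a DHCP-option domain; the name kops")
		fmt.Println("authorizes will not match the name the kubelet registers")
	}
}
```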