Missing file cluster-info-discovery-kubeconfig.yaml for role kubeadm #11835
Comments
@ledroide Could you provide your inventory and vars? Please follow our issue template to report bugs; that's why we disabled creating blank issues in the GitHub UI :)
@tico88612 : Below are the inventory variables. Hope it helps.

```yaml
kubernetes_audit: true
kube_encrypt_secret_data: true
remove_anonymous_access: true
cilium_version: v1.16.5
cilium_kube_proxy_replacement: false
cilium_cni_exclusive: true
cilium_encryption_enabled: true
cilium_encryption_type: wireguard
cilium_tunnel_mode: vxlan
cilium_enable_bandwidth_manager: true
cilium_enable_hubble: true
cilium_enable_hubble_ui: true
cilium_hubble_install: true
cilium_hubble_tls_generate: true
cilium_enable_hubble_metrics: true
cilium_hubble_metrics:
  - dns
  - drop
  - tcp
  - flow
  - icmp
  - http
cilium_enable_host_firewall: true
cilium_policy_audit_mode: false
kubeconfig_localhost: true
system_reserved: true
kubelet_max_pods: 280
kubelet_systemd_wants_dependencies: ["rpc-statd.service"]
kube_network_node_prefix: 23
kube_network_node_prefix_ipv6: 120
kube_network_plugin: cilium
container_manager: crio
crun_enabled: true
kube_proxy_strict_arp: true
resolvconf_mode: host_resolvconf
upstream_dns_servers: [213.186.33.99]
serial: 2 # how many nodes are upgraded at the same time
unsafe_show_logs: true # when need to debug kubespray output
metrics_server_enabled: true
metrics_server_replicas: 3
metrics_server_limits_cpu: 400m
metrics_server_limits_memory: 600Mi
metrics_server_metric_resolution: 20s
local_path_provisioner_enabled: true
local_path_provisioner_is_default_storageclass: "false"
local_path_provisioner_helper_image_repo: docker.io/library/busybox
ingress_nginx_enabled: true
ingress_nginx_host_network: true
ingress_nginx_class: nginx
csi_snapshot_controller_enabled: true
cert_manager_enabled: true
cephfs_provisioner_enabled: false
argocd_enabled: false
etcd_deployment_type: host
crio_enable_metrics: true
nri_enabled: true
download_container: false
skip_downloads: false
```
I have tested in my environment; I think this can be /triage accepted
@tico88612 : Thanks for the clue, which now looks like evidence. I have removed the hardening variable. I will follow up on issue #11842 and try again with
What happened?

summary

Some worker nodes do not create the file `/etc/kubernetes/cluster-info-discovery-kubeconfig.yaml`, which is expected later in the `kubeadm` role. Running the `cluster.yml` playbook fails at the step `Create kubeadm client config`, defined in `roles/kubernetes/kubeadm/tasks/main.yml`, with this error for 3 worker nodes out of a pool of 5:

```
Invalid value: "/etc/kubernetes/cluster-info-discovery-kubeconfig.yaml": not a valid HTTPS URL or a file on disk
```

When checking on the hosts, those that succeed have the expected `/etc/kubernetes/cluster-info-discovery-kubeconfig.yaml`; nodes that fail do not.

environment
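For context on why a missing file produces this exact message: kubeadm's file-based discovery requires the configured path to be either an HTTPS URL or an existing file on disk, and the client config kubespray renders presumably points at this kubeconfig, roughly like the sketch below (field values are assumptions, not the actual template):

```yaml
# Sketch only: a kubeadm JoinConfiguration using file-based discovery.
# The real template in roles/kubernetes/kubeadm may differ.
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  file:
    # kubeadm validates that this is an existing file or an HTTPS URL;
    # when the file was never created, it fails with
    # "not a valid HTTPS URL or a file on disk".
    kubeConfigPath: /etc/kubernetes/cluster-info-discovery-kubeconfig.yaml
```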
output for role kubeadm

On k8ststworker-1, which fails:

```
$ sudo ls /etc/kubernetes/cluster-info-discovery-kubeconfig.yaml
ls: cannot access '/etc/kubernetes/cluster-info-discovery-kubeconfig.yaml': No such file or directory
```

On k8ststworker-5, which is OK:
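To compare nodes quickly, a small helper like this (hypothetical, not part of kubespray) reports whether the discovery kubeconfig exists on the host it runs on:

```shell
#!/bin/sh
# Print "present" or "missing" for a given file path.
# Defaults to the discovery kubeconfig that kubeadm expects.
check_file() {
  if [ -f "$1" ]; then
    echo "present: $1"
  else
    echo "missing: $1"
  fi
}

check_file "${1:-/etc/kubernetes/cluster-info-discovery-kubeconfig.yaml}"
```

It can be pushed to every node in the inventory (for example with ansible's `script` module) to list which workers are missing the file.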
additional info