Description
Version:
Karpenter Version: v1.0.6
Kubernetes Version: v1.31
Context:
Some pods (not DaemonSets) failed to be scheduled onto the dedicated nodes, according to the Karpenter logs below:

    incompatible with nodepool "gpu", daemonset overhead={"cpu":"605m","memory":"1288Mi","pods":"12"}, did not tolerate nvidia.com/gpu=1:NoSchedule;
    incompatible with nodepool "app", daemonset overhead={"cpu":"605m","memory":"1288Mi","pods":"12"}, incompatible requirements, label "eks.amazonaws.com/nodegroup" does not have known values
By design, we keep these pods off the dedicated nodes using node taints and pod tolerations/nodeSelectors, yet Karpenter still tried to place them there; the good news is that the attempts failed.
For now this doesn't impact our business (everything works because the attempts above fail), but they fill the Karpenter logs with tons of error messages. We'd like to know how to stop this from happening and then clear those errors.
Other Information:
We have two NodePools: gpu and app.
The gpu NodePool has its own taints.
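A minimal sketch of that taint, reconstructed from the nvidia.com/gpu=1:NoSchedule entry in the log above (only the relevant fragment of the NodePool spec, not the full manifest):

```yaml
# gpu NodePool fragment; taint reconstructed from the Karpenter log above
spec:
  template:
    spec:
      taints:
        - key: nvidia.com/gpu
          value: "1"
          effect: NoSchedule
```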
The app NodePool doesn't have taints, but it does have startup_taints.
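The concrete values are not shown here; as a purely hypothetical illustration (the key below is a placeholder, not our real one), these map to startupTaints at the same level of the Karpenter NodePool spec:

```yaml
# app NodePool fragment; the taint key below is a hypothetical placeholder
spec:
  template:
    spec:
      startupTaints:
        - key: example.com/initializing   # hypothetical key, for illustration only
          effect: NoSchedule
```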
We also have two node groups managed by AWS ASGs: one for Karpenter itself and the other for infrastructure addons.
The Karpenter node group is dedicated; it only accepts Karpenter pods.
The infrastructure node group only accepts infrastructure addons; it carries its own taint and node label, illustrated below.
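To illustrate that setup (the taint key/value below are hypothetical placeholders, ours differ; the label is the one the Istio nodeSelector targets):

```yaml
# infrastructure node group
taints:
  - key: dedicated        # hypothetical placeholder
    value: infra          # hypothetical placeholder
    effect: NoSchedule
labels:
  eks.amazonaws.com/nodegroup: ${CLUSTER_NAME}-infra-nodegroup   # referenced by the Istio nodeSelector below
```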
All infra addons should run on the infrastructure node group.
One of them, the Istio pod, should be scheduled to the infra node group instead of the gpu/app NodePools, yet according to the Karpenter logs it was considered for them anyway. Istio has the following toleration and nodeSelector:
    tolerations:
      - operator: "Exists"
    nodeSelector:
      eks.amazonaws.com/nodegroup: ${CLUSTER_NAME}-infra-nodegroup
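Note that a toleration written with operator: Exists and no key tolerates every taint, not only the infrastructure node group's; we are not sure whether that matters here.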
We spent some time investigating this but had no luck finding the root cause, so we are raising the issue here.
We'd like to know how to stop this from happening and then clear those error messages, if possible.