failover feature-gate cannot be disabled correctly #5375
Comments
So what's causing this problem, and can you help answer? It's true that the taint is NoSchedule, but it still triggers the clearing of the orphaned works.
The orphan works may be caused by multiple reasons; I cannot find the root cause from those comments. Can you paste the scheduler logs here?
Scheduler logs? I found that the ResourceBinding's spec clusters seem to be removed only when a taint is present, which ends up causing findOrphan to find orphaned works there.
You have disabled the failover feature, but karmada-scheduler might still change its scheduling result.
Is this the expected behaviour? I've found that in some cases it can lead to an empty APIEnablements list in the Cluster status, which ends up with disastrous consequences!
Have you confirmed that's your root cause?
Yes. To me, disabling failover should mean that no migration across availability zones takes place, but there seem to be some features here that cause failover-like behaviour to happen nonetheless.
If we are unlucky, the cluster-status-controller will clear the apiEnablements in the cluster status when the cluster goes offline. The scheduler will then step in, find no matching APIs, and clear the ResourceBinding's spec clusters, and finally the binding controller's removeOrphan will delete our downstream resources. This is the complete chain, so we still consider the failover implementation to be incomplete.
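To make that chain concrete, here is a minimal, self-contained Go sketch; the types and function names are hypothetical stand-ins, not Karmada's actual scheduler code.

```go
// Minimal sketch (hypothetical types, not Karmada's actual code) of why an empty
// APIEnablements list is dangerous: the enablement filter rejects every cluster,
// the binding's target cluster list becomes empty, and orphan cleanup then has
// nothing left to keep.
package main

import "fmt"

// Cluster stands in for the relevant slice of a member cluster's status: the
// APIs the cluster is known to serve.
type Cluster struct {
	Name           string
	APIEnablements []string // e.g. "apps/v1/Deployment"
}

// serves reports whether the cluster advertises the requested API. If the
// status was wiped while the cluster was offline, the list is empty and this
// returns false for every API.
func serves(c Cluster, gvk string) bool {
	for _, e := range c.APIEnablements {
		if e == gvk {
			return true
		}
	}
	return false
}

// filterByAPIEnablement keeps only the clusters that serve the resource.
func filterByAPIEnablement(clusters []Cluster, gvk string) []Cluster {
	var fit []Cluster
	for _, c := range clusters {
		if serves(c, gvk) {
			fit = append(fit, c)
		}
	}
	return fit
}

func main() {
	healthy := Cluster{Name: "member1", APIEnablements: []string{"apps/v1/Deployment"}}
	wiped := Cluster{Name: "member1"} // enablements cleared while the cluster was offline

	fmt.Println(len(filterByAPIEnablement([]Cluster{healthy}, "apps/v1/Deployment"))) // 1
	fmt.Println(len(filterByAPIEnablement([]Cluster{wiped}, "apps/v1/Deployment")))   // 0: empty target list
}
```

The point is only that an empty enablement list is indistinguishable from "serves no APIs", which is why the binding's target cluster list collapses to empty.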
First, this has nothing to do with failover. TODO:
Hi @kubepopeye, thanks for your response. According to the log information, your analysis is correct. We noticed this problem and fixed it in v1.12: as @whitewindmills said, in karmada-controller-manager we added CompleteAPIEnablements to the cluster status, and on the scheduler side we handle the cluster's CompleteAPIEnablements condition. This problem should now be fixed; can you help confirm?
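For reference, a hedged sketch of the idea behind that fix, with hypothetical types (the actual handling lives in karmada-controller-manager and karmada-scheduler):

```go
// Hedged sketch of the idea behind the v1.12 fix, with hypothetical types: the
// scheduler only trusts the APIEnablements list for filtering when the cluster
// reports that the list is complete, so a cluster that just came back online is
// not filtered out because of a wiped or partially collected list.
package main

import "fmt"

type Cluster struct {
	Name                   string
	APIEnablements         []string
	APIEnablementsComplete bool // stands in for a CompleteAPIEnablements condition
}

// fits decides whether the cluster can host a resource of the given API.
func fits(c Cluster, gvk string) bool {
	if !c.APIEnablementsComplete {
		// The enablement list is not trustworthy yet, so do not treat an
		// empty list as "serves nothing".
		return true
	}
	for _, e := range c.APIEnablements {
		if e == gvk {
			return true
		}
	}
	return false
}

func main() {
	recovering := Cluster{Name: "member1", APIEnablementsComplete: false}
	fmt.Println(fits(recovering, "apps/v1/Deployment")) // true: kept in the candidate list
}
```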
Please provide an in-depth description of the question you have:
I don't want Karmada to trigger a failover when a cluster is unreachable. I tried to disable the feature gate directly in karmada-controller and found that the failover still occurs!
What do you think about this question?:
I went and looked at the Karmada implementation. The cluster-controller does check the failover feature gate in the monitor path, but the taintClusterByCondition method lacks that check, which leads to the taint being applied and ultimately to failover behaviour even though the feature gate is disabled.
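A hedged sketch of the kind of guard being described here; the helper names and taint key are hypothetical, and this is not Karmada's real taintClusterByCondition implementation:

```go
// Illustrative sketch only (hypothetical helpers and taint key, not Karmada's
// actual taintClusterByCondition): the taint path applies the same failover
// feature-gate check as the monitor path, so no taint is added when failover
// is disabled.
package main

import "fmt"

// failoverEnabled stands in for the controller's feature-gate lookup.
var failoverEnabled = false

type Cluster struct {
	Name     string
	NotReady bool
	Taints   []string
}

// taintClusterByCondition adds a not-ready taint for an unhealthy cluster, but
// only when the failover feature gate is enabled; without this early return the
// taint is applied and failover-like eviction follows even with the gate off.
func taintClusterByCondition(c *Cluster) {
	if !failoverEnabled {
		return
	}
	if c.NotReady {
		c.Taints = append(c.Taints, "example.karmada.io/not-ready")
	}
}

func main() {
	c := &Cluster{Name: "member1", NotReady: true}
	taintClusterByCondition(c)
	fmt.Println(c.Taints) // []: no taint because the failover gate is disabled
}
```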
Environment: