Azure Scale is missing nodes and has ghost nodes #59
If I now scale up to 4 managers, at least I have 3 now.
Possibly there is a restriction on how many nodes can be scaled at the same time, though I don't see any mention of one in the docs. https://docs.docker.com/docker-for-azure/why/
For a reliable cluster to exist, the ideal number of managers is 3. When you scale up, can you ssh into the machines and see what the
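One way to do the suggested check is to run `docker node ls` over SSH on an existing manager and count how many managers actually report a healthy manager status. The helper below is a hypothetical sketch: the function name and the sample output are illustrative assumptions, though the column layout follows `docker node ls`.

```python
# Hypothetical helper: given captured `docker node ls` output, count managers
# that report Leader or Reachable in the MANAGER STATUS column.
def count_reachable_managers(node_ls_output: str) -> int:
    managers = 0
    for line in node_ls_output.strip().splitlines()[1:]:  # skip the header row
        # "Unreachable" does not contain the capitalized "Reachable" token,
        # so a simple substring test distinguishes healthy managers.
        if "Leader" in line or "Reachable" in line:
            managers += 1
    return managers

# Illustrative sample output; hostnames are assumptions.
sample = """\
ID        HOSTNAME             STATUS  AVAILABILITY  MANAGER STATUS
abc *     swarm-manager000000  Ready   Active        Leader
def       swarm-manager000001  Ready   Active        Reachable
ghi       swarm-manager000002  Down    Active        Unreachable
"""
print(count_reachable_managers(sample))  # 2
```

If the count stays below the VMSS instance count after scaling, the new instances never completed their join.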
Tried to scale another manager today, and the new instance didn't join the swarm.
The only error I could see:
Actually, probably ignore that previous error; I see the same error on instances that have correctly joined the swarm. Comparing the last log entries of a failed instance to a successful instance, the failed instance's logs just seem to stop after the first ProcessGoalState log entry.

Failed:
Successful:
Though on another new scale set instance, the logs end at op=ProcessGoalState, message=Incarnation 1, and yet this instance has joined the swarm.
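The comparison being made here can be automated: scan the tail of a VM's agent log and report whether logging continued past the first ProcessGoalState entry. This is a hypothetical diagnostic sketch; only the ProcessGoalState marker string comes from the logs quoted above, the function name and sample lines are assumptions.

```python
# Hypothetical diagnostic: True if the first ProcessGoalState entry is also
# the last log line (the "stalled" pattern seen on some failed instances).
def stalls_after_first_goal_state(log_lines):
    for i, line in enumerate(log_lines):
        if "ProcessGoalState" in line:
            return i == len(log_lines) - 1  # stalled if nothing follows it
    return False  # marker never seen: not this failure pattern

# Illustrative log tails (assumptions, not real captures).
failed = ["starting agent", "op=ProcessGoalState, message=Incarnation 1"]
ok = ["starting agent", "op=ProcessGoalState, message=Incarnation 1", "joined swarm"]
print(stalls_after_first_goal_state(failed))  # True
print(stalls_after_first_goal_state(ok))      # False
```

As the instance above shows, a log ending at ProcessGoalState is not a reliable failure signal on its own, so this check can only narrow down candidates.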
@djeeg what about logs from the
Ahh, so there should be a container init-azure; I did wonder. From what I recall the only container in Let me do some more scaling to see if I can confirm that behaviour is happening on these orphan nodes. The first scaled node I tried today has correctly joined. =/
Okay, I have got it to create these ghost nodes; it seems that if I scale by more than 1 VM instance at a time it reproduces more reliably.
Then the init logs of the failing VM instance:
Expected behavior
Swarm nodes match nodes in VMSS in Azure
Actual behavior
Missing nodes
Ghost nodes
Information
Started with 3 nodes
Scale manager nodes in Azure from 1 to 3
Wait a while, first new manager is detected
Wait a bit longer, something odd is going on here
Wait an hour, swarm still not formed, lots of extra nodes (put manager000002 in drain)
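The expected/actual mismatch described above (swarm nodes vs. VMSS instances) boils down to a set difference between the VMSS instance names and the hostnames reported by `docker node ls`. A minimal sketch, assuming you have already captured both name lists (the function name and hostnames are illustrative assumptions):

```python
# Hypothetical check: compare VMSS instance names (e.g. from the Azure portal
# or `az vmss list-instances`) against swarm hostnames (from `docker node ls`).
def diff_nodes(vmss_instances, swarm_hostnames):
    """Return (missing_from_swarm, ghosts_in_swarm) as sorted lists."""
    vmss = set(vmss_instances)
    swarm = set(swarm_hostnames)
    missing = sorted(vmss - swarm)   # VMs that never joined the swarm
    ghosts = sorted(swarm - vmss)    # swarm entries with no backing VM
    return missing, ghosts

# Illustrative inputs (assumptions).
missing, ghosts = diff_nodes(
    ["swarm-manager000000", "swarm-manager000001", "swarm-manager000002"],
    ["swarm-manager000000", "swarm-manager000003"],
)
print(missing)  # ['swarm-manager000001', 'swarm-manager000002']
print(ghosts)   # ['swarm-manager000003']
```

Both lists should normally be empty after scaling settles; any ghost entries would also be the candidates to `docker node rm` once drained.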