dependency on provisioner node #436
Comments
Based on Tendrl/tendrl-ansible#27, tendrl-ansible no longer sets the provisioner tag. I would say that this implies that moving the tag is something Tendrl itself has to handle. @mkudlej Btw, does Tendrl tell you in the UI which machine is the provisioner?
@mbukatov There is no info about whether a machine is the provisioner or not, so I expect that Tendrl should deal with provisioner failures.
This is in progress. Also, please file an issue against the UI so that the "provisioner" tag is shown in the node list.
This is done, please verify the scenario |
Checking with:
Based on this gist, I have identified the provisioner node:
And then I shut it down. While the UI reports it as down, the machine still has the provisioner tag:
That said, when I loop over all nodes, I see that one new machine has been assigned the provisioner role, so there are now 2 nodes with this role (one shut down, one running):
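(The gist and its output are not preserved in this thread. A minimal sketch of that kind of check, assuming the etcd v2 API and that Tendrl keeps node metadata under `/nodes/<node-id>/NodeContext`; the key paths and the `<node-id>` placeholder are assumptions, not taken from the gist:)

```sh
# Sketch only: assumes the etcd v2 API and a /nodes/<node-id>/NodeContext
# key layout; adjust host, port and paths to your deployment.
export ETCDCTL_API=2

# list all node ids known to Tendrl
etcdctl ls /nodes

# show the tags of a single node; the provisioner node carries a "provisioner" tag
etcdctl get /nodes/<node-id>/NodeContext/tags
```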
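(The actual loop is not preserved here; a rough sketch of such a check, under the same assumptions about the etcd layout as above:)

```sh
# Sketch: print every node whose tags mention "provisioner"
# (the /nodes/<node-id>/NodeContext/tags path is an assumption).
export ETCDCTL_API=2
for node in $(etcdctl ls /nodes | sed 's|^/nodes/||'); do
  tags=$(etcdctl get "/nodes/${node}/NodeContext/tags" 2>/dev/null)
  if echo "${tags}" | grep -q provisioner; then
    echo "${node}: ${tags}"
  fi
done
```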
Now when I start the gl2 machine (the original provisioner) again, I see:
So now we have 2 running provisioner nodes. Wouldn't that be a problem?
Please verify now; I have made "tendrl/monitor" take responsibility for re-claiming any old provisioner tags.
On a new cluster instance, I see that when I import the cluster, there are 2 provisioners already (and I haven't powered down any node yet):
I made a mistake and didn't check for this scenario during #436 (comment), so I'm not sure whether this is new behavior or not. Is this expected behavior? I'm using:
After checking the status as described in #436 (comment), I powered down the 2 machines with the provisioner tag. The next morning, I rechecked which nodes are tagged as provisioner (so that Tendrl had a few hours to adapt):
Translating the node id of the new provisioner to its fqdn:
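(The command used is not preserved; assuming the fqdn is stored next to the tags in NodeContext, the lookup could look roughly like this:)

```sh
# Sketch: map a node id from the listing above to its fqdn
# (the /nodes/<node-id>/NodeContext/fqdn path is an assumption).
export ETCDCTL_API=2
etcdctl get "/nodes/<node-id>/NodeContext/fqdn"
```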
To sum it up: the 2 nodes I turned off no longer have the provisioner tag, and another node, which is still running, was labeled as provisioner instead. Now I have only a single provisioner node, which is the expected behavior. The remaining question is why I started with 2 provisioner nodes, as shown in #436 (comment).
Now Gluster volume monitoring depends on the provisioner node. What should the user do to eliminate the possibility of the provisioner node failing? Should the user set up another provisioner node?
@Tendrl/tendrl-core @mbukatov