Subdaemon heartbeat with modified libqb (async API for connect) #2588
Conversation
This approach seems reasonable to me
force-pushed from bd94be0 to f07e7b3
Looks good. Are you planning to keep the old code if the new libqb API isn't available?
The last push also solves the issue with a hanging shutdown when there were subdaemons that weren't observed as children of pacemakerd (signal).
The build issue is some repository issue with Tumbleweed.
Do you know if that hang was a regression in a released version?
I think this is a good approach, we just need a fallback for when the libqb API isn't available
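A minimal sketch of how such a fallback could be structured, assuming HAVE_QB_IPCC_CONNECT_ASYNC is the macro that AC_CHECK_FUNCS(qb_ipcc_connect_async) defines; the buffer size and the probe_ipc_endpoint() helper are hypothetical, not pacemakerd's actual code:

```c
#include <stdbool.h>
#include <qb/qbipcc.h>

#define PROBE_MAX_MSG_SIZE (128 * 1024) /* arbitrary IPC buffer size */

/* Hypothetical liveness probe: prefer the non-blocking connect when the
 * modified libqb provides it, otherwise fall back to the classic blocking
 * qb_ipcc_connect() that older code effectively relied on. */
static bool
probe_ipc_endpoint(const char *ipc_name)
{
    qb_ipcc_connection_t *c = NULL;

#ifdef HAVE_QB_IPCC_CONNECT_ASYNC
    int fd = -1;

    /* Assumed signature from ClusterLabs/libqb#450: returns a half-open
     * connection plus a pollable fd. Real code would register the fd with
     * the mainloop and finish the handshake via qb_ipcc_connect_continue(),
     * so an unresponsive subdaemon cannot block pacemakerd. */
    c = qb_ipcc_connect_async(ipc_name, PROBE_MAX_MSG_SIZE, &fd);
#else
    /* Blocking variant: may hang if the subdaemon is stopped or its
     * mainloop is busy, hence only a fallback. */
    c = qb_ipcc_connect(ipc_name, PROBE_MAX_MSG_SIZE);
#endif

    if (c == NULL) {
        return false;
    }
    qb_ipcc_disconnect(c);
    return true;
}
```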
pcmk_children[next_child].name,
(long long) PCMK__SPECIAL_PID_AS_0(
    pcmk_children[next_child].pid),
(rc == pcmk_rc_ipc_pid_only)? " as IPC server" : "");
Now that we don't fall through we don't need this check or the similar one below
oops
No. I never tested much with pre-existing daemons.
force-pushed from f07e7b3 to a84bd9b
configure.ac (outdated diff)
@@ -1316,7 +1316,7 @@ AC_CHECK_FUNCS(qb_ipcc_connect_async,

 dnl libqb 2.0.2+ (2020-10)
 AC_CHECK_FUNCS(qb_ipcc_auth_get,
-    AC_DEFINE(HAVE_IPCC_AUTH_GET, 1,
+    AC_DEFINE(HAVE_QB_IPCC_AUTH_GET, 1,
Actually (thankfully) this shouldn't be necessary. AC_CHECK_FUNCS() will already define that, so we were just unnecessarily defining the alternate name. We can just drop the second argument (i.e. the AC_DEFINE) altogether.
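For context, AC_CHECK_FUNCS(qb_ipcc_auth_get) already puts HAVE_QB_IPCC_AUTH_GET into config.h, so C code can test that macro directly; a minimal illustration, assuming the qb_ipcc_auth_get() signature from libqb 2.0.2 and using a purely hypothetical helper name:

```c
#include <errno.h>
#include <sys/types.h>
#include <qb/qbipcc.h>

/* Hypothetical helper: ask libqb for the server's credentials when
 * qb_ipcc_auth_get() is available. HAVE_QB_IPCC_AUTH_GET is defined
 * automatically by AC_CHECK_FUNCS(qb_ipcc_auth_get), without any
 * additional AC_DEFINE in configure.ac. */
static int
peer_pid_of(qb_ipcc_connection_t *c, pid_t *pid)
{
#ifdef HAVE_QB_IPCC_AUTH_GET
    uid_t uid = 0;
    gid_t gid = 0;

    return qb_ipcc_auth_get(c, pid, &uid, &gid);
#else
    (void) c;
    *pid = 0;
    return -ENOTSUP; /* no way to query the server's identity with older libqb */
#endif
}
```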
Then let's make it consistent if we are sure it works the same on all platforms/versions we do support.
force-pushed from a84bd9b to f62b28f
Remove superfluous AC_DEFINE - one of them with typo
force-pushed from f62b28f to 8e8a4a3
(long long) PCMK__SPECIAL_PID_AS_0(
    pcmk_children[next_child].pid),
pcmk_children[next_child].check_count);
stop_child(&pcmk_children[next_child], SIGKILL);
In public clouds, it nowadays happens more often than before that sub-daemons become unresponsive to IPC and get respawned.
As we know, if it's the controller that respawns, the node loses all its transient attributes in the CIB status without them being written again. Not only are the resources that rely on those attributes impacted, but the missing internal attribute #feature-set also results in a confusing MIXED-VERSION condition being shown by interfaces like crm_mon.
So far PCMK_fail_fast=yes is probably the only workaround to get the situation back to sanity, but of course at the cost of a node reboot.
While we've been trying to address this with ideas like #1699, I'm not sure whether it would make sense to increase the tolerance here, e.g. via PCMK_PROCESS_CHECK_RETRIES, or to make it configurable... Or should we say that 5 failures in a row are bad enough anyway to trigger a recovery?
Sorry, I may be missing the reason for your comment here.
Previously, IPC wasn't checked on a periodic basis for all subdaemons.
The numbers are somewhat arbitrary. 1s is roughly the lower limit that makes sense for retries. Failing after 5 retries was an attempt to keep it as reactive as before in the cases where IPC was already being checked.
Nothing is wrong with the changes in this PR. I'm just bringing up the topic in this context :-)
Coincidentally I recently created https://projects.clusterlabs.org/T950 regarding this code, but it's not related unless you've only seen issues at cluster shutdown.
https://projects.clusterlabs.org/T73 is not directly related either but could affect the timing.
There is a 1s delay between checks of all subdaemons, so if they're all up, that's at least 6s between checks for any one subdaemon. 5 tries (30s) does seem plenty of time, so I wouldn't want to raise that. If a cloud host can't get enough cycles in 30s to respond to a check, it's probably unsuitable as an HA node.
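As a back-of-the-envelope check of that estimate (hypothetically assuming six subdaemons and the values discussed above), the worst-case detection time works out as follows:

```c
#include <stdio.h>

int main(void)
{
    const int subdaemons = 6;            /* e.g. based, fenced, execd, attrd, schedulerd, controld */
    const int delay_between_checks = 1;  /* seconds between individual probes */
    const int max_failed_checks = 5;     /* consecutive failures before recovery */

    int per_daemon_interval = subdaemons * delay_between_checks;        /* ~6s  */
    int worst_case_detection = per_daemon_interval * max_failed_checks; /* ~30s */

    printf("each subdaemon is probed roughly every %ds;\n"
           "an unresponsive one is acted on after roughly %ds\n",
           per_daemon_interval, worst_case_detection);
    return 0;
}
```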
Thanks for the info and opinion. I agree.
Keep pacemakerd tracking subdaemons for liveness - currently via qb-ipc-connect and the packets exchanged for authentication.
qb-ipc-connect in current libqb blocks for an indefinite time if the subdaemon is unresponsive (e.g. SIGSTOP or a busy mainloop).
Thus there is an experimental API extension to libqb (ClusterLabs/libqb#450) that makes it possible to deal with that without needing ugly workarounds.
This is also the reason why CI is expected to fail at this point, as upstream libqb master is missing the API extension.
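A minimal sketch of the intended non-blocking handshake, assuming the qb_ipcc_connect_async() / qb_ipcc_connect_continue() pair from ClusterLabs/libqb#450 (signatures taken from that PR, not from a released libqb); the poll() call stands in for pacemakerd's mainloop integration, and probe_subdaemon() plus the buffer size are illustrative:

```c
#include <poll.h>
#include <stdbool.h>
#include <qb/qbipcc.h>

/* Sketch only: connect without blocking, wait a bounded time for the
 * authentication reply, and report whether the subdaemon responded. */
static bool
probe_subdaemon(const char *ipc_name, int timeout_ms)
{
    int fd = -1;
    qb_ipcc_connection_t *c = qb_ipcc_connect_async(ipc_name, 128 * 1024, &fd);

    if (c == NULL) {
        return false; /* IPC endpoint not even present */
    }

    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    if (poll(&pfd, 1, timeout_ms) != 1) {
        /* No reply within the timeout: unlike plain qb_ipcc_connect(), a
         * SIGSTOPped or busy subdaemon can no longer hang us indefinitely. */
        qb_ipcc_disconnect(c);
        return false;
    }

    if (qb_ipcc_connect_continue(c) != 0) {
        /* Handshake failed; whether the connection still needs explicit
         * cleanup here depends on the final shape of the API. */
        return false;
    }

    qb_ipcc_disconnect(c);
    return true;
}
```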