Some datanode is killed for unknown reason #1658
Comments
I see the "cannot allocate memory" error. Is it due to OOM?
It seems to be due to our resource limitation; docker events show the container is in OOM status.
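For reference, Docker records whether a container was OOM-killed in its inspect output (`State.OOMKilled`). A minimal sketch of checking that flag, where the container name `datanode` is only a placeholder:

```python
import json
import subprocess

def was_oom_killed(container: str) -> bool:
    """Return True if Docker reports the container was OOM-killed."""
    # `docker inspect` returns a JSON array; the State object carries the OOMKilled flag.
    out = subprocess.run(
        ["docker", "inspect", container],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    return bool(state.get("OOMKilled"))

if __name__ == "__main__":
    # "datanode" is a placeholder; substitute the actual container name or ID.
    print(was_oom_killed("datanode"))
```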
@mzmssg, can you explain it in detail? Whose readiness probe? I am not aware that we have a readiness check in the DataNode. Will k8s kill a pod when it is not ready? I think k8s will just show that the pod is not ready, but not kill it.
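For what it's worth, a failing readiness probe only marks the pod NotReady and removes it from Service endpoints; it is a failing liveness probe or an OOM kill that restarts the container. A minimal sketch for telling the two apart with the kubernetes Python client (the pod name and namespace below are placeholders):

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

# Placeholder pod name and namespace; substitute the real DataNode pod.
pod = v1.read_namespaced_pod(name="hadoop-data-node-175", namespace="default")

# Readiness failures only show up as a False "Ready" condition.
for cond in pod.status.conditions or []:
    print(cond.type, cond.status)

# A cgroup OOM kill shows up as a container restart with last termination reason "OOMKilled".
for cs in pod.status.container_statuses or []:
    last = cs.last_state.terminated
    print(cs.name, cs.restart_count, last.reason if last else None)
```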
@fanyangCS So the story should be:
Other details for your reference:
For the 175 node, it should be a cgroup OOM, which means the container's memory usage exceeded the limit. We have observed 351 restarts since 11/1 and counted the matching 'oom killing' entries in the system log. (The log below shows 353, which includes kills from before 11/1.)
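As a rough illustration, counting kernel OOM-kill messages on the host could look like the sketch below; the log path and the matching patterns are assumptions and should be adjusted to the actual log format:

```python
import re

# Assumptions: on many hosts the kernel OOM killer logs lines containing
# "invoked oom-killer" or "Killed process" in /var/log/syslog (or kern.log).
LOG_PATH = "/var/log/syslog"
PATTERN = re.compile(r"oom-killer|Killed process", re.IGNORECASE)

count = 0
with open(LOG_PATH, errors="replace") as fh:
    for line in fh:
        if PATTERN.search(line):
            count += 1

print(f"{count} OOM-related lines in {LOG_PATH}")
```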
@fanyangCS @hao1939 Should we also increase the value in the fix?
Fixed in PR #1689.
(Images for the 171, 173, and 175 nodes were included here.)
Not sure what killed them, but it might result in HDFS access failure.