Fix NPE in KafkaSupervisor.checkpointTaskGroup #6206
Conversation
Seems reasonable to me 👍
taskGroupsToVerify.put(taskGroupId, taskGroup);
final TaskData prevTaskGroup = taskGroup.tasks.putIfAbsent(taskId, new TaskData());
if (prevTaskGroup != null) {
  throw new ISE(
Would it be very surprising if this happened? Enough to stop the supervisor run (i.e., a probable bug)?
This should never happen, and even if it does, the supervisor would kill the task with the corresponding taskId and respawn the same task. Please check https://github.com/apache/incubator-druid/pull/6206/files/c46d4681c334709caa5cddbd0ce0c67a7d22eaad#diff-6eee87b3aa4eb3a516965fe6e93e25a4L1138.
Got it, thanks.
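For readers following along, here is a minimal, self-contained sketch (plain ConcurrentHashMap semantics, not Druid code; the class and task names are made up) of why a non-null return from putIfAbsent signals a duplicate taskId registration worth failing fast on:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo
{
  // Stand-in for the supervisor's per-task bookkeeping; the real TaskData lives in KafkaSupervisor.
  static class TaskData
  {
  }

  public static void main(String[] args)
  {
    ConcurrentHashMap<String, TaskData> tasks = new ConcurrentHashMap<>();

    // First registration: no existing mapping, so putIfAbsent returns null.
    TaskData first = tasks.putIfAbsent("index_kafka_example_task", new TaskData());
    System.out.println(first == null); // true

    // Registering the same taskId again returns the existing value instead of replacing it.
    // A non-null return is exactly the condition the patch treats as an illegal state (ISE).
    TaskData duplicate = tasks.putIfAbsent("index_kafka_example_task", new TaskData());
    System.out.println(duplicate != null); // true -> duplicate taskId in the task group
  }
}
```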
@Nullable
@Override
public Map<Integer, Long> apply(@Nullable Object input)
// task.status can be null if any runNotice is processed before kafkaSupervisor is stopped gracefully.
Is this comment right? It sounds like it would probably be the other way around (the graceful stop happens before the run notices are processed).
Good catch. Fixed.
  }
} else {
  log.info("Killing task [%s] of unknown status", taskId);
Do we really want to kill the task in this case -- I thought it could only happen for a supervisor that is stopping gracefully? Maybe we should just ignore the task, and log a warning, rather than killing it?
From the perspective of checkpointing, if I understand this code correctly, the supervisor is checkpointing because one of the tasks in a taskGroup has processed all of its assigned events, so all tasks in the taskGroup can be stopped or killed.
I'm not sure why this code is called when stopping the supervisor though.
Probably we shouldn't checkpoint while stopping the supervisor, but that would be a different issue.
Hmm, maybe it makes sense to checkpoint: the supervisor should wait for tasks to finish their jobs, and they should be able to checkpoint in the middle of indexing.
I think that we don't want to stop/kill all tasks in the taskGroup just because one of them has processed all assigned events. It could be a checkpoint for an incremental handoff, and we want all tasks to continue running even after the checkpoint. Am I understanding this right?
In other words, it sounds to me like we want to stop/kill all other tasks if any of them has finished (status = success) but we don't want to stop/kill them if it was an incremental handoff.
I forgot to say one more thing. This code is called only when tasks have been running longer than taskDuration. I don't know the idea behind doing a checkpoint per taskDuration, but it expects to stop/kill all running tasks. See https://github.com/apache/incubator-druid/blob/master/extensions-core/kafka-indexing-service/src/main/java/io/druid/indexing/kafka/supervisor/KafkaSupervisor.java#L1433.
I read the code more closely and now I see that the idea is that at taskDuration, tasks should do a final publish and exit. So that's what finalize is for. The checkpointTaskGroup function, when finalize is true, will check if any task completed, and if so, stop all its replicas. This makes sense, since there is no point in replicas continuing to run if some task in the group is done. (Because they are all doing the same work.)
With your patch, checkpointTaskGroup, when finalize is true, will now kill any task that has null status. I don't see why this is a good thing. After the taskDuration is over, we want to trigger a final checkpoint/publish, and then let all tasks in a group keep running until one of them is successful. Killing one with unknown status seems counter-productive to that goal.
Am I wrong -- is there a reason it's a good idea to kill tasks with unknown status in this case?
Hmm, good point. I thought it makes sense to kill them because the supervisor is currently killing running tasks if they are not allocated to middleManagers yet.
Maybe it makes more sense to keep them because the unknown task status indicates that the supervisor hasn't updated it yet. I'll fix this.
Fixed.
> Hmm, good point. I thought it makes sense to kill them because the supervisor is currently killing running tasks if they are not allocated to middleManagers yet.
I think killing unassigned running tasks does make sense, since if a task hasn't even started running yet, it has no hope of catching up so we should just cancel it. However, if there is some risk that the task actually is running but the supervisor just doesn't know where yet, this killing might be over-eager. If that's the case I think it'd be an issue for a separate PR though.
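To make the resolution of this thread concrete, here is a hedged sketch (stand-in class names and a plain String status, not the actual KafkaSupervisor types) of the behavior agreed on above: while checkpointing/finalizing a task group, a task whose status the supervisor hasn't refreshed yet is kept rather than killed.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch only; TaskData and the status values are simplified stand-ins.
public class FinalizeTaskGroupSketch
{
  static class TaskData
  {
    String status; // stays null until the supervisor refreshes it from the task runner
  }

  public static void main(String[] args)
  {
    Map<String, TaskData> taskGroup = new HashMap<>();

    TaskData finished = new TaskData();
    finished.status = "SUCCESS";
    taskGroup.put("replica_that_finished", finished);
    taskGroup.put("replica_not_yet_refreshed", new TaskData()); // status is still null

    for (Map.Entry<String, TaskData> entry : taskGroup.entrySet()) {
      final String taskId = entry.getKey();
      final TaskData taskData = entry.getValue();

      if (taskData.status == null) {
        // Unknown status only means the supervisor hasn't updated it yet, so keep the
        // task running; it can still checkpoint and publish on its own.
        System.out.println("keeping " + taskId + " (status unknown)");
      } else if ("SUCCESS".equals(taskData.status)) {
        // One replica finished; the other replicas do identical work, so the rest of
        // the group can be stopped once the final checkpoint/publish is triggered.
        System.out.println(taskId + " succeeded; remaining replicas can be stopped");
      }
    }
  }
}
```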
final String taskId = entry.getKey();
final TaskData taskData = entry.getValue();

Preconditions.checkNotNull(taskData.status, "task[%s] has a null status", taskId);
When will this get thrown and what will happen when it gets thrown? I'm wondering what the user experience is going to be like.
This should never happen. An NPE would be thrown if taskData.status is null, and this is just a sanity check to show what is null at the site of the potential NPE. I improved the error message.
Thanks.
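For reference, this is how Guava's Preconditions.checkNotNull behaves with a message template (standard Guava behavior, illustrated with a made-up taskId): the %s placeholders are filled in and a NullPointerException with that message is thrown, which is what turns the bare NPE into a self-describing failure.

```java
import com.google.common.base.Preconditions;

public class CheckNotNullMessageDemo
{
  public static void main(String[] args)
  {
    String taskId = "index_kafka_example_task"; // made-up id for illustration
    Object status = null;                       // simulates a status that was never set

    try {
      Preconditions.checkNotNull(status, "task[%s] has a null status", taskId);
    }
    catch (NullPointerException e) {
      // Prints: task[index_kafka_example_task] has a null status
      System.out.println(e.getMessage());
    }
  }
}
```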
@@ -1714,6 +1755,8 @@ private void checkCurrentTaskState() throws ExecutionException, InterruptedExcep
  continue;
}

Preconditions.checkNotNull(taskData.status, "task[%s] has a null status", taskId);
Similar question here -- when will this get thrown and what will happen when it gets thrown?
Same here. This should never happen. An NPE would be thrown if taskData.status is null, and this is just a sanity check to show what is null at the site of the potential NPE. I improved the error message.
Thanks.
final String taskId = entry.getKey();
final TaskData taskData = entry.getValue();
if (taskData.status == null) {
  killTask(taskId);
Is killing the task the right idea here? If we don't know its status, is it safe to leave it alone?
A null taskData.status means that the supervisor hasn't updated it yet, so its actual status can be anything. I think this should kill tasks whose status is unknown, because this method is supposed to stop all tasks in the given taskGroup.
Got it, that sounds good.
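For contrast with the checkpoint/finalize discussion earlier, here is a hedged sketch (again with stand-in names and printed actions, not the real KafkaSupervisor members) of this "stop every task in the group" path: a task whose status is still unknown is killed outright, because the whole group is being shut down anyway.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch only; killing vs. stopping is printed instead of calling real task APIs.
public class StopTaskGroupSketch
{
  static class TaskData
  {
    String status; // null if the supervisor never refreshed it before the shutdown
  }

  public static void main(String[] args)
  {
    Map<String, TaskData> taskGroup = new HashMap<>();

    TaskData running = new TaskData();
    running.status = "RUNNING";
    taskGroup.put("task_with_known_status", running);
    taskGroup.put("task_with_unknown_status", new TaskData()); // status stays null

    for (Map.Entry<String, TaskData> entry : taskGroup.entrySet()) {
      final String taskId = entry.getKey();
      final TaskData taskData = entry.getValue();

      if (taskData.status == null) {
        // No runNotice ever updated this entry; since the whole group is being torn
        // down regardless, killing it outright is the simplest safe action here.
        System.out.println("kill " + taskId);
      } else {
        // Tasks with a known status get an orderly stop so they can publish first.
        System.out.println("stop " + taskId);
      }
    }
  }
}
```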
I checked out the TeamCity failures; they are all fixed in #6236 and are not related to this patch. The Travis failure looked spurious, so I retried it.
LGTM after Travis. I think we can ignore TeamCity for this one, since the inspections it flagged are not related to this patch, and should be addressed in #6236.
Merged master into this branch to get the fixes from #6236. Let's see how this goes.
Seeing these, possibly legitimate failures?
@jihoonson Got it, could you please merge master into this branch in order to get that?
Oh wait, it passed anyway. I guess #6207 isn't required.
* Fix NPE in KafkaSupervisor.checkpointTaskGroup
* address comments
* address comment
Hopefully fixes #6021.
TaskData.status and TaskData.startTime can be null if the supervisor is stopped gracefully before processing any runNotice which sets them properly.
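As a rough illustration of the bug being fixed (simplified stand-in classes, not the actual Druid types): if the supervisor stops before any runNotice populates the per-task state, later code that dereferences the status hits an NPE.

```java
public class NullStatusNpeDemo
{
  // Simplified stand-ins for the supervisor's per-task bookkeeping.
  static class TaskStatus
  {
    boolean isSuccess()
    {
      return true;
    }
  }

  static class TaskData
  {
    TaskStatus status;  // normally filled in by the first processed runNotice
    Long startTime;     // likewise null until a runNotice sets it
  }

  public static void main(String[] args)
  {
    TaskData taskData = new TaskData(); // supervisor stopped before any runNotice ran

    try {
      // Dereferencing the never-initialized status is the kind of call that blew up
      // with a NullPointerException before the patch added explicit null handling.
      System.out.println(taskData.status.isSuccess());
    }
    catch (NullPointerException e) {
      System.out.println("NPE: task status was never initialized");
    }
  }
}
```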