Rendezvous after receiving "exit" notification #159
Conversation
Force-pushed 396bc0a to 62aba26
Much respect to the master!
@@ -91,6 +92,10 @@ struct Index_Request {
   int64_t ts = tick++;
 };
+
+std::mutex thread_mtx;
+std::condition_variable no_pending_threads;
+int pending_threads;
make this atomic as well?
pending_threads is protected by thread_mtx. It doesn't need to be atomic.
manager.Quit();

{ std::lock_guard lock(index_request->mutex_); }
indexer_waiter->cv.notify_all();
It's better to protect the notify with the mutex; otherwise, if the waiting thread is between its test and its wait when the notify fires, the signal would be lost.
The only execution sequence which can lead to a lost signal is:

1. waiter: lock
2. waiter: test
3. signaler: change pipeline::quit to true
4. signaler: notify
5. waiter: unlock (and wait)

By having a lock_guard between pipeline::quit = true; and indexer_waiter->cv.notify_all();, we guarantee that this scenario will not happen.
The code is correct. It is a bit obscure, however.
src/sema_manager.cc (Outdated)
@@ -432,6 +436,8 @@ void *CompletionMain(void *manager_) {
   set_thread_name("comp");
   while (true) {
     std::unique_ptr<SemaManager::CompTask> task = manager->comp_tasks.Dequeue();
+    if (pipeline::quit)
A full memory barrier is probably not necessary here?
ThreadedQueue::{Dequeue,PushBack} use a mutex internally, so access to pipeline::quit is guaranteed to be synchronized.
Yes, a seq_cst barrier is overkill. Consider the following:

if (pipeline::quit.load(std::memory_order_relaxed))
  ...

(or memory_order_acquire if it weren't protected by a mutex).
It seems everything is protected by a mutex. I can change every load of quit to use std::memory_order_relaxed. @lhmouse