Switch drain_all call to be called from new thread instead of event loop. #16739
Conversation
CodSpeed Performance Report: merging #16739 will not alter performance.
hmm i think it might be good to add a unit or integration test for this one! i remember some weird shutdown / deadlock here
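For reference, a hedged sketch of the kind of regression test this suggestion points at, assuming the reproduction path from the linked issue (reconfiguring logging non-incrementally triggers APILogHandler.flush() via logging.shutdown()). The pytest-timeout marker, logger name, and test name are illustrative assumptions, not part of this PR.

```python
import logging
from logging.config import dictConfig

import pytest

from prefect.logging.handlers import APILogHandler


@pytest.mark.timeout(30)  # assumes pytest-timeout; guards against a hang rather than a failure
def test_dictconfig_reconfiguration_does_not_deadlock():
    # Attach the API handler, then reconfigure logging non-incrementally.
    # incremental=False makes dictConfig flush and close existing handlers,
    # which is the code path that previously deadlocked.
    logging.getLogger("prefect.test").addHandler(APILogHandler())
    dictConfig({"version": 1, "incremental": False})
```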
I still seem to remember some issue about spawning a new thread at interpreter shutdown a la python/cpython#116677 but maybe this has since been fixed and released in a recent version of python 3.12 cc @jlowin (I remember we both ran into this)
anyways if we're passing tests and not seeing any issues, I think this makes sense
src/prefect/logging/handlers.py (Outdated)
@@ -118,7 +118,7 @@ def flush(cls) -> None:
        # Not ideal, but this method is called by the stdlib and cannot return a
        # coroutine so we just schedule the drain in the global loop thread and continue
comment should probably be updated to match new behavior
-        # coroutine so we just schedule the drain in the global loop thread and continue
+        # coroutine so we just schedule the drain in a new thread and continue
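For context, a minimal sketch of the behavior the updated comment describes, assuming drain_all() is an awaitable that drains pending log workers. The wrapper function, thread name, and placeholder drain_all below are illustrative, not Prefect's actual implementation.

```python
import asyncio
import threading


async def drain_all() -> None:
    # Placeholder for the real drain_all(), which awaits pending log workers;
    # here it just yields control once.
    await asyncio.sleep(0)


def flush() -> None:
    # Run the async drain on a brand-new thread with its own event loop, so the
    # stdlib logging machinery (which cannot await) never schedules work on the
    # global loop thread it might itself be blocking.
    thread = threading.Thread(target=lambda: asyncio.run(drain_all()), name="drain-logs")
    thread.start()
    thread.join()


flush()
```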
logger = logging.getLogger(__name__)
logger.setLevel("WARNING")

dictConfig({"version": 1, "incremental": False})
the bug ticket says the bug is triggered with incremental=True?
That was my mistake, it is incremental=False, as that is what makes configure call _clearExistingHandlers.
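A minimal illustration of that point, relying only on standard-library behavior: dictConfig() with incremental=False calls _clearExistingHandlers(), which flushes and closes every registered handler via logging.shutdown(), while incremental=True skips that step. The NoisyHandler below is a stand-in for Prefect's APILogHandler, not part of this PR.

```python
import logging
from logging.config import dictConfig


class NoisyHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        pass

    def flush(self) -> None:
        # In Prefect, this is roughly where APILogHandler.flush() kicks off the drain.
        print("flush called")
        super().flush()


logging.getLogger("demo").addHandler(NoisyHandler())

dictConfig({"version": 1, "incremental": True})   # handlers left alone, no flush
dictConfig({"version": 1, "incremental": False})  # existing handlers flushed and closed
```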
Moves the drain_all call to a new thread instead of the event loop.

Closes #16115