Improve the rate of thread injection for blocking due to sync-over-async #52558
Comments
I tried to code this a few years back, and discovered that the task scheduler itself is not a replaceable component. Nevertheless, here's logically what my never-executed code did: if you tried to do async over sync over async, it added another thread to the pool, and if it couldn't, it threw back to whoever called .GetResult() so it didn't deadlock. I never had to figure out when to stop adding threads because I couldn't get it running; I had some idea to trace the chains and determine whether there were still enough pool threads or not.
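As a rough illustration of that idea (not the original code: the helper name is made up, and ThreadPool.SetMinThreads stands in for "adding another thread to the pool"), a sketch could look like this:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical helper: before blocking a thread pool thread on an incomplete task,
// ask the pool to allow one more worker so a replacement can be created without the
// usual injection delay; if that is not possible, throw back to the caller instead
// of blocking and risking a deadlock.
static class BlockingGuard
{
    public static TResult GetResultOrThrow<TResult>(Task<TResult> task)
    {
        if (Thread.CurrentThread.IsThreadPoolThread && !task.IsCompleted)
        {
            ThreadPool.GetMinThreads(out int minWorkers, out int minIo);
            ThreadPool.GetMaxThreads(out int maxWorkers, out _);

            // Raising the minimum worker count lets the pool create a replacement
            // thread on demand rather than via its slow injection heuristic.
            if (minWorkers >= maxWorkers || !ThreadPool.SetMinThreads(minWorkers + 1, minIo))
            {
                throw new InvalidOperationException(
                    "Refusing to block a thread pool thread: the pool cannot add another worker.");
            }
        }

        // Same observable behavior as task.GetAwaiter().GetResult(): block until the
        // task completes and rethrow its exception without an AggregateException wrapper.
        return task.GetAwaiter().GetResult();
    }
}
```

A caller would use BlockingGuard.GetResultOrThrow(someTask) wherever it would otherwise write someTask.GetAwaiter().GetResult().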
@joshudson, I wonder why you needed to do this. I consider adding more threads to be only a workaround at best for the real problem, though there are fair reasons for running into that problem and fair reasons for having a reasonable workaround such as this. I'm curious what prompted you to explore this, and I wonder if there might be a better solution.
@kouvel: Two reasons. 1) Everybody was doing async over sync over async back then, including MVC Razor pages and FileStream itself. 2) I intended to catch exceptions escaping from the invocations of Task.ContinueWith() and pass them on to the next completion handler in line, so that things like the still-extant bugs in FileStream, where the IO completion callback can throw, wouldn't bring the whole application down. It wouldn't have worked very well for Task.WaitAll(), but it would have worked for anything using async/await.
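As a minimal sketch of that second point (the names are assumptions, not the original code): run the completion callback inside a guard that, if the callback throws, faults a task so the exception flows to the next handler in the chain instead of escaping and tearing the process down.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical guard around a completion callback.
static class ContinuationGuard
{
    public static Task RunGuarded(Action completionCallback)
    {
        var tcs = new TaskCompletionSource<object?>();
        try
        {
            completionCallback();
            tcs.SetResult(null);
        }
        catch (Exception ex)
        {
            // Hand the failure to whoever awaits or continues from the returned task.
            tcs.SetException(ex);
        }
        return tcs.Task;
    }
}

// Usage: the exception surfaces in the next handler rather than crashing the app.
// ContinuationGuard.RunGuarded(() => throw new Exception("completion failed"))
//     .ContinueWith(t => Console.WriteLine(t.Exception?.InnerException?.Message));
```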
I'm not familiar with this issue, and there are some other aspects of your scenario that I'm not familiar with. @stephentoub?
I don't know what this is referring to. What bugs?
@stephentoub: You aren't the one who has to take the call when the services collapse rather than recover after everybody using the server tries a memory-intensive process at the same time. However, I'd rather not debate this here. I provided some ideas from a path that didn't work out because I have some insight into the problem at hand.
@joshudson, from your example, anyway, I believe this fix would be configurable enough that you could set a configuration to make it work similarly. By default some throttling would be done, but it would still add threads much faster than currently.
@joshudson, to clarify, this issue is specific to Task.Wait() and similar Task-based synchronous calls from thread pool threads; for now, it would not cover other kinds of blocking, unless the sync part is waiting on a task. Threads would be added quickly to compensate (up to a limit of N additional threads), and progressive delays would be induced before adding each additional thread as the thread count increases (an increased delay for each subsequent batch of M additional threads).
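As an illustrative sketch of that throttling (the parameter names and formula are assumptions standing in for the issue's N and M, not the actual implementation): the first N compensating threads are added with no delay, and each subsequent batch of M threads waits one more delay step than the batch before it.

```csharp
using System;

static class InjectionDelay
{
    public static TimeSpan BeforeAddingNextThread(
        int threadsAlreadyAdded,        // compensating threads added so far
        int threadsToAddWithoutDelay,   // "N" in the proposal
        int batchSize,                  // "M" in the proposal
        TimeSpan delayStep)
    {
        if (threadsAlreadyAdded < threadsToAddWithoutDelay)
            return TimeSpan.Zero;

        // One delay step for the first batch beyond N, two for the second batch, and so on.
        int batchesBeyondLimit = (threadsAlreadyAdded - threadsToAddWithoutDelay) / batchSize + 1;
        return TimeSpan.FromMilliseconds(delayStep.TotalMilliseconds * batchesBeyondLimit);
    }
}
```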
@kouvel: We tracked down and eliminated every last instance in the years since, and we won't deliberately write any more because of the deadlocks. It used to arise due to mutually incompatible interfaces (mainly IEnumerable and IDisposable clashing with HttpClient, but also Stream clashing with itself before IAsyncDisposable was a thing). In theory, IAsyncEnumerable can still clash with HttpClient, but I doubt the case will arise in practice. "unless the sync part is waiting on a task" was indeed the case I was writing about.
Ok, then it sounds reasonable that a fix for this issue would be sufficient for your scenario (though perhaps with some config parameters to make it work the way you want).