Don't reuse connections when StreamReader has an exception #339
In our production environment, we occasionally experience connection timeouts that "stick" in the aiomysql Pool. Every subsequent connection acquired from the pool will fail on first use.
Full traceback from our production environment (with private code redacted): https://gist.github.com/TimothyFitz/2d929f907c4cad47a7c8da8e0944b19e
This PR adds a test that simulates our production failure by directly calling `connection_lost` on a connection in the free pool. The pool code already checks for gracefully closed connections via `_reader.at_eof()`. The fix is to also check `_reader.exception()`.

I think #132 accidentally conflated two issues, and the fix identified there is needed in addition to pool recycling: #132 (comment)
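For reference, here is a minimal sketch of the check this change implies, assuming the connection exposes its `asyncio.StreamReader` as `_reader` (as aiomysql's `Connection` does internally); the helper name is illustrative, not the exact patch:

```python
def connection_is_stale(conn) -> bool:
    """Return True if a pooled connection should be discarded instead of reused.

    Sketch only: ``conn._reader`` is assumed to be the asyncio.StreamReader held
    by aiomysql's Connection; attribute names may differ between versions.
    ``at_eof()`` catches a graceful close, while ``exception()`` catches a
    connection torn down by a timeout or reset -- the case this PR addresses.
    """
    reader = conn._reader
    return reader.at_eof() or reader.exception() is not None


# How the failure can be simulated (the PR's test delivers connection_lost,
# which has the same effect on the reader as setting the exception directly):
#
#     conn = await pool.acquire()
#     pool.release(conn)                       # conn now sits in the free pool
#     conn._reader.set_exception(OSError())    # reader broken, but at_eof() is False
#     assert connection_is_stale(conn)         # the fixed pool would now drop it
```

When `connection_lost(exc)` is delivered with a non-`None` exception, the `StreamReader` stores that exception but never reaches EOF, so a pool that only checks `at_eof()` keeps handing the broken connection back to callers; checking `exception()` as well lets the pool drop it.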
Tested on:
(production) Ubuntu 16.04.5 LTS, Python 3.5.2, aiomysql 0.0.17 + this patch
(dev laptop) Ubuntu 18.04.1 LTS, Python 3.6.5, this PR (master + patch)