#233 Added interface for performing resource clean up and updated tests #236
Conversation
Issue #233: Fixed resource clean-up in cached decorator.
Codecov Report
@@ Coverage Diff @@
## master #236 +/- ##
==========================================
+ Coverage 99% 99.03% +0.03%
==========================================
Files 12 12
Lines 705 727 +22
Branches 76 77 +1
==========================================
+ Hits 698 720 +22
Misses 7 7
Thanks for your work, looks good! One extra comment: could you try not to decrease the coverage? :) You can check what you are missing with `make cov`
aiocache/backends/redis.py
Outdated
@@ -121,6 +121,13 @@ def __init__(
        kwargs["encoding"] = encoding
        return await getattr(_conn, command)(*args, **kwargs)

    @conn
    async def _close(self, *args, _conn=None, **kwargs):
Close doesn't need the `_conn` to be propagated
Done.
aiocache/base.py
Outdated
@API.register
@API.aiocache_enabled()
@API.timeout
@API.plugins
Can you remove the `register`, `aiocache_enabled` and `plugins` decorators? Those decorators are for cache-specific commands
Done.
aiocache/decorators.py
Outdated
@@ -55,7 +55,9 @@ def cached_decorator(func):

        try:
            if await cache_instance.exists(cache_key):
                return await cache_instance.get(cache_key)
            result = await cache_instance.get(cache_key)
            await cache_instance.close()
Hmmm, we don't need to block the user until the pool is closed. `asyncio.ensure_future` would be sufficient here
I agree, but won't that bring back the warnings? The future created must still be awaited at some point.
I think it would be best to re-use the same connection within the decorator, hence my original proposal. Opening and closing the connection on each function call seems superfluous to me.
I agree with you on reusing the connection for the decorators; this is something I have in mind, because there are a couple of edge cases.
Regardless of whether the connection is reused, it's not worth delaying the response to the user while waiting for the pool to close. I prefer having a message saying Task X wasn't executed rather than unnecessarily delayed responses. We can think about how to close those pending connections later :)
I see where you are coming from, but I don't think this "delay" is as big a deal as you are making it out to be. Since the code is using asyncio, it is not as if the program will be idle whilst closing the connection; the event loop will context switch to another task.
I have made the changes and the error messages have now returned. I would advise either reverting to how I originally wrote it (awaiting the close), or reusing the same cache instance within a decorator. Leaving these connections and coroutines dangling to spew errors is more of a code smell than either of those options, in my opinion. Ultimately though, it is up to you.
Yeah, I'm not talking about that kind of speed but speed for a single call. In order for the function to return, it has to wait for the pool to close (and for the loop to reschedule it) when it really doesn't have to in order to return the result.
In an API under high load, this can lead to unnecessarily delayed calls. Imagine the following case:
- A new call arrives at the decorated function F.
- Another 1000 async calls arrive (they don't need to hit the same function, any async call will do).
- F gets the loop and executes until it reaches the `await close`. (I know there are more break points in the middle, but for the sake of simplicity let's imagine we are already there.) Because it hits the `await`, it is rescheduled to the back of the queue of tasks to execute (it's round robin).
- Until all 1000 async tasks have been processed (i.e. reached their first break point), the loop won't pick up F again, and it will pick it up just to return something it could have returned already.
Also, there can be side effects when trying to close the connection (https://github.com/aio-libs/aioredis/blob/master/aioredis/connection.py#L315) that raise exceptions.
Because of all that, I prefer to use `ensure_future`. I agree with you that it's a code smell and we won't be freeing all the resources, but I prefer it that way. I have in mind a way to solve these situations; when I mature the idea a bit more I'll ping you to validate it, if that's okay.
I hope this convinces you :)
Not entirely, I guess we will agree to disagree haha. I have made the changes to use `ensure_future` anyway.
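The trade-off debated above can be sketched with a toy cache; the class and function names here are hypothetical stand-ins, not aiocache's actual internals. The decorator-style wrapper returns its result immediately and schedules the pool teardown with `asyncio.ensure_future` instead of awaiting it, so the caller is never blocked on the close.

```python
import asyncio


class ToyCache:
    """Stand-in for a cache backend with an async connection pool."""

    def __init__(self):
        self.closed = False

    async def get(self, key):
        await asyncio.sleep(0)  # pretend to hit the backend
        return f"value-for-{key}"

    async def close(self):
        await asyncio.sleep(0)  # pretend to wait for the pool to drain
        self.closed = True


async def cached_call(cache, key):
    result = await cache.get(key)
    # Fire-and-forget: the caller gets its result without waiting for the
    # pool to close; the close task runs whenever the loop gets to it.
    asyncio.ensure_future(cache.close())
    return result


async def main():
    cache = ToyCache()
    result = await cached_call(cache, "k")
    print(result)         # value-for-k
    print(cache.closed)   # False: the close task is scheduled but not yet run
    await asyncio.sleep(0.01)  # yield so the loop can run the close task
    print(cache.closed)   # True


asyncio.run(main())
```

The downside, as noted in the thread, is that a task left unawaited at loop shutdown produces "Task was destroyed but it is pending!" warnings, which is exactly the error-message regression being discussed.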
aiocache/decorators.py
Outdated
@@ -67,6 +69,7 @@ def cached_decorator(func):
        except Exception:
            logger.exception("Unexpected error with %s", cache_instance)

        await cache_instance.close()
Same
aiocache/decorators.py
Outdated
@@ -145,6 +148,7 @@ def multi_cached_decorator(func):
        except Exception:
            logger.exception("Unexpected error with %s", cache_instance)

        await cache_instance.close()
Same
examples/cached_alias_config.py
Outdated
@@ -33,6 +33,7 @@
    assert await cache.get("key") == "value"
    assert isinstance(cache, SimpleMemoryCache)
    assert isinstance(cache.serializer, DefaultSerializer)
    await cache.close()
This is not ideal. `caches.get('default')` returns a singleton identified by 'default'. The user may want to use this cache in another function.
Global aliased caches should be closed when the program exits
The user can choose to close this cache whenever they want. This is just an example usage, no?
Yup, it's just for the sake of a "good" example. It's not ideal to close a singleton cache in the middle of the code :)
Fair enough, I've moved the closing of the cache into the body of test_alias.
All checks are now passing
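The "close global aliased caches when the program exits" advice above can be sketched in a self-contained way: a tiny registry of named singletons is used throughout the program, and every registered cache is closed once in the shutdown path rather than in the middle of the flow. The registry and cache classes here are hypothetical stand-ins for illustration, not aiocache's real implementation.

```python
import asyncio


class ToyCache:
    """Stand-in for an aliased singleton cache with an async pool."""

    def __init__(self):
        self.store = {}
        self.closed = False

    async def set(self, key, value):
        self.store[key] = value

    async def get(self, key):
        return self.store.get(key)

    async def close(self):
        self.closed = True


# Tiny registry of named singletons, mimicking the aliasing idea.
_registry = {"default": ToyCache()}


def get_cache(alias):
    return _registry[alias]  # same instance every time it is requested


async def do_work():
    cache = get_cache("default")
    await cache.set("key", "value")
    assert await cache.get("key") == "value"


async def main():
    try:
        await do_work()
        # ...the singleton stays usable for the rest of the program...
    finally:
        # Close every aliased cache exactly once, on the way out.
        for cache in _registry.values():
            await cache.close()


asyncio.run(main())
```

This keeps the example file free of mid-flow `close()` calls while still releasing resources deterministically at exit.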
Thanks for your contribution. As said, I'll ping you when I have the solution for cleaning up the rest of the resources :)