
Resource Cleanup #233

Closed
Quinny opened this issue May 16, 2017 · 9 comments

@Quinny
Contributor

Quinny commented May 16, 2017

Maybe I am using it wrong, but when I tag functions with the cached decorator for a redis cache like so:

@cached(ttl=ONE_HOUR, cache=RedisCache, key="rssfeed",
        serializer=PickleSerializer(), port=6379, namespace="main",
        endpoint="192.168.1.19")
async def f(....):
  .....

I get the following errors upon exit:

2017-05-16 11:32:20,999 ERROR asyncio(0) | Task was destroyed but it is pending!
task:  wait_for=()]> cb=[Future.set_result()]>

It seems that the redis connections are not being properly handled on exit. Is there currently a way to deal with this?

I am running a Sanic app with Python 3.6, if that makes a difference.

Thanks! Awesome project by the way.

@argaen
Member

argaen commented May 17, 2017

When you say "It seems that the redis connections are not being properly handled on exit", you mean when exiting your program (hitting Ctrl-C or because your code crashes), right?

If that's the case, yeah, it's normal: I haven't exposed any option to clean up resources with aioredis once the program ends (aiomcache doesn't support that). I'll leave this open because it's something that's been annoying me for a while, but I was working on other things I considered higher priority.

Out of curiosity, is it causing any issues for you?

@argaen argaen added this to the 0.6.0 milestone May 17, 2017
@argaen argaen changed the title Decorator Resource Cleanup Resource Cleanup May 17, 2017
@Quinny
Contributor Author

Quinny commented May 17, 2017

Yes, I mean hitting Ctrl-C to kill the Sanic web server. It doesn't seem to be causing any issues, it just kinda bothers me to see a spew of error messages every time I bring down my server.

One suggestion I have that may help with this: Instead of the cache decorator taking a class, it should take an instance of a cache, and use that instance instead of constructing one. Then, that instance can be properly closed upon program exit. For example:

redis_cache = RedisCache(host=....)

@cached(cache=redis_cache)
async def f(...):
  ....

if __name__ == "__main__":
  ....
  await redis_cache.close()

EDIT: if you are cool with this I can have a go at implementing it.
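Since the report mentions a Sanic app, here is a rough, purely illustrative sketch of how such an instance could be closed on server shutdown via Sanic's after_server_stop listener. It assumes the instance-based cached decorator proposed above (which is not the current API), and the constructor arguments are just the ones from the original report:

from aiocache import RedisCache, cached
from sanic import Sanic
from sanic.response import json

app = Sanic(__name__)

# assumption: cached() accepts a cache *instance*, as proposed above
redis_cache = RedisCache(endpoint="192.168.1.19", port=6379, namespace="main")

@cached(cache=redis_cache)
async def get_feed():
    return {"items": []}   # placeholder for the real feed fetch

@app.route("/feed")
async def feed(request):
    return json(await get_feed())

@app.listener("after_server_stop")
async def close_cache(app, loop):
    # release the pool once the server has stopped serving requests
    await redis_cache.close()

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)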

@argaen
Member

argaen commented May 18, 2017

Nope, I don't like your proposal, for two reasons:

  • The client would always have to initialize the cache object as a global variable. Decorators are evaluated at import time, so the cache would also always have to be initialized at import time, and that's a code smell for this use case.
  • We would break the contract, and I don't think it is worth it for the problem it is solving.

There are three situations I can think of that should be covered:

  • When using aliases as singletons. Those can be closed on program exit, as with your proposal. This is possible because you can retrieve the cache instance anywhere in the code.
  • When using explicit cache instances, i.e. inside a class you initialize a new cache. I don't know yet how to solve this one.
  • When using decorators (without aliases). Since the cache is created and destroyed every time the function is called, it shouldn't be a problem to add an asyncio.ensure_future(cache.close()) (or something like that) inside the decorator once everything is resolved (see the sketch below).

Now, from all the situations above, it is obvious that a close function should be implemented to support this functionality, so if you want to start with that part feel free; if not, I'll give it a go whenever I have time :)
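A minimal sketch of what the third option could look like. This is not aiocache's actual implementation: it assumes a cache class exposing async get/set/close, and it skips the real decorator's key building, serialization, and ttl handling:

import asyncio
import functools

def cached(cache_cls, **cache_kwargs):
    """Hypothetical per-call cache decorator that schedules cleanup."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            cache = cache_cls(**cache_kwargs)   # fresh instance per call
            key = fn.__name__                   # simplified key building
            try:
                value = await cache.get(key)
                if value is not None:
                    return value
                value = await fn(*args, **kwargs)
                await cache.set(key, value)
                return value
            finally:
                # fire-and-forget cleanup: release the pool once the call
                # resolves; the program could still exit before this runs
                asyncio.ensure_future(cache.close())
        return wrapper
    return decorator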

@Quinny
Contributor Author

Quinny commented May 23, 2017

I have implemented the fix in: https://github.com/Quinny/aiocache

Just working on updating the tests so they are warning/error free.

@argaen
Member

argaen commented May 23, 2017

Feel free to open the MR whenever you have it ready :)

@Quinny
Contributor Author

Quinny commented May 23, 2017

Would you prefer I squash everything into a single commit?

Edit: all tests passing with no warnings or errors:
https://travis-ci.org/Quinny/aiocache/builds/235270010

Let me know if you want a single commit or if I should break it up, and then I will open the MR.

@argaen
Member

argaen commented May 23, 2017

I usually squash the commits for the MR anyway, so no worries.

argaen pushed a commit that referenced this issue May 25, 2017
Call `await cache.close()` to close a pool and its connections
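For reference, a minimal usage sketch of the new call. The constructor arguments are borrowed from the original report, so treat the exact parameters as an assumption rather than canonical:

import asyncio

from aiocache import RedisCache
from aiocache.serializers import PickleSerializer

async def main():
    cache = RedisCache(endpoint="192.168.1.19", port=6379,
                       namespace="main", serializer=PickleSerializer())
    await cache.set("rssfeed", {"items": []})
    print(await cache.get("rssfeed"))
    # release the pool and its connections before the program exits
    await cache.close()

asyncio.get_event_loop().run_until_complete(main())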
@argaen
Member

argaen commented May 26, 2017

Status update:

  • The close function will free the connections from the pools. Calling it explicitly for a cache avoids exiting the program with allocated resources.
  • [TODO] For decorators, the task of closing the pool runs asynchronously, so it could happen that the program exits before the task is done. It's an edge case and I consider it minor, but having a way to wait for those tasks to finish would be a nice-to-have (a possible approach is sketched below).
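One possible way to address the TODO, sketched here as an assumption rather than project code, is to gather whatever tasks are still pending on the loop before shutting it down (Python 3.6-era asyncio API):

import asyncio

async def drain_pending_tasks(loop):
    # Hypothetical helper: wait for every task still pending on the loop
    # (e.g. a scheduled cache.close()) except the current one, so the
    # fire-and-forget cleanup can finish before the program exits.
    pending = [t for t in asyncio.Task.all_tasks(loop)
               if t is not asyncio.Task.current_task(loop)]
    if pending:
        await asyncio.gather(*pending, return_exceptions=True)

# usage at shutdown:
# loop.run_until_complete(drain_pending_tasks(loop))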

@argaen argaen modified the milestones: 0.6.0, 0.7.0 Jun 2, 2017
@argaen
Member

argaen commented Jun 29, 2017

After aio-libs-abandoned/aioredis-py#239, it turned out that most of the warnings were noise. With the new aioredis version the amount has been reduced.

PS: close is still necessary to release created connections that may be idle.

@argaen argaen closed this as completed Jun 29, 2017