Add clarification around concurrency and parallelization #314
Comments
I'd love to know more about this story as well. We're working on moving some large projects to Graphene, but we need to have good answers before doing so.
Graphene is completely ready to handle concurrency/parallelization. Under the hood it uses the pluggable executors from graphql-core. Here's the related PR: graphql-python/graphql-core#59

The most important thing is that the application logic, the Graphene types and the resolvers remain the same independently of the executor used.

Available executors

By default, graphql will use a SyncExecutor, which resolves everything serially; graphql-core also provides a ThreadExecutor, a GeventExecutor and an AsyncioExecutor.
How can you use it with Graphene?

```python
from graphql.execution.executors.thread import ThreadExecutor

schema = graphene.Schema(query=...)
schema.execute(query, executor=ThreadExecutor())
```

Python 3.5

If you are using Python 3.5, Graphene should be able to resolve your coroutine-based resolvers as well.

PS: Sorry this is not documented yet in the website.
Wow! That's much more than I expected! That's really awesome. Thanks for implementing this and sharing it here. I'm going to be toying with the Gevent executor next week and we'll report back. If everything works fine, I'll definitely try to send a PR to put this in the docs. Hopefully, in the meantime, people interested in this topic will find this answer.

On a related note, since Gevent uses an event loop, it is probably feasible to implement something similar to DataLoader in the JS world. This is especially useful for the 1+N fan-out problem. Have you heard of anyone interested in doing this as well?
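A DataLoader-style batcher is indeed straightforward once an event loop is available. The following is a toy, stdlib-only illustration of the core trick (the names `TinyDataLoader` and `batch_get_users` are made up for this example): `load()` calls made during the same event-loop tick are queued, and a single batch call is dispatched on the next tick.

```python
import asyncio

class TinyDataLoader:
    """Toy DataLoader: keys requested in the same event-loop tick
    are collected and fetched with a single batch call."""

    def __init__(self, batch_load_fn):
        self._batch_load_fn = batch_load_fn   # async: [keys] -> [values]
        self._queue = []                      # pending (key, future) pairs
        self._scheduled = False

    def load(self, key):
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        self._queue.append((key, fut))
        if not self._scheduled:
            self._scheduled = True
            # Dispatch after the current tick, once all sibling
            # "resolvers" have had a chance to enqueue their keys.
            loop.call_soon(lambda: asyncio.ensure_future(self._dispatch()))
        return fut

    async def _dispatch(self):
        queue, self._queue = self._queue, []
        self._scheduled = False
        values = await self._batch_load_fn([k for k, _ in queue])
        for (_, fut), value in zip(queue, values):
            fut.set_result(value)

calls = []  # records each hit to the pretend backend

async def batch_get_users(ids):
    calls.append(ids)
    return ["user-%s" % i for i in ids]

async def main():
    loader = TinyDataLoader(batch_get_users)
    # Two sibling resolvers asking for different users in the same tick:
    a, b = await asyncio.gather(loader.load(1), loader.load(2))
    return a, b

result = asyncio.run(main())
print(result, calls)  # ('user-1', 'user-2') [[1, 2]]
```

Both loads resolve, yet `batch_get_users` was called exactly once with both keys — which is the behavior that tames the fan-out problem. A production version would also need per-key caching and error handling.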
Hi, I want to use the AsyncioExecutor. I'm using tornado at the moment, and it looks like if I use the AsyncioExecutor it'll run on only one event loop, so I would still need multiple tornado instances?
Hi @jnak. We're currently going through old issues that appear to have gone stale (i.e. not updated in roughly the last 6 months) to try and clean up the issue tracker. If this is still important to you, please comment and we'll re-open this. Thanks!
Hi @jkimbo, this issue is still important to me, and I think it is important to add this to the official documentation. When googling "graphene parallelization", this is the first and pretty much the only thing that comes up. Furthermore, I'd like to know how to make my code thread-safe when using the ThreadExecutor (I'm assuming the context object needs to be thread-safe?)
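On the thread-safety question: with a thread-based executor, sibling resolvers may run in different threads, so any mutable state hung off the shared context object needs its own synchronization. Below is a minimal, Graphene-free sketch (`RequestContext` and `fake_db_load` are invented for illustration) that guards a shared per-request cache with a `threading.Lock`, with two plain threads standing in for concurrently-running resolvers:

```python
import threading

class RequestContext:
    """Example context shared by resolvers. With a thread-based
    executor, sibling resolvers can touch it from different threads,
    so any mutable state needs a lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._cache = {}

    def get_or_load(self, key, loader):
        # The lock makes the check-then-load atomic, so concurrent
        # resolvers asking for the same key trigger only one load.
        with self._lock:
            if key not in self._cache:
                self._cache[key] = loader(key)
            return self._cache[key]

context = RequestContext()
loads = []  # counts backend hits

def fake_db_load(key):
    loads.append(key)
    return key.upper()

results = {}

def resolver(field):
    # Each "resolver" asks the shared context for the same record.
    results[field] = context.get_or_load("user", fake_db_load)

threads = [threading.Thread(target=resolver, args=(f,)) for f in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # both fields got "USER"; the loader ran exactly once
```

Without the lock, both threads could race past the `key not in self._cache` check and load the record twice (or worse, corrupt more complex shared state).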
Hi @syrusakbary, the link to the tests does not exist anymore. Do you have an updated link, or is this info about concurrency available in the docs? I can't find it there.
Hey guys,
I've been wondering what is the concurrency behavior / story of graphene but I don't see anything mentioned in the code base.
A few questions on my mind:
Since Python does not really shine out-of-the-box when it comes to parallelization, I think having clarity here would help contributors build performant integrations with different frameworks. It would also help users decide whether to use Graphene or not.
In my case, I could potentially use the base NodeJS implementation and call over RPC into my Python services, but we are mostly a Python shop and I don't want to introduce yet another language (esp. Node :p) unless I really need to.
If concurrency hasn't yet been given much thought but you think it should be, especially in the v1 context, I would be very happy to brainstorm here with the broader community :)
Cheers,
J