Performance benefits? #419
Interesting. Can you clarify what you're comparing to in each of these cases? A cursory glance over Janus' code (which I haven't really looked at before) shows that there'd be some pretty easy performance gains from splitting Janus up into different queue types depending on the sender/receiver thread types. That'd be harder to both use and maintain, though, so it might be a hard sell.
This library is about sync/async and async/sync communication; there is a price to pay for the extra synchronization. Use the appropriate queue for your case. (Also, without benchmarks it is hard to discuss this issue in the first place; it is not clear what you are measuring.)
I fully agree that it's very difficult to correctly bridge async <-> sync code, and that Janus attempts to solve this in the context of producers and consumers, with a queue as the medium. The producer/consumer pattern is often used for performance reasons, e.g. map-reduce, WSGI, websockets, etc. Janus is presented as a generic solution for bridging async/sync code using the producer/consumer pattern, and provides no specific use-cases, nor one of those "what this project is / what this project is not" sections. This makes it easy to think it's a good idea to use Janus in a performance-sensitive project. But my quick-and-dirty benchmarks showed that, in a typical producer/consumer context, the Janus queue is significantly slower than conventional queues, likely overshadowing the performance that can be gained from employing the producer-consumer pattern altogether. So, considering the number of users, I believe it is very important to describe this in the readme.
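The "quick-and-dirty benchmark" described above could look something like the following stdlib-only sketch, which measures the per-operation cost of the two conventional baselines a janus queue would be compared against. The iteration count `N` and the function names are assumptions for illustration, not the original benchmark code:

```python
import asyncio
import queue
import time

N = 100_000  # arbitrary iteration count for this sketch


def bench_sync_queue() -> float:
    # Baseline 1: a conventional thread-safe queue.Queue, single thread.
    q: "queue.Queue[int]" = queue.Queue()
    start = time.monotonic()
    for _ in range(N):
        q.put(42)
        q.get()
    return time.monotonic() - start


async def bench_asyncio_queue() -> float:
    # Baseline 2: a plain asyncio.Queue living on the event loop.
    q: "asyncio.Queue[int]" = asyncio.Queue()
    start = time.monotonic()
    for _ in range(N):
        q.put_nowait(42)
        await q.get()
    return time.monotonic() - start


sync_time = bench_sync_queue()
async_time = asyncio.run(bench_asyncio_queue())
print(f"queue.Queue:   {sync_time:.3f}s")
print(f"asyncio.Queue: {async_time:.3f}s")
```

Swapping either queue for `janus.Queue` (using `sync_q` or `async_q` respectively) would give the kind of comparison the comment refers to.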
The library was created exactly for the reason stated in the readme.
I do not think we ever claimed any performance gains, or anything like that. I agree that the docs could be better, and we are happy to accept any contributions there.
I created Culsans, which should be more suitable for performance-sensitive applications. I would be glad if you, @jorenham, could test the performance of my library on the same tests and tell me if it is acceptable to you. My queues also behave as fair: if you replace […].

However, I, and others, would be interested to know exactly what you are measuring, because if you are simply comparing async-aware queues with synchronous queues that block the event loop, such tests are meaningless and irrelevant to this project. But if you are comparing with naive implementations that use event loop methods, which are heavily popularized on StackOverflow (which is bad, because they have pitfalls), then such a comparison makes sense.
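For reference, the "naive implementation using event loop methods" mentioned above usually looks like the following stdlib sketch: a plain `asyncio.Queue` fed from a foreign thread via `call_soon_threadsafe`. The pitfalls hinted at include the lack of backpressure toward the producer thread (`put_nowait` cannot block) and the queue being tied to one specific event loop:

```python
import asyncio
import threading


async def main() -> "list[int]":
    loop = asyncio.get_running_loop()
    q: "asyncio.Queue[int]" = asyncio.Queue()

    def producer() -> None:
        # Runs in a plain thread; asyncio.Queue is not thread-safe, so
        # every put must be marshalled onto the loop's own thread.
        for i in range(5):
            loop.call_soon_threadsafe(q.put_nowait, i)

    t = threading.Thread(target=producer)
    t.start()
    items = [await q.get() for _ in range(5)]
    t.join()
    return items


result = asyncio.run(main())
print(result)  # [0, 1, 2, 3, 4]
```

Libraries like janus and culsans exist precisely to replace this pattern with a queue that is safe to use from both sides.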
I believe that I was simply trying to figure out if […]. It was quite a while back, and I wasn't able to find the benchmark code I used back then. But if I remember correctly, the tests were very simple and used a simple pub/sub pattern:
So as I explained in the issue here, I tested this with different producer-to-consumer ratios, and reported the differences between the queue implementations. I'm not involved in that project anymore, so hopefully you'll be able to replicate my results with this.
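A pub/sub benchmark with a varying producer-to-consumer ratio, as described above, might be sketched like this with stdlib threads and `queue.Queue`; the helper name `run_benchmark`, the ratios, and the item counts are illustrative assumptions, not the original test code:

```python
import queue
import threading
import time

_STOP = object()  # sentinel telling a consumer to exit


def run_benchmark(n_producers: int, n_consumers: int, items_each: int) -> float:
    q: "queue.Queue[object]" = queue.Queue()

    def produce() -> None:
        for i in range(items_each):
            q.put(i)

    def consume() -> None:
        while q.get() is not _STOP:
            pass  # a real benchmark would do some work per item here

    producers = [threading.Thread(target=produce) for _ in range(n_producers)]
    consumers = [threading.Thread(target=consume) for _ in range(n_consumers)]

    start = time.monotonic()
    for t in producers + consumers:
        t.start()
    for t in producers:
        t.join()
    for _ in consumers:
        q.put(_STOP)  # one stop marker per consumer
    for t in consumers:
        t.join()
    return time.monotonic() - start


# Vary the producer-to-consumer ratio, as in the tests described above.
for ratio in [(1, 1), (1, 4), (4, 1)]:
    elapsed = run_benchmark(*ratio, items_each=10_000)
    print(f"{ratio}: {elapsed:.3f}s")
```

Replacing `queue.Queue` with `janus.Queue().sync_q` (or `culsans.Queue().sync_q`) would reproduce the sync->sync comparison; the sync->async variant would move the consumers onto an event loop.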
Well, thank you for the information. From the description, it sounds like you were testing mostly non-blocking methods (whether explicitly or implicitly), since there is no active interaction between consumers and producers in this scenario. Janus is really bad in these tests, but Culsans is not, so I can assume I have solved this issue. |
Since such tests actually test the speed of the no-wait calls, here is the janus & culsans benchmark:

```python
import asyncio
import time

import janus  # import culsans as janus


async def main():
    queue = janus.Queue()
    put = queue.async_q.put_nowait
    get = queue.async_q.get_nowait

    start = time.monotonic()

    for _ in range(100000):
        put(42)
        get()

    print(time.monotonic() - start)

    queue.close()


asyncio.run(main())
```

I took CPython 3.10 to match the year of this issue and ran this benchmark on it. With Janus this test for me runs in almost 15 seconds and also slows down the closing of the event loop. With Culsans it runs in 0.15 seconds, and the event loop closes without delay. These numbers are real; I only rounded them to the second non-zero digit. And on PyPy the difference is 10 times bigger (Culsans is faster than Janus by almost a thousand times).

Update: since 4a57895, no-wait tests do not call notification methods, so in those tests Janus performance became comparable to Culsans performance. But Janus can still perform badly on blocking calls, so this issue is only half solved.
Hello, I'm a bit confused about whether to stay with Janus or switch to Culsans. I'm building an application that logs network traffic. The log parsing part is single-threaded synchronous code, and the writing to the database is async. I want to optimise the application to be able to process thousands of log entries per second.

So I'm not really sure if a queuing system in Python is even a good idea, or if I should switch to an MQ like RabbitMQ (testing will show). But before I change the whole queuing system in my application, I want to make sure that I really need it.

So this is how the app works: the sync thread writes data to the queue as fast as it can:
The async thread(s) read the data from the queue and write it to the DB as fast as they can. So for this type of use case, does Culsans help with the performance?
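The shape of the app described above can be sketched with the stdlib alone: a sync parser thread feeding a thread-safe queue, and an async writer pulling from it without blocking the event loop. The names (`parse_logs`, `write_to_db`), the entry count, and the `None` sentinel are assumptions for illustration; the commented-out DB call stands in for the real async write:

```python
import asyncio
import queue
import threading

log_queue: "queue.Queue[str | None]" = queue.Queue()


def parse_logs() -> None:
    # Sync side: the single-threaded parser pushes entries as fast as it can.
    for i in range(1000):
        log_queue.put(f"log entry {i}")
    log_queue.put(None)  # sentinel: no more entries


async def write_to_db() -> int:
    # Async side: delegate the blocking get() to a worker thread so the
    # event loop stays responsive while waiting for entries.
    written = 0
    while True:
        entry = await asyncio.to_thread(log_queue.get)
        if entry is None:
            break
        # ... await db.execute(INSERT, entry) would go here ...
        written += 1
    return written


async def main() -> int:
    threading.Thread(target=parse_logs).start()
    return await write_to_db()


total = asyncio.run(main())
print(total)  # 1000
```

The `asyncio.to_thread` call dispatches every `get()` through the thread pool, which is exactly the per-item overhead that janus and culsans are designed to avoid; with either library, `log_queue` becomes `queue.sync_q` on the parser side and `queue.async_q` on the writer side.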
Hello, thank you for your question. Yes, Culsans can indeed improve performance in your case. Since version 1.2.0 the performance of Janus is much improved, but it still creates new tasks to notify threads. Culsans does not create new tasks and inherits aiologic semantics, according to which the shortest path to wake up a thread/task is selected. But the speedup is likely to be small unless you are running PyPy on old hardware (as you can see in the Culsans results at the end of its README, it currently gives only a 2x speedup in a single-thread test).

I also note that Janus supports only one asynchronous thread (event loop). With Culsans you can use multiple threads, but does that make sense outside of a free-threaded mode? If you will not use the extra features of Culsans, it is fully compatible with Janus: you can switch between them just by swapping imports. Culsans depends on aiologic, which is not currently covered in tests, so you may prefer to stay on Janus as a more reliable option.

And regarding the problem you described: yes, queues are handy, but they seem to be optional. You may consider using […].
Thanks for your answer! Well, Elixir isn't really an option for me, because I've never used it, and learning a new language just for this project seems kind of unnecessary.

The main reason why I want to stick with the queue system is that the log messages pushed into the queue do not always arrive at the same rate, which means I need to somehow handle burst entries without putting too much load on the database. With the queue system, I can easily scale down the database writes if there are too many log entries in the queue. For example, if there is a big spike in the logs and 5000 log entries are pushed into the queue, I can limit the database writes to read only 500 entries from the queue per second. So the queue just acts as a buffer for spikes. Of course this introduces some latency into the writes, but for my use case it's not a big deal.

Anyway, thank you for your help!
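The throttling idea described above (cap the DB writes per time window and let the queue absorb spikes) can be sketched as follows with the stdlib. The window length, the per-window cap, and the entry count are scaled down from the 500-entries-per-second example so the sketch finishes quickly; all names are illustrative assumptions:

```python
import asyncio
import queue
import threading
import time

WINDOW = 0.2      # seconds per window (would be 1.0 in the real app)
MAX_WRITES = 100  # max DB writes per window (e.g. 500/s in the example)

buffer: "queue.Queue[str | None]" = queue.Queue()


def burst_producer() -> None:
    # Simulate a spike: many entries arrive at once.
    for i in range(250):
        buffer.put(f"entry {i}")
    buffer.put(None)  # sentinel: no more entries


async def throttled_writer() -> int:
    written = 0
    while True:
        window_start = time.monotonic()
        for _ in range(MAX_WRITES):
            entry = await asyncio.to_thread(buffer.get)
            if entry is None:
                return written
            written += 1  # stand-in for the actual DB write
        # The window's budget is spent; sleep out the remainder.
        await asyncio.sleep(max(0.0, WINDOW - (time.monotonic() - window_start)))


async def main() -> int:
    threading.Thread(target=burst_producer).start()
    return await throttled_writer()


total = asyncio.run(main())
print(total)  # 250
```

The queue absorbs the burst immediately, while the writer drains it at a bounded rate, which is exactly the latency-for-load trade-off described above.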
So I've been testing the performance a bit, varying e.g. […]

I found that janus queues are ~5x slower in sync->sync, ~9x slower in sync->async, and ~15x slower in async->async. This is pretty much consistent across all parameter sets.

This confirmed my suspicion that the performance gain of parallel computation is often less than the cost of using e.g. `threading.Lock` a lot (the GIL certainly doesn't help either).

Right now, I can imagine that many users have incorrect expectations of janus. To avoid this, you could add an example that shows how janus can outperform single-threaded asyncio by employing multiple threads. Additionally, a caveat about janus' performance would be helpful.
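The kind of example requested above (multiple threads beating a single-threaded event loop) hinges on the offloaded work blocking while releasing the GIL. Here is a minimal stdlib sketch of that effect; `blocking_io` and the timings are assumptions, and in a real janus example the results would flow back to the loop through a janus queue rather than `asyncio.gather`:

```python
import asyncio
import time


def blocking_io(delay: float) -> float:
    # Stand-in for a blocking call (file read, C extension, DB driver)
    # that releases the GIL while it waits.
    time.sleep(delay)
    return delay


async def sequential(n: int, delay: float) -> float:
    start = time.monotonic()
    for _ in range(n):
        await asyncio.to_thread(blocking_io, delay)  # one call at a time
    return time.monotonic() - start


async def threaded(n: int, delay: float) -> float:
    start = time.monotonic()
    # All n calls run concurrently on worker threads.
    await asyncio.gather(*(asyncio.to_thread(blocking_io, delay) for _ in range(n)))
    return time.monotonic() - start


seq = asyncio.run(sequential(8, 0.05))
par = asyncio.run(threaded(8, 0.05))
print(f"sequential: {seq:.2f}s  threaded: {par:.2f}s")
```

When the offloaded work is pure-Python CPU-bound code, the GIL serializes the threads and the speedup disappears, which is the caveat the comment asks to document.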