
1.x: make just() support backpressure #3496

Closed
wants to merge 2 commits

Conversation

akarnokd
Member

@akarnokd commented Nov 5, 2015

One does not simply add backpressure support to just().

Fixes

The reason is the bugs hidden by just()'s lack of backpressure support: the overwriting of a previous Producer by timeout, zip and subscribeOn. I've fixed up timeout with a proper ProducerArbiter, had to apply the bugfix from #3493 to zip (may conflict), and had to rewrite subscribeOn from scratch as an OnSubscribe. This change required that Single.subscribeOn be rewritten as well.
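To make the core idea concrete, here is a minimal sketch of what a backpressure-honoring just() producer looks like. The Subscriber interface below is a simplified stand-in, not the actual RxJava type; the one-shot delivery via compareAndSet is the general technique, not the PR's exact code.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: a just(value) source must not emit until the consumer
// requests at least one item. Interfaces are simplified stand-ins.
public class JustBackpressureSketch {
    interface Subscriber<T> { void onNext(T t); void onCompleted(); }

    // One-shot producer: emits the single value on the first positive request.
    static final class SingleProducer<T> extends AtomicBoolean {
        private final Subscriber<? super T> child;
        private final T value;
        SingleProducer(Subscriber<? super T> child, T value) {
            this.child = child;
            this.value = value;
        }
        public void request(long n) {
            if (n < 0) throw new IllegalArgumentException("n >= 0 required");
            // compareAndSet ensures the value is delivered exactly once,
            // even if request() is called concurrently.
            if (n > 0 && compareAndSet(false, true)) {
                child.onNext(value);
                child.onCompleted();
            }
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Subscriber<String> child = new Subscriber<String>() {
            public void onNext(String s) { log.append(s); }
            public void onCompleted() { log.append("|done"); }
        };
        SingleProducer<String> p = new SingleProducer<>(child, "hello");
        p.request(0);  // backpressure honored: nothing emitted yet
        p.request(1);  // first positive request triggers emission
        p.request(1);  // subsequent requests are no-ops
        System.out.println(log); // prints hello|done
    }
}
```

The bugs mentioned above arise because operators that swap in a new Producer can lose the outstanding request count tracked by a producer like this one.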

Benchmark

Let's see the benchmark comparison (i7 4790, Windows 7 x64, Java 8u66):

[Image: benchmark comparison table]

There are two ways to implement backpressure: with strong atomics or with plain field accesses. The latter exploits the high likelihood that request() will never be called concurrently, and thus saves on the atomics. As far as I can tell, there is nothing in RxJava 1.x or 2.x that would violate this assumption. However, I added an escape hatch in case of rogue requesters: set the rx.just.strong-mode system parameter to "true" and just will run with strong atomics.
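The two strategies can be sketched as follows. This is a simplified illustration, not the PR's actual classes: "strong" pays for a CAS and is safe under concurrent request() calls, while "weak" uses a plain field and relies on request() never racing with itself. The property name is taken from the PR text; everything else (Emitter, the class names) is hypothetical.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class StrongVsWeakSketch {
    interface Emitter { void emit(); }

    // Strong mode: CAS guarantees exactly-once emission even if
    // request() is invoked from multiple threads at once.
    static final class StrongProducer extends AtomicBoolean {
        private final Emitter e;
        StrongProducer(Emitter e) { this.e = e; }
        void request(long n) {
            if (n > 0 && compareAndSet(false, true)) e.emit();
        }
    }

    // Weak mode: plain field read/write saves the CAS, but is correct
    // only if request() is never called concurrently.
    static final class WeakProducer {
        private final Emitter e;
        private boolean done;
        WeakProducer(Emitter e) { this.e = e; }
        void request(long n) {
            if (n > 0 && !done) { done = true; e.emit(); }
        }
    }

    public static void main(String[] args) {
        int[] count = {0};
        Emitter e = () -> count[0]++;
        // Mirrors the escape hatch described above: a system property
        // selects the strong path (property name from the PR text).
        boolean strong = Boolean.getBoolean("rx.just.strong-mode");
        if (strong) {
            StrongProducer p = new StrongProducer(e);
            p.request(1); p.request(1);
        } else {
            WeakProducer p = new WeakProducer(e);
            p.request(1); p.request(1);
        }
        System.out.println(count[0]); // prints 1: emitted exactly once
    }
}
```

Either way the value is emitted once; the difference is only in what memory-ordering guarantees the producer pays for.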

As seen in the table, the weak version is slightly better (+3% to +10%) in some cases and slightly worse (up to -3%) in others. Note, however, that the original version has 2x-5x less overhead.

Perhaps most revealing are the simple, simpleEscape and simpleEscapeAll comparisons, both between and within versions. With the original version, the JIT converted the test into pure stack allocation, saving the overhead in the simple case; as the other tests add escapes, the JIT is forced to do regular allocations. Interestingly, with this PR the escape doesn't really matter: Subscriber.setProducer makes the JIT believe the producer escapes anyway.

In simpleEscapeAll (which should be the most restrictive for the JIT), the overhead is still 2-2.3 times bigger: this is due to the extra allocation of a Producer instance when subscribing.

As my recent blog post showed, RxJava 2.x does quite well here: about 30 Mops/s in the range-1 test (which is equivalent to simple).

Where does the overhead come from? SubscriptionList. In 1.x, the Subscriber creates a SubscriptionList whether or not it is ever needed. (I've tried several times to defer the creation of this list, without success: performance improved in some cases while worsening in others; see #3479.)

The strong/weak optimization is not applied to scalarScheduleOn. I haven't benchmarked it, but I suspect the scheduling overhead overshadows it anyway.

Conclusion

I believe the correctness of just is more important than its performance, but the increased overhead bothers me nonetheless. Given the architecture of 2.x, I'll look into ways to get rid of the mandatory SubscriptionList allocation without breaking public API classes such as Subscriber.

@akarnokd
Member Author

Rebased.

@stealthcode

Interesting. Thanks! 👍

@stealthcode

This should probably be checked out by @benjchristensen before merging.

@akarnokd
Member Author

Just to check, you are fine with the overhead, right?

@stealthcode

@akarnokd yes. I think consistent behavior is worth the hit.

@davidmoten
Collaborator

👍

@davidmoten
Collaborator

Any plans to merge this one?

@akarnokd
Member Author

We are waiting for Ben, but I see now that I also have to fix the merge conflict.

@akarnokd
Member Author

Sorry, I'll redo this entire PR, everything is messed up.

@akarnokd
Member Author

Replaced by #3614
