Improve flaky performanceOfContinuouslyCancellingGroups test #2740

Closed
OlegDokuka opened this issue Jul 13, 2021 · 3 comments · Fixed by #2777
Labels
area/doOnDiscard (This belongs to the doOnDiscard theme)
good first issue (Ideal for a new contributor, we'll help)
type/enhancement (A general enhancement)
Milestone
3.4.11
Comments

@OlegDokuka
Contributor

As of now, FluxGroupBy drops elements silently (#2352), so the related test case may be unstable (https://github.com/reactor/reactor-core/runs/3054668530?check_suite_focus=true#step:5:516).

One suggestion is to add `.doOnDiscard(Long.class, downstream::addAndGet)` so that discarded elements are folded back into the count, improving the test's stability.
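
As a rough illustration (not the actual FluxGroupByTest code), the suggestion could look like the sketch below. `downstream` is assumed to be the AtomicLong that tallies delivered elements, so the discard hook folds silently dropped items back into the same count:

```java
import java.time.Duration;
import java.util.concurrent.atomic.AtomicLong;
import reactor.core.publisher.Flux;

public class DiscardCountingSketch {
    public static void main(String[] args) {
        // Tally of every element that either reaches the subscriber or is
        // discarded upstream (e.g. by groupBy when a group is cancelled).
        AtomicLong downstream = new AtomicLong();

        Flux.range(0, 100_000)
            .map(Long::valueOf)
            .groupBy(l -> l % 2)
            .flatMap(group -> group.take(Duration.ofMillis(1)))
            // Discard hooks propagate upstream via the Context, so hooking here
            // also catches elements dropped inside groupBy (#2352).
            .doOnDiscard(Long.class, downstream::addAndGet)
            .doOnNext(downstream::addAndGet)
            .blockLast();

        System.out.println("counted: " + downstream.get());
    }
}
```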

@reactorbot added the ❓need-triage label (This issue needs triage, hasn't been looked at by a team member yet) on Jul 13, 2021
@OlegDokuka added the area/doOnDiscard (This belongs to the doOnDiscard theme) and type/enhancement (A general enhancement) labels and removed the ❓need-triage label on Jul 13, 2021
@Sage-Pierce
Contributor

It looks like that test failed due to the magnitude assertion and not the matching between upstream and downstream items. In theory, the upstream and downstream matching shouldn't be flaky since the upstream publishing and take-duration scheduling are all on a single-Worker Scheduler.

I would recommend just tweaking down the assertion magnitude, to something like 39999.

@simonbasle
Contributor

simonbasle commented Sep 10, 2021

The assertion magnitude has been tweaked down to a low 9999. I guess we could try to bump it up to 30K. I've come across a single run out of 100 today that went down to as low as 31K.

Alternatively, if we're confident the magnitude is still sufficiently asserted and the flakiness removed, we can close this one.
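
For context, the "assertion magnitude" being tuned is essentially a lower bound on how many elements the downstream counter must have seen. A minimal sketch, assuming an AssertJ-style assertion and a counter named `downstream` (neither is the test's literal code):

```java
import static org.assertj.core.api.Assertions.assertThat;

import java.util.concurrent.atomic.AtomicLong;

class MagnitudeAssertionSketch {
    // Hypothetical threshold: the 30K middle ground discussed above.
    static final long MAGNITUDE = 30_000L;

    static void assertMagnitude(AtomicLong downstream) {
        assertThat(downstream.get())
                .as("elements counted downstream (delivered + discarded)")
                .isGreaterThan(MAGNITUDE);
    }
}
```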

@Sage-Pierce
Contributor

I would feel slightly better about a higher magnitude, since the issue this catches (#2730) was exposed when asserting emission counts in the thousands. Not a hard requirement from me, however.

@simonbasle added the good first issue label (Ideal for a new contributor, we'll help) on Sep 15, 2021
@simonbasle added this to the 3.4.11 milestone on Sep 20, 2021
simonbasle added a commit that referenced this issue Sep 20, 2021
This commit tunes the magnitude configuration in the FluxGroupByTest
performanceOfContinuouslyCancellingGroups test.

Previously it was downsized a bit too low to 10K, but 100K was causing
flakiness. 30K seems to be a good middle ground.

Fixes #2740.
simonbasle added a commit that referenced this issue Sep 24, 2021
This commit tunes the magnitude configuration in the FluxGroupByTest
`performanceOfContinuouslyCancellingGroups` test.

Previously it was downsized a bit too low to 10K, but 100K was causing
flakiness. 30K seems to be a good middle ground.

Fixes #2740.