Fix unit test "caches image resources at the document/page level as expected (issue 11878)" #18403

Closed
timvandermeij opened this issue Jul 6, 2024 · 3 comments · Fixed by #18404


@timvandermeij
Contributor

This unit test failed most recently in http://54.193.163.58:8877/0936bcbf5dcdb06/output.txt on Windows, but has failed a couple of times before on Linux too. The log line is:

TEST-UNEXPECTED-FAIL | caches image resources at the document/page level as expected (issue 11878) | in firefox | Expected 893 to be less than 832.5. 

We should investigate why this happens sometimes, because as far as I have seen this is the only intermittent failure we have left in the unit tests.

@Snuffleupagus
Collaborator

This unit test was already tweaked once in PR #17663; maybe we should try changing the divisor to 3 instead?
Obviously comparing parsing/rendering times like this isn't great, but I don't know how we'd test this otherwise.

@timvandermeij
Contributor Author

Yes, I also don't have a better suggestion than that. I found that in pdf.js/test/unit/api_spec.js (lines 4210 to 4212 at commit bb3e316) we use a divisor of 2:

    // Ensure that the images were copied in the main-thread, rather
    // than being re-parsed in the worker-thread (which is slower).
    expect(statsOverall).toBeLessThan(firstStatsOverall / 2);

I don't recall having seen that one fail, so I have made the PR above to use that divisor for this test too, for consistency.
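
For illustration, here is a minimal runnable sketch of the divisor-based comparison pattern these tests rely on. This is not the actual pdf.js test: busyWork() and the iteration counts are made-up stand-ins for the real parsing/rendering work.

    // Jasmine sketch of the timing check: run the work twice and require
    // the second, cache-assisted run to be at least twice as fast.
    function busyWork(iterations) {
      let acc = 0;
      for (let i = 0; i < iterations; i++) {
        acc += Math.sqrt(i);
      }
      return acc;
    }

    describe("divisor-based timing comparison (sketch)", function () {
      it("requires the cached run to be at least twice as fast", function () {
        const firstStart = performance.now();
        busyWork(5_000_000); // stands in for the first, uncached parse
        const firstStatsOverall = performance.now() - firstStart;

        const secondStart = performance.now();
        busyWork(1_000_000); // stands in for the faster, cached parse
        const statsOverall = performance.now() - secondStart;

        // The same shape of check as in api_spec.js, with the divisor
        // of 2 that the PR above applies to the issue-11878 test too.
        expect(statsOverall).toBeLessThan(firstStatsOverall / 2);
      });
    });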

@Snuffleupagus
Collaborator

Snuffleupagus commented Jul 6, 2024

There's one more thing that we could try if these tweaks don't help; however, it'd increase the runtime of these tests quite a bit and should probably only be a last resort.
Basically, we could repeat the contents of these test cases, e.g. five times, and keep the stats for all runs. Then we'd discard the lowest/highest values, compute the average of the remaining ones, and finally compare those averages.
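
As a sketch of that idea (plain Jasmine code, not pdf.js; the five-run count comes from the comment above and all timings are made up):

    // Run the measurement several times, drop the lowest and highest
    // values, and average the rest, so a single outlier run cannot
    // flip the comparison.
    function trimmedAverage(timings) {
      const sorted = [...timings].sort((a, b) => a - b);
      const trimmed = sorted.slice(1, -1); // discard the min and the max
      return trimmed.reduce((sum, t) => sum + t, 0) / trimmed.length;
    }

    describe("averaged timing comparison (sketch)", function () {
      it("compares trimmed averages instead of single runs", function () {
        const firstRuns = [1700, 1650, 1665, 2400, 1600]; // uncached timings
        const cachedRuns = [800, 790, 810, 760, 1400]; // cached timings
        expect(trimmedAverage(cachedRuns)).toBeLessThan(
          trimmedAverage(firstRuns) / 2
        );
      });
    });

Discarding the extremes before averaging makes the comparison robust to a one-off scheduling hiccup on the test machine, at the cost of running each scenario five times.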
