Custom slicing of tests #7672
Comments
Are you actually trying to run the same number of tests on each machine, or just to distribute the execution time as uniformly as possible (i.e. avoiding one machine idling for a long time in the end)?
Jest will print the tests in descending order of expected duration, so this should achieve a somewhat uniform distribution.
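As a minimal sketch of that suggestion (the `CI_NODE_INDEX`/`CI_NODE_TOTAL` environment variables are assumptions; most CI providers expose equivalents), one can round-robin the output of `jest --listTests` and run the local slice with `--runTestsByPath`:

```js
// slice-tests.js: round-robins the files reported by `jest --listTests`
// across CI machines. CI_NODE_INDEX / CI_NODE_TOTAL are assumed env vars.
const { execFileSync } = require('child_process');

const index = Number(process.env.CI_NODE_INDEX); // 0-based machine index
const total = Number(process.env.CI_NODE_TOTAL); // total number of machines

// `--listTests --json` prints a JSON array of resolved test file paths.
const files = JSON.parse(
  execFileSync('npx', ['jest', '--listTests', '--json'], { encoding: 'utf8' })
);

// Sort for a stable order, then take every `total`-th file starting at `index`.
const slice = files.sort().filter((_, i) => i % total === index);

// Run only this machine's slice of the test files.
execFileSync('npx', ['jest', '--runTestsByPath', ...slice], { stdio: 'inherit' });
```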
I need to distribute work uniformly amongst multiple test servers.
Why would it not be an ideal solution? Is there a better way to do this?
This would only print out the test files. I want to have more granular control over the test cases themselves.
On the CI server, I'm guessing Jest would not have any historical data.
None that I could think of right now, sorry.
It won't, unless you tell it to preserve the cache. Without knowing your setup in great detail, maybe try to preserve the cache and see if round-robin distribution of test files is good enough?
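As a sketch of that cache-preservation idea: Jest keeps its per-file timing data under its cache directory, so pointing `cacheDirectory` at a path your CI provider persists between builds lets the duration-based ordering survive (the directory name below is an assumption; match it to whatever your CI caches):

```js
// jest.config.js: keep Jest's cache (which includes per-file timing data)
// in a directory the CI provider is configured to persist between builds.
module.exports = {
  cacheDirectory: '.jest-cache', // assumed path; must match your CI cache config
};
```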
Duplicate of #6270
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
I posted this question on Stack Overflow here: https://stackoverflow.com/questions/54122568/custom-slicing-of-tests-in-jest
Since no one has responded, I thought I would repost it here:
We recently migrated all our tests to Jest and have seen a significant improvement in performance. But we have so many tests that running them all on one server still takes a long time.
We decided to run the tests on multiple servers in parallel, using a custom sed command to slice the test files based on which machine the jest command is running on.
The problem is that the test files don't contain equal numbers of tests: some files have many more tests than others.
We now want to slice the tests with custom logic so that each machine runs the same number of tests.
Is there a way to provide a filter function that decides whether a test should be executed on a given machine?
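For later readers: Jest does have a `--filter` CLI option that takes a module exporting a filtering function, though it operates on test file paths rather than individual test cases. A hedged sketch of using it for deterministic sharding (the `CI_NODE_*` variables are the same assumptions as above, and the `{ filtered: [{ test }] }` return shape reflects my understanding of the option):

```js
// shard-filter.js: used as `jest --filter=./shard-filter.js`.
// The exported function receives the list of test file paths and returns
// { filtered: [{ test }] } naming the paths that should actually run.
const crypto = require('crypto');

const index = Number(process.env.CI_NODE_INDEX); // assumed env vars, as above
const total = Number(process.env.CI_NODE_TOTAL);

// Hash each path so the machine assignment is stable across runs and
// independent of file ordering.
const bucket = (path) =>
  parseInt(crypto.createHash('md5').update(path).digest('hex').slice(0, 8), 16) %
  total;

module.exports = (testPaths) => ({
  filtered: testPaths
    .filter((path) => bucket(path) === index)
    .map((test) => ({ test })),
});
```

Note this still splits by file, not by test case, so it addresses machine assignment but not the uneven-file-size problem the question raises.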