
Seed PRNG for tests #98

Closed
sebbacon opened this issue May 12, 2020 · 3 comments · Fixed by #101

Comments

@sebbacon
Contributor

Seeding the PRNG for tests will help avoid intermittent failures like this one:

[screenshot of the intermittent test failure]

@peteroupc

peteroupc commented May 13, 2020

In my opinion, unit tests should not be sensitive to the particular random generator used by the code under test. In this case, setting a seed will tie the unit test to the RNG, making it difficult to change the RNG later. Instead, the unit test could check whether values are acceptable within a given tolerance. By doing so, the unit test will no longer be tied to a particular RNG, so that the RNG can be changed without affecting the test or the application's functionality. See also what I wrote in oscar-system/Oscar.jl#100 (comment).
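
To illustrate the tolerance idea (a minimal sketch only; `sample_gaps` is a hypothetical stand-in for the code under test, not something from this repo), a test could assert a statistic of the output rather than exact values:

```python
import random
import statistics

def sample_gaps(rate, n):
    # Hypothetical stand-in for the code under test: n exponentially
    # distributed inter-event gaps with the given rate.
    return [random.expovariate(rate) for _ in range(n)]

def test_mean_gap_within_tolerance():
    rate = 2.0
    gaps = sample_gaps(rate, n=10_000)
    # The true mean of an exponential with rate 2.0 is 0.5; a generous
    # tolerance keeps the test insensitive to which RNG is used.
    assert abs(statistics.mean(gaps) - 1 / rate) < 0.05
```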

@sebbacon
Contributor Author

Thanks for your feedback! It's a fair point, but an "acceptable" tolerance can itself be hard to choose if the test is to remain deterministic.

In fact this test already incorporates that approach: it's for a random generator that creates events at an exponentially increasing frequency. The test tries to handle the indeterminacy of the operation under test by making the date range quite long (over 40 years), counting the events in each year, and then checking that the counts in the most recent years increase year on year. Its shape is roughly as sketched below.
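
(For readers without the code to hand; the generator, weights, and date range here are illustrative, not the repo's actual code.)

```python
import random
from collections import Counter

def generate_event_years(start=1980, end=2020, n=5_000):
    # Illustrative generator: events become exponentially more
    # frequent towards the end of the date range.
    years = list(range(start, end + 1))
    weights = [2 ** ((y - start) / 5) for y in years]
    return random.choices(years, weights=weights, k=n)

def test_recent_year_counts_increase():
    counts = Counter(generate_event_years())
    recent = [counts[y] for y in range(2015, 2021)]
    # The assertion under discussion: counts in the most recent years
    # should increase year on year. This holds most, but not all, of
    # the time.
    assert all(a < b for a, b in zip(recent, recent[1:]))
```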

This works about 95% of the time. But given these failures, what is the best fix? Increase the date range, or coarsen the granularity of the count? By definition, the less granular the assertion, the less useful the test.

As we should be indifferent to the RNG implementation, holding the RNG constant in order to test our own implementation seems reasonable.
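
Concretely, I had in mind something like seeding the PRNG once per test, e.g. via an autouse fixture (a sketch assuming pytest and the stdlib random module; the seed value is arbitrary):

```python
import random

import pytest

@pytest.fixture(autouse=True)
def seed_prng():
    # Hold the PRNG constant so every test run sees the same sequence,
    # making the year-count assertion deterministic.
    random.seed(42)
```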

I'm interested to know your thoughts!

@peteroupc

peteroupc commented May 13, 2020

In that case, the test can be repeated a few more times. The results can then be averaged, or the test can pass if a majority of the runs pass, or the results can be compared against the expected distribution (the exponential, in this case): whatever works best for a given test. But in all cases, it should be possible to flag suspect runs of the test. I admit that I haven't done anything exactly like this in a unit test, but it is similar to adaptive testing of random number generators.
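
A minimal sketch of the majority-of-runs idea (stdlib only; `check_once` is a hypothetical wrapper around the existing year-count assertion that returns True/False instead of raising):

```python
import random

def check_once():
    # Hypothetical stand-in for one run of the flaky assertion; here
    # it succeeds roughly 95% of the time, as reported above.
    return random.random() < 0.95

def majority_passes(check, runs=5):
    # Repeat the probabilistic check; pass on a strict majority, and
    # note suspect (failing) runs so they can be investigated.
    results = [check() for _ in range(runs)]
    if not all(results):
        print(f"note: {results.count(False)} of {runs} runs failed")
    return results.count(True) > runs // 2

def test_majority_of_runs_pass():
    assert majority_passes(check_once)
```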
