
Rampfit test intermittently failing #769

Closed
WilliamJamieson opened this issue Jul 5, 2023 · 4 comments · Fixed by #771

Comments

@WilliamJamieson
Collaborator

The fix in #725 does not quite work all of the time; on the main branch I am getting intermittent failures in the

romancal/ramp_fitting/tests/test_ramp_fit.py::test_one_group_small_buffer_fit_ols

test case. Running this test repeatedly (it is skipped in CI) reproduces the error locally for me. For example, I get the error:

E           AssertionError: 
E           Not equal to tolerance rtol=1e-06, atol=0
E           
E           Mismatched elements: 1 / 1 (100%)
E           Max absolute difference: 1.06613152e-11
E           Max relative difference: 1.06613152e-06
E            x: array(-1.000001e-05, dtype=float32)
E            y: array(-1.e-05)

I believe the root cause of this error is this test setup:

import numpy as np
from astropy import units as u

# Unseeded random values: the test data differs on every run
data = u.Quantity(
    (np.random.random(shape) * 0.5).astype(np.float32), u.DN, dtype=np.float32
)
err = u.Quantity(
    (np.random.random(shape) * 0.0001).astype(np.float32), u.DN, dtype=np.float32
)

The intermittent failure is caused by the fact that the data/err change from run to run of the tests. Instead, the random number generator should be seeded so that every test run is exactly reproducible.
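A minimal sketch of what seeding could look like here, assuming the data/err construction shown above; the seed value and array shape are arbitrary illustrations, not values taken from the actual fix:

import numpy as np
from astropy import units as u

# A fixed seed makes the "random" test data identical on every run
rng = np.random.default_rng(42)

shape = (2, 20, 20)  # placeholder shape, for illustration only
data = u.Quantity(
    (rng.random(shape) * 0.5).astype(np.float32), u.DN, dtype=np.float32
)
err = u.Quantity(
    (rng.random(shape) * 0.0001).astype(np.float32), u.DN, dtype=np.float32
)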

@WilliamJamieson
Collaborator Author

WilliamJamieson commented Jul 5, 2023

Note that this does not happen very often locally, but I can get it to reappear. I think the issue is the very tight tolerances: larger fluctuations due to randomness occasionally fall outside them (notice how tiny the differences are above).

In any case, the randomness should be seeded as it is just used as a convenient way to generate data.

@schlafly
Collaborator

schlafly commented Jul 5, 2023

I stumbled on this issue as well. This test originally checked against the one pixel that was forced to have a value of 10, but in #725 we changed it to point to a different pixel. I think the real fix for #725 is a little different: the change to stcal meant that group_time started getting used, and that value is -99999, which causes an issue here. I'll put in a fix for that soon, but I agree that seeding makes sense too.

@WilliamJamieson
Collaborator Author

Yes, I am working through all of the romancal tests so that they use the numpy random generators with an arbitrarily fixed seed.
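For illustration only (this is not necessarily how the eventual fix implements it), one common pattern is a pytest fixture that hands each test a Generator built from a fixed, arbitrary seed; the fixture name and seed below are hypothetical:

import numpy as np
import pytest

@pytest.fixture
def rng():
    # Hypothetical fixture: every test receives a Generator seeded with the same
    # arbitrary value, so the generated data is reproducible across test runs.
    return np.random.default_rng(1234)

def test_example(rng):
    # Example usage: draw reproducible values from the seeded generator
    data = (rng.random((4, 4)) * 0.5).astype(np.float32)
    assert data.shape == (4, 4)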

