Changes to jump detection #72
Conversation
Codecov Report
@@            Coverage Diff             @@
##              main      #72      +/-   ##
===========================================
- Coverage   93.25%   73.02%   -20.23%
===========================================
  Files          16       16
  Lines        1838     1809       -29
===========================================
- Hits         1714     1321      -393
- Misses        124      488      +364
Continue to review full report at Codecov.
Force-pushed from ae5726a to 88841ae
Do we know why the 1 unit test (in "CI / jwst tests with coverage") is still failing? Is that a legitimately different result?
The one unit test that is still not passing is "test_nirspec_saturated_pix", which is supposed to verify that groups flagged as saturated don't get CR flags added to them (i.e. no outputs with a DQ of 2 + 4 = 6). That part is working fine. The one difference is for pixel [2, 2], which has SCI values of:

model.data[0, :, 2, 2] = [8.25666812e+05, -1.10471914e+05, 1.95755371e+02, 1.83118457e+03,

There's a large negative jump in group 2, which apparently used to get flagged as a CR by the old routine, but now it doesn't. This could be legitimate, in that it's no longer above the rejection threshold given the fixes that have been made. My only concern is that it might instead indicate that the new routines are no longer detecting negative jumps at all.

The other thing I notice about that test is that two of the pixels, [2, 2] and [4, 4], have all groups except the first 2 flagged as saturated. That leaves only 2 good groups, and hence only 1 group-to-group difference, to work with, so how could it even detect a jump from a single difference? Apparently the old routine could? But that doesn't seem right.
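The point about a single usable difference can be illustrated with a toy two-point-difference detector. This is a minimal sketch, not the actual jwst implementation; the function name and threshold are hypothetical, and the SATURATED/JUMP_DET bit values mirror the pipeline's DQ convention.

```python
import numpy as np

SATURATED = 2  # assumed DQ bit values, mirroring jwst's dqflags
JUMP_DET = 4

def flag_jumps(ramp, groupdq, threshold=4.0):
    """Toy two-point-difference jump detector (hypothetical, not jwst's).

    Flags a group as a jump when its first difference deviates from the
    median of the differences by more than `threshold` sigma.
    """
    good = (groupdq & SATURATED) == 0
    diffs = np.diff(ramp[good])
    # With a single usable difference there is nothing to compare it
    # against, so no jump can be detected at all.
    if diffs.size < 2:
        return groupdq
    med = np.median(diffs)
    sigma = np.std(diffs)
    if sigma == 0:
        return groupdq
    ratios = np.abs(diffs - med) / sigma
    out = groupdq.copy()
    good_idx = np.flatnonzero(good)
    # difference i spans groups good_idx[i] -> good_idx[i + 1]
    for i in np.flatnonzero(ratios > threshold):
        out[good_idx[i + 1]] |= JUMP_DET
    return out

# Pixel with 5 of 7 groups saturated: only one usable difference remains,
# so the GROUPDQ array comes back unchanged, even with a huge negative jump.
ramp = np.array([8.2e5, -1.1e5, 2.0e2, 1.8e3, 2.0e3, 2.1e3, 2.2e3])
gdq = np.array([0, 0, SATURATED, SATURATED, SATURATED, SATURATED, SATURATED])
print(flag_jumps(ramp, gdq))
```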
Force-pushed from 29a15f3 to 7e5f21c
In this test, a test dataset is constructed and four pixels, ((1, 1), (2, 2), (3, 3), (4, 4)), are given test scenarios to check that flagging around saturated pixels is done correctly. The tests for pixels (1, 1) and (3, 3) did not have different results: while the internal calculations changed based on changes to the code, the resulting pixels flagged as jumps ended up being the same.

For the test case of pixel (2, 2), 5 of the 7 groups in the test data are set to be saturated, meaning there is only one non-saturated difference, so no jumps should be flagged. Previously, the 3rd group was incorrectly flagged as a jump when NO groups should have been flagged, so the truth value of this test has changed. I kept this test but changed the truth to assert that NO groups are flagged in this case.

Pixel (4, 4) was the same case as (2, 2), where 5 out of 7 groups were saturated, so no jumps should be flagged but they were being flagged. I removed this redundant example.
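The revised truth for pixel (2, 2) could then look something like the sketch below. The array contents and assertion shape are an assumption about the test's structure, not the actual test_nirspec_saturated_pix code; the SATURATED/JUMP_DET values mirror the pipeline's DQ bits.

```python
import numpy as np

SATURATED = 2  # assumed DQ bit values
JUMP_DET = 4

# Hypothetical expected output GROUPDQ for pixel (2, 2): groups 2-6 are
# saturated, leaving one clean difference, so no group carries JUMP_DET.
out_gdq = np.array([0, 0, SATURATED, SATURATED, SATURATED,
                    SATURATED, SATURATED])

# No group may be flagged as both saturated and a jump (2 + 4 = 6) ...
assert not np.any(out_gdq == (SATURATED | JUMP_DET))
# ... and, with a single usable difference, no group is a jump at all.
assert not np.any(out_gdq & JUMP_DET)
```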
Force-pushed from 13568d7 to ece95c2
After a ridiculous amount of reviewing and testing, I hereby approve this PR.
LGTM?
This PR takes the place of #71 with a more final version on a new branch. I will update this description later.
Several fixes to jump detection including: