
Changes to jump detection #72

Merged

merged 1 commit into main from clare_jump_refactor on Jan 7, 2022

Conversation

cshanahan1
Collaborator

This PR supersedes #71 with a more finalized version on a new branch. I will update this description later.

Several fixes to jump detection, including:

  1. fixes to the computation of the median used to derive the jump thresholds
  2. a fix for an error that caused the largest difference ratio to be skipped when comparing against the threshold used to classify jumps, which resulted in many jumps being missed (see the sketch after this list)
  3. restructured code and new comments describing each step
  4. a fix for an issue with MIRI 3- and 4-group integrations (link PR / issue here)
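
For context on items 1 and 2, here is a rough sketch of a median-based two-point difference jump search. The function name, call signature, and simplified noise model are assumptions made for illustration; this is not the stcal implementation.

```python
import numpy as np

def flag_jumps_two_point(ramp, read_noise, rejection_threshold=4.0):
    """Illustrative two-point difference jump search for one pixel's ramp.

    ``ramp`` holds the group values for a single pixel and ``read_noise`` its
    read noise; both names and the crude noise model are assumptions for this
    sketch, not the stcal code.
    """
    diffs = np.diff(ramp)                      # differences between consecutive groups
    flagged = np.zeros(ramp.shape, dtype=bool)
    usable = np.ones(diffs.shape, dtype=bool)

    # Iteratively flag the single largest outlier, then re-derive the median
    # from the remaining differences and look again.
    while usable.sum() > 1:
        median_diff = np.median(diffs[usable])
        sigma = np.sqrt(np.abs(median_diff) + read_noise ** 2)  # crude Poisson + read noise
        ratios = np.abs(diffs - median_diff) / sigma
        ratios[~usable] = 0.0
        worst = int(np.argmax(ratios))         # the largest ratio must be tested first
        if ratios[worst] < rejection_threshold:
            break
        flagged[worst + 1] = True              # the jump lands on the later group of the pair
        usable[worst] = False
    return flagged
```

Testing the largest ratio first is what item 2 refers to: if that comparison is skipped, the most significant outlier never gets flagged.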

@hbushouse hbushouse added this to the 0.6 milestone Jan 6, 2022
@hbushouse hbushouse added the jump label Jan 6, 2022
@codecov

codecov bot commented Jan 6, 2022

Codecov Report

Merging #72 (13568d7) into main (12e542b) will decrease coverage by 20.22%.
The diff coverage is 95.45%.

❗ Current head 13568d7 differs from pull request most recent head ece95c2. Consider uploading reports for the commit ece95c2 to get more accurate results
Impacted file tree graph

@@             Coverage Diff             @@
##             main      #72       +/-   ##
===========================================
- Coverage   93.25%   73.02%   -20.23%     
===========================================
  Files          16       16               
  Lines        1838     1809       -29     
===========================================
- Hits         1714     1321      -393     
- Misses        124      488      +364     
Impacted Files Coverage Δ
src/stcal/jump/twopoint_difference.py 84.94% <95.45%> (-15.06%) ⬇️
src/stcal/dynamicdq.py 27.27% <0.00%> (-72.73%) ⬇️
src/stcal/jump/constants.py 28.57% <0.00%> (-71.43%) ⬇️
src/stcal/jump/jump.py 15.00% <0.00%> (-53.34%) ⬇️
src/stcal/ramp_fitting/ramp_fit.py 51.61% <0.00%> (-41.94%) ⬇️
src/stcal/ramp_fitting/ols_fit.py 71.02% <0.00%> (-22.74%) ⬇️
src/stcal/ramp_fitting/utils.py 78.64% <0.00%> (-19.67%) ⬇️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@cshanahan1 cshanahan1 force-pushed the clare_jump_refactor branch from ae5726a to 88841ae on January 6, 2022 21:02
@hbushouse
Collaborator

Do we know why the 1 unit test (in "CI / jwst tests with coverage") is still failing? Is that a legitimately different result?

@hbushouse
Collaborator

hbushouse commented Jan 7, 2022

The one unit test that is still not passing is "test_nirspec_saturated_pix", which is supposed to verify that groups flagged as saturated don't get CR flags added to them (i.e. no outputs with DQ of 2+4=6). That part is working fine. The one difference is for pixel [2,2], which has SCI values of:

model.data[0, :, 2, 2] = [8.25666812e+05, -1.10471914e+05, 1.95755371e+02, 1.83118457e+03,
                          1.72250879e+03, 1.81733496e+03, 1.65188281e+03]

There's a large negative jump in group 2, which apparently used to get flagged as a CR by the old routine, but now it isn't anymore. This could be legit, in that it's no longer above the rejection threshold, given the fixes that've been made. My only concern is that it might also be indicating that something in the new routines is no longer detecting any negative jumps.
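
For illustration only, the group-to-group differences computed directly from the SCI values quoted above make the size of that negative jump explicit (this is just np.diff of those values, not pipeline output):

```python
import numpy as np

# SCI values for pixel [2, 2] as quoted above
sci = np.array([8.25666812e+05, -1.10471914e+05, 1.95755371e+02, 1.83118457e+03,
                1.72250879e+03, 1.81733496e+03, 1.65188281e+03])

diffs = np.diff(sci)   # group-to-group differences
# roughly: [-9.36e+05, 1.11e+05, 1.64e+03, -1.09e+02, 9.48e+01, -1.65e+02]
# The first difference (group 1 -> group 2) is a huge negative outlier,
# which is the negative jump discussed here.
```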

Of course the other thing I notice about that test is that two of the pixels - [2,2] and [4,4] - have all groups except the first 2 flagged as saturated. That leaves only 2 good groups, and hence only 1 difference, to work with, so how could it even detect a jump from a single difference? Apparently the old routine could? But that doesn't seem right.

@cshanahan1
Collaborator Author

Do we know why the 1 unit test (in "CI / jwst tests with coverage") is still failing? Is that a legitimately different result?

In this test, a test dataset is constructed and four pixels - (1, 1), (2, 2), (3, 3), and (4, 4) - are given test scenarios to check that flagging around saturated pixels is done correctly.

The tests for pixels (1, 1) and (3, 3) did not have different results - while the internal calculations changed based on changes to the code, the resulting pixels flagged as jumps ended up being the same.

For the test case of pixel (2, 2), 5 of the 7 groups in the test data are set to be saturated, meaning there is only one non-saturated difference, so no jumps should be flagged. Previously, the routine was incorrectly flagging the 3rd group as a jump when NO groups should have been flagged, so the truth value of this test has changed. I kept this test but changed the truth to assert that NO groups are flagged in this case.

Pixel (4, 4) was the same case as (2, 2): 5 out of 7 groups were saturated, so no jumps should be flagged, but they were being flagged. I removed this redundant example.
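
Using the illustrative flag_jumps_two_point helper sketched earlier in this thread (not the actual stcal or jwst test API), the expectation for this scenario reads roughly as:

```python
import numpy as np

# Pixel (2, 2) scenario: 5 of the 7 groups are saturated, so only the first
# two groups (a single difference) are usable; no jump flags should result.
good_groups = np.array([100.0, 112.0])   # hypothetical values for the two good groups
flags = flag_jumps_two_point(good_groups, read_noise=10.0)
assert not flags.any()                    # a single difference cannot define a jump
```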

@cshanahan1 cshanahan1 force-pushed the clare_jump_refactor branch from 13568d7 to ece95c2 on January 7, 2022 17:54
@hbushouse hbushouse left a comment
Collaborator


After a ridiculous amount of reviewing and testing, I hereby approve this PR.

@tapastro tapastro left a comment
Collaborator


LGTM?
