In recent runs, our implementation of CSIRO_depth (which flags any XBT level shallower than 3.6 m) hasn't been making it into final QC decisions due to its high false positive rate; a run on 20k QuOTA profiles, for example, produced a TPR / FPR of 49.3% / 43.3%. However, I think we can get a lot more mileage out of this test in two ways:
About 10% of the final false negative rate of this run corresponds to XBT profiles with a single near-surface measurement. If XBTs making measurements shallower than 3.6 m are not to be trusted, we could limit this test to flagging these, at no cost to our false positive rate (actually, I see two profiles that would contribute to our false positive rate in this case, but that seems wrong - surely those should have been flagged if XBTs definitely don't work above 3.6 m).
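The restricted version of the test could be sketched as follows (a sketch only; the function and argument names are hypothetical, not the actual CSIRO_depth implementation):

```python
import numpy as np

XBT_MIN_DEPTH = 3.6  # metres; XBT levels above this depth are considered unreliable

def csiro_depth_restricted(depths, probe_is_xbt):
    """Hypothetical restricted variant: flag a profile only when it is
    an XBT consisting of a single level shallower than 3.6 m."""
    depths = np.asarray(depths, dtype=float)
    if not probe_is_xbt:
        # Test only applies to XBTs; flag nothing otherwise.
        return np.zeros(depths.shape, dtype=bool)
    if depths.size == 1 and depths[0] < XBT_MIN_DEPTH:
        # Single near-surface measurement: flag it.
        return np.ones(1, dtype=bool)
    # Multi-level profiles are left to the other QC tests.
    return np.zeros(depths.shape, dtype=bool)
```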
Perhaps we should treat XBT measurements shallower than 3.6m like we do wire-break levels: so trivially easy to identify that we should just mask them out before getting down to the more nuanced QC decisions.
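Treating those levels like wire-break levels could look something like this (a minimal sketch, assuming the mask is applied before the other QC tests run; names are illustrative):

```python
import numpy as np

XBT_MIN_DEPTH = 3.6  # metres

def mask_shallow_xbt_levels(depths, temperatures, probe_is_xbt):
    """Return copies of the level arrays with XBT levels shallower than
    3.6 m dropped, analogous to masking out wire-break levels."""
    depths = np.asarray(depths, dtype=float)
    temperatures = np.asarray(temperatures, dtype=float)
    if not probe_is_xbt:
        # Non-XBT probes keep all of their levels.
        return depths, temperatures
    keep = depths >= XBT_MIN_DEPTH
    return depths[keep], temperatures[keep]
```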
From memory, the QuOTA dataset replaces all XBT levels shallower than 3.6 m with fill values (@BecCowley can confirm), so that dataset won't be useful for deciding how good the test is. It's a standard thing to do, though, so I think we should go with your second option and apply the test to mask them out. The analyse_results.py code does this, based on the definitions in qctest_groups.csv.
I'm surprised that there are any XBTs at all with levels <3.6 m, though.
I think that in the QuOTA dataset, most of the XBT data does have the upper 3.6 m replaced, especially if it was QCd by CSIRO. There may be some profiles that didn't fail any auto QC tests and were never looked at by a human that still have the upper 3.6 m intact.
Having said that, you can retrieve this data, assuming you are pulling it from our original files. From memory, I think Tim converted it to WOD ASCII? If that's the case, the following may not apply.
In our Mquest netcdf files:
It is all in the netcdf files, kept in the history records. Look in the "Previous_val" variable - these are the temperatures that have been replaced. The "Aux_ID" field tells you the depths that the temps were replaced at and the "Act_code" indicates the flag. You want to target the 'CS' flags.
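Pulling out the replaced values might look like this (a sketch only: the selection logic is separated from the file access, since the exact shapes and encodings of "Act_code", "Aux_ID" and "Previous_val" in the Mquest files may differ from what is assumed here):

```python
def cs_replaced_levels(act_codes, aux_ids, previous_vals):
    """Select (depth, previous temperature) pairs whose history action
    code is 'CS', i.e. the replaced near-surface XBT temperatures."""
    pairs = []
    for code, depth, temp in zip(act_codes, aux_ids, previous_vals):
        if str(code).strip() == "CS":
            pairs.append((float(depth), float(temp)))
    return pairs

# Assumed access pattern for the Mquest netCDF history records:
# from netCDF4 import Dataset
# with Dataset("profile.nc") as nc:
#     pairs = cs_replaced_levels(nc.variables["Act_code"][:],
#                                nc.variables["Aux_ID"][:],
#                                nc.variables["Previous_val"][:])
```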
@s-good @BecCowley let me know any thoughts.