Although users are explicitly instructed not to mix up annotations, it seems prudent to have RATs at least produce some warnings. This could be done either at the end of the run, by flagging fields left as NA, or at the beginning of the run, as a thorough pre-check of the ID sets across the inputs that aborts the run on any mismatch, potentially with a force option to override the abort.
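The pre-check option can be sketched roughly as follows. This is an illustrative Python sketch, not RATs' actual R implementation; the function name `check_transcript_ids` and the `force` parameter are hypothetical.

```python
# Illustrative sketch of a pre-run consistency check of transcript IDs
# between the annotation and the quantifications (not RATs' actual code).
def check_transcript_ids(annotation_ids, quant_ids, force=False):
    """Raise on any ID mismatch, unless `force` is set, in which case
    only warn and return the usable intersection of IDs."""
    annot = set(annotation_ids)
    quant = set(quant_ids)
    only_in_quant = quant - annot   # quantified but not annotated
    only_in_annot = annot - quant   # annotated but not quantified
    if only_in_quant or only_in_annot:
        msg = (f"{len(only_in_quant)} quantified IDs absent from annotation; "
               f"{len(only_in_annot)} annotated IDs absent from quantifications.")
        if not force:
            raise ValueError("Inconsistent transcript IDs: " + msg)
        print("WARNING:", msg)  # override requested: warn and continue
    return annot & quant  # IDs safe to use downstream
```

With the override, a run on partially mismatched inputs proceeds on the common IDs only, e.g. `check_transcript_ids(["t1", "t2", "t3"], ["t1", "t2"], force=True)` returns `{"t1", "t2"}`.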
I've decided it is best to encourage good practices instead of trying to clean up after bad ones.
Therefore, any mismatch of transcript IDs between the annotation and the quantifications will now result in aborting the run.
I am still unsure whether to allow an explicit override. It would certainly simplify updating the unit tests, as the testing dataset deliberately contains cases with inconsistent annotation, which would cause the new abort condition to make all the tests fail.
- Explicitly check the transcript IDs in the annotation and quantifications for inconsistencies, and abort if any are found.
- Add abort override option for special use cases.
- Update docs and tests.
- Tidy up the input check tests, break them into smaller tests.
- Get rid of obsolete and almost certainly broken functions for data simulation.

In response to #49.