
Trained model tends to give too high a modified probability for each candidate base #63

Open
weir12 opened this issue Dec 1, 2019 · 0 comments


weir12 commented Dec 1, 2019

Hi,
I've trained a model. To evaluate its performance, I used held-out experimental samples and control samples as test sets.
I find that the model tends to assign too high a modified probability to each candidate base.
Worse, in the control sample (which contains no modified bases at all), a large number of bases were incorrectly called as modified.
If this is a hyperparameter problem with the model architecture, I would appreciate any advice.
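To quantify how badly the model over-calls on the control sample, one simple diagnostic is to sweep the probability threshold and measure the false-positive rate at each cutoff (every positive call on an unmodified control is by definition a false positive). Below is a minimal sketch of that check; it assumes the per-site modified probabilities have been exported to a NumPy array, and the file name `control_probs.npy` is hypothetical:

```python
import numpy as np

def false_positive_rate(control_probs: np.ndarray, threshold: float) -> float:
    """Fraction of control-sample sites called modified at a given threshold.

    Since the control sample is assumed to contain no truly modified bases,
    every site whose predicted probability reaches the threshold counts as
    a false positive.
    """
    return float(np.mean(control_probs >= threshold))

# Hypothetical dump of the model's per-site modified probabilities
control_probs = np.load("control_probs.npy")

# Sweep thresholds to see how severe the over-calling is, and whether a
# stricter cutoff helps at all until the model itself is recalibrated.
for t in (0.5, 0.7, 0.9, 0.95, 0.99):
    print(f"threshold {t:.2f}: FPR = {false_positive_rate(control_probs, t):.4f}")
```

If the false-positive rate stays high even at very strict thresholds, that points to a miscalibrated or biased model (e.g., class imbalance in the training data) rather than something a threshold tweak can fix.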

@weir12 weir12 changed the title Trained models tend to give too high a modified probability value for each candidate base Trained model tend to give too high a modified probability for each candidate base Dec 2, 2019