add per-image overlap (pimo) #1220
Conversation
Other features (like choosing and plotting representative cases in the distribution of per-image metrics) are coming up, as well as unit tests for this.
About the binclf speed, I will also test with … It could further speed things up, but it adds a new dependency; is that OK? (a priori)
Please ignore this one; it will disappear at some point as I clean up my code.
@samet-akcay could you launch the tests here, please?
Looks like there is an issue with the example notebook I added because it's saved with my local kernel name. What's an easy solution for this?
Maybe you could show some plots of the PImO curves in this notebook. You could use these plots to explain some of the limitations of the set metrics and why it is important to also look at the metrics on the image level. This would help others to see the added value of your metric.
I forgot to mention, but I'm adding more things in other branches to make smaller PRs, and I'm incrementally extending this notebook (including the things you mentioned).
* New printing stuff
* Remove dead code + address codacy issues
* Refactor try/except + log to comet/wandb during runs
* pre-commit error
* third-party configuration

---------

Co-authored-by: Ashwin Vaidya <[email protected]>
* ignore mask check when dataset has only normal samples
* update changelog
Revert "🚚 Refactor Benchmarking Script (openvinotoolkit#1216)" This reverts commit 784767f.
* Fix metadata path * Update benchmarking notebook
…:jpcbertoldo/anomalib into jpcbertoldo/gsoc23-segementation-metrics
I did a refactor and moved it to another branch. New PR: #1247
Description
Per-Image Overlap (PImO)
Adding the metric Per-Image Overlap (PImO) and its area under the curve (AUC).
At each given threshold, the overlap (in-image TPR) is computed for each image individually, while the x-axis (FPR) is shared across images.
I chose the vocabulary 'overlap' instead of TPR because it echoes the Per-Region Overlap (PRO).
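To make the idea concrete, here is a minimal NumPy sketch: thresholds are shared by all images, and the overlap (in-image TPR) is computed per image; the AUC is then taken over the shared FPR axis. This is not the PR's actual implementation; the function names, array shapes, and the NaN convention for normal images are illustrative assumptions.

```python
import numpy as np

def per_image_overlap(anomaly_maps: np.ndarray, masks: np.ndarray,
                      thresholds: np.ndarray) -> np.ndarray:
    """Overlap (in-image TPR) per image at each shared threshold.

    anomaly_maps: (N, H, W) anomaly scores.
    masks:        (N, H, W) binary ground-truth masks (1 = anomalous pixel).
    thresholds:   (T,) thresholds shared by all images.
    Returns (N, T); rows for normal images (empty mask) are NaN.
    """
    n_images, n_thresh = anomaly_maps.shape[0], thresholds.shape[0]
    overlap = np.full((n_images, n_thresh), np.nan)
    for i in range(n_images):
        gt = masks[i].astype(bool)
        if not gt.any():
            continue  # normal image: overlap is undefined
        scores_pos = anomaly_maps[i][gt]  # scores of ground-truth-anomalous pixels
        # fraction of anomalous pixels scored >= each threshold
        overlap[i] = (scores_pos[None, :] >= thresholds[:, None]).mean(axis=1)
    return overlap

def pimo_auc(overlap_row: np.ndarray, shared_fpr: np.ndarray) -> float:
    """Area under one image's overlap-vs-shared-FPR curve (trapezoidal rule)."""
    order = np.argsort(shared_fpr)  # FPR decreases as the threshold increases
    x, y = shared_fpr[order], overlap_row[order]
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)
```

Each row of the returned matrix is one image's curve; plotting all rows against the shared FPR gives the per-image view that set metrics hide.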
binclf curve
The per-image overlap (TPR) needs the binary classification matrix curve (binclf curve), which has to be computed per image, while all images must share the same thresholds (since the x-axis (FPR) is shared).
You'll find a weird-looking function using itertools to compute those binclf curves, which I found to be much faster on CPU than the two alternatives in torchmetrics (the latter is mentioned in the former's doc). The image below shows some WIP comparisons; a proper version is on the way.
However, there is a `tensor.cpu().numpy()` call in there, and I only tested with tensors already on the CPU. Perhaps, with tensors on the GPU, that transfer might change which algorithm is fastest?
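For reference, a binclf curve over shared thresholds can be sketched in plain NumPy as below. This is not the itertools-based function from this PR; it is a hedged alternative sketch in which one sort plus binary searches (`np.searchsorted`) replaces a full per-threshold comparison over all pixels.

```python
import numpy as np

def binclf_curve_per_image(scores: np.ndarray, gt: np.ndarray,
                           thresholds: np.ndarray) -> np.ndarray:
    """Confusion counts per threshold for one image.

    scores:     (H, W) anomaly scores.
    gt:         (H, W) binary ground-truth mask.
    thresholds: (T,) thresholds shared by all images.
    Returns (T, 2, 2): [[TN, FP], [FN, TP]] per threshold,
    with a pixel predicted anomalous when score >= threshold.
    """
    gt = gt.astype(bool).ravel()
    scores = scores.ravel()
    n_pos = int(gt.sum())
    n_neg = gt.size - n_pos
    # Sort each class once; counts per threshold then come from binary search.
    pos_sorted = np.sort(scores[gt])
    neg_sorted = np.sort(scores[~gt])
    # number of scores >= t  ==  n - searchsorted(sorted_scores, t, side="left")
    tp = n_pos - np.searchsorted(pos_sorted, thresholds, side="left")
    fp = n_neg - np.searchsorted(neg_sorted, thresholds, side="left")
    fn = n_pos - tp
    tn = n_neg - fp
    return np.stack([np.stack([tn, fp], axis=-1),
                     np.stack([fn, tp], axis=-1)], axis=-2)
```

Whether this beats a vectorized all-pairs comparison depends on the pixel count and number of thresholds, and (as noted above) on whether the data starts on the GPU.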
Changes
Checklist