
Query Regarding CellViT Weight Compatibility for PanNuke and MoNuSeg Patches #77

MarkonH opened this issue Dec 6, 2024 · 2 comments



MarkonH commented Dec 6, 2024

I hope you're doing well.

I have a question regarding the use of the provided CellViT weights. Are these weights compatible for evaluation on PanNuke or MoNuSeg patches directly, or are they strictly designed for use with WSI data?

If the weights cannot be used for PanNuke or MoNuSeg patches, would it be possible to provide the corresponding weights and configuration files for patch inference on these datasets?
[Screenshot 2024-12-06 16:47]

Thank you for your time and assistance!

Best regards.

@FabianHoerst (Collaborator) commented:

Hi,

They cannot be used for PanNuke: these checkpoints were not trained on the official splits and are intended for inference only. If you need checkpoints for each split, please look into the logs folder, which contains the configuration files to train the networks on your own.

Best, Fabian


MarkonH commented Dec 13, 2024

Thank you for the response and the insights. Following your suggestion, I trained with the configurations provided in the logs folder. However, I'm still running into an issue with my dataset, which contains only two classes of nuclei. After converting it to the PanNuke format, the model predicts one class well, while the other class is output as nan: the classification results look fine when visualized, but the segmentation map for that class is nearly blank.
[Screenshot 2024-12-13 17:39]
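For context, this is roughly how I pad my two classes into the five-class PanNuke mask layout (a sketch with illustrative names and slot assignments, not code from the repo; I assume the usual (H, W, 6) mask with channels 0-4 for classes and channel 5 for background):

```python
import numpy as np

def to_pannuke(inst_map: np.ndarray, class_map: np.ndarray) -> np.ndarray:
    """Pad a two-class instance/class mask pair into a PanNuke-style
    (H, W, 6) mask: channels 0-4 are nuclei classes, channel 5 is background.

    inst_map:  per-pixel instance ids (0 = no nucleus)
    class_map: per-pixel class labels (1 or 2 in my dataset)
    """
    h, w = inst_map.shape
    pannuke = np.zeros((h, w, 6), dtype=inst_map.dtype)
    # My two classes go into the first two channels; channels 2-4 stay empty.
    for my_cls, slot in [(1, 0), (2, 1)]:
        sel = class_map == my_cls
        pannuke[..., slot][sel] = inst_map[sel]  # keep instance ids per channel
    # Background channel: 1 wherever there is no nucleus instance.
    pannuke[..., 5] = (inst_map == 0).astype(inst_map.dtype)
    return pannuke
```

The three empty channels are what I suspect interact badly with the per-class weighting below.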
I noticed that the PanNuke dataset code includes per-class weights for the five classes. I tried modifying these weights to (4, 5) and (1, 10), since the second class has more instances than the first, but the less frequent class still comes out as nan.
I also experimented with setting sampling_strategy to random, which effectively disables the weight adjustments, but the problem persists: the rarer class is still output as nan.

binary_weight_factors = np.array([4191, 4132, 6140, 232, 1528])
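For reference, this is roughly what I expected the weighting to do for my padded two-class setup. The zero-count guard is my own addition (illustrative, not from the repo): without it, an empty padding class would produce a division by zero and could propagate inf/nan, which is my current suspicion for the nan outputs.

```python
import numpy as np

# My two classes padded into five slots; the zeros are placeholder
# classes that never occur in my data (counts are illustrative).
counts = np.array([4191, 6140, 0, 0, 0], dtype=float)

# Inverse-frequency weights, with empty classes forced to weight 0
# instead of dividing by zero and yielding inf/nan.
weights = np.where(counts > 0, counts.sum() / np.maximum(counts, 1.0), 0.0)
```

With this guard, the rarer of my two real classes gets the larger weight and the three unused slots contribute nothing.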

Have you encountered this issue before, or do you have any suggestions on how to modify the setup to resolve this? Any advice would be much appreciated!
