cannot find spatial branch in model code #10
Comments
Hi @inderpreet-adapdix, Fusion_block (the spatial branch) is not used in the classification task, so we've deleted this class. It is referenced in the segmentation code.
Thanks for the reply @wwqq. I am actually using a CPU and was able to run inference on it for the classification model, but I am not sure how to do the same for the segmentation model. Can you help me with it? I want to evaluate seaformer-small for segmentation on CPU.
The conversion script is uploaded: https://github.com/fudan-zvg/SeaFormer/blob/main/convert2onnx.py
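For CPU-only evaluation, one option is to run the exported ONNX file with onnxruntime. The sketch below is only an assumption about the exported graph (a single NCHW float32 input); the file name is hypothetical, and the real input name/shape should be read from the session rather than hard-coded.

```python
# Hedged sketch: running an exported SeaFormer ONNX model on CPU with onnxruntime.
# "seaformer_small_seg.onnx" is a placeholder; use whatever convert2onnx.py produced.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("seaformer_small_seg.onnx",
                            providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]                      # query the actual input name/shape
dummy = np.random.rand(1, 3, 512, 512).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})     # list of output arrays
print(inp.name, inp.shape, outputs[0].shape)
```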
Hi @wwqq, I followed the link you mentioned. I am running inference for the seaformer-small model in a Jupyter notebook, using SeaFormer-S_512x512_4x8_160k: I initialized the model with the config in seaformer_small.py and loaded the weights. I am passing it an image from the ADE20K dataset after resizing it to (512, 512), and the result I get has shape (64, 64). I applied bilinear interpolation to bring it back to (512, 512) and mapped each class to an RGB color, but the output image does not look good; the image does not seem to be getting segmented. I have also updated Conv2d_BN to use nn.BatchNorm2d instead of build_norm_layer. Thanks
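For reference, a minimal sketch of the upsampling step described above, assuming the model returns raw logits of shape (1, num_classes, 64, 64) for a 512x512 input; `logits` here is a stand-in, and if the output is already a (64, 64) class map, nearest-neighbor interpolation should be used instead.

```python
# Hedged sketch: upsample low-resolution logits back to the input size, then take argmax.
import torch
import torch.nn.functional as F

logits = torch.randn(1, 150, 64, 64)            # stand-in for the model output
upsampled = F.interpolate(logits, size=(512, 512),
                          mode="bilinear",
                          align_corners=False)  # should match the training config
pred = upsampled.argmax(dim=1)[0]               # (512, 512) class indices
```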
Hi @inderpreetsingh01,
2. Label_colors needs to generate 150 classes, not 125. You can import the palette from the checkpoint directly.
3. Same here, nc=150. Also, rgb_image should be built with a transpose, not a reshape.
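A sketch of the color-mapping step with the two fixes above: a 150-entry palette (ADE20K has 150 classes) and an RGB image built directly from the class map. The random palette is a placeholder; as the reply notes, the actual palette can be taken from the checkpoint. Indexing the palette with the class map sidesteps the transpose-vs-reshape issue entirely.

```python
# Hedged sketch: map a (H, W) class-index map to an RGB image with a 150-class palette.
import numpy as np

num_classes = 150                                # ADE20K, not 125
palette = np.random.randint(0, 256, size=(num_classes, 3), dtype=np.uint8)  # placeholder

pred_np = pred.cpu().numpy()                     # (H, W) class indices from the step above
rgb_image = palette[pred_np]                     # (H, W, 3) uint8, ready to display/save
```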
Thanks a lot @wwqq for the clear explanation. I have one general doubt: why does the output seem a bit noisy? Is there any way to improve it for the same checkpoint?
Hi @inderpreetsingh01, thanks for your interest in our work. |
Hi @speedinghzl, thanks for the reply. I have now normalized the image using mean=[123.675, 116.28, 103.53] and std=[58.395, 57.12, 57.375]. I was also using the model in training mode; after switching it to eval mode, I got the outputs below. They look better than the previous ones and less noisy.
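A sketch of the preprocessing and eval-mode fix described above; `resized_pil_image` and `model` are placeholders, and the direct `model(tensor)` call assumes the model can be invoked on a bare tensor (an mmseg EncoderDecoder may expect additional metadata, so adjust to however the model is being called in the notebook).

```python
# Hedged sketch: mean/std normalization from the ADE20K config, plus eval mode.
import numpy as np
import torch

mean = np.array([123.675, 116.28, 103.53], dtype=np.float32)
std = np.array([58.395, 57.12, 57.375], dtype=np.float32)

img = np.asarray(resized_pil_image, dtype=np.float32)   # (512, 512, 3), RGB, 0-255
img = (img - mean) / std
tensor = torch.from_numpy(img.transpose(2, 0, 1)).unsqueeze(0)  # (1, 3, 512, 512)

model.eval()                              # freeze BatchNorm statistics, disable dropout
with torch.no_grad():
    logits = model(tensor)
```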
Hi, @inderpreetsingh01 Yes, they are the expected outputs. |
Hi,
I was looking into the model implementation and found that Fusion_block in the seaformer.py file does not return anything in its forward pass:
SeaFormer/seaformer-cls/seaformer.py, lines 338 to 368 in db38fe7
There also seems to be an error in its implementation:
While debugging the forward pass of the seaformer-small model, I could not find any use of the Fusion block, and the fusion block weights are also not present in the checkpoint. It seems the model implementation only has the shared stem and the context branch.
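For illustration only, the kind of missing return described above could look like the following hypothetical sketch; this is not the repository's actual Fusion_block code, just a stand-in showing why callers would receive None.

```python
# Hedged, hypothetical illustration (NOT the repository's Fusion_block).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionBlockSketch(nn.Module):
    def forward(self, x_local, x_global):
        up = F.interpolate(x_global, size=x_local.shape[2:],
                           mode="bilinear", align_corners=False)
        fused = x_local + up
        # If the method ends without a `return fused` statement, the module
        # silently returns None -- the symptom reported in this issue.
        return fused
```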
Can you please help me with these issues, in case I'm missing something?
Thanks