pysot model.pth to onnx conversion error #125
Has anybody ever tried converting a pysot SiamRPN pretrained xxxx.pth model to another format such as xxxx.onnx? Is there any successful experience to share?
We override forward, which requires a dict as input, and the dict requires labels. I think you'll need to write a wrapper class which only takes template and search.
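A minimal sketch of such a wrapper, assuming ModelBuilder exposes backbone, neck, and rpn_head attributes and the cfg.ADJUST.ADJUST flag as in the pysot source; ExportWrapper is a made-up name, and the 255 search size depends on your config:

    import torch
    import torch.nn as nn
    from pysot.core.config import cfg
    from pysot.models.model_builder import ModelBuilder

    class ExportWrapper(nn.Module):
        """Takes plain tensors instead of the dict ModelBuilder.forward expects."""
        def __init__(self, model):
            super(ExportWrapper, self).__init__()
            self.model = model

        def forward(self, template, search):
            zf = self.model.backbone(template)
            xf = self.model.backbone(search)
            if cfg.ADJUST.ADJUST:        # the neck only exists for the ResNet-50 configs
                zf = self.model.neck(zf)
                xf = self.model.neck(xf)
            cls, loc = self.model.rpn_head(zf, xf)
            return cls, loc

    # cfg must already be merged from the yaml config before building the model
    model = ModelBuilder()
    # pysot checkpoints may wrap the weights in a 'state_dict' key; adjust if needed
    model.load_state_dict(torch.load('model.pth', map_location='cpu'))
    model.eval()

    z = torch.randn(1, 3, 127, 127)   # template patch
    x = torch.randn(1, 3, 255, 255)   # search patch; size is an assumption
    torch.onnx.export(ExportWrapper(model), (z, x), 'model.onnx', export_params=True)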
@lb1100 Thanks for the response. Do you mean writing a new ModelBuilder class to replace the one used in training, where the rewritten ModelBuilder only takes 'template' and 'search' as the dict inputs to forward and outputs only the cls and loc prediction tensors, i.e. it is merely for inference? And can the downloaded pysot pretrained xxxx.pth be converted to xxxx.onnx directly with this rewritten ModelBuilder? I'm confused why torch.onnx.export calls this overridden forward but demo.py does not. Would I need any extra work to convert a trained xxxx.pth model to xxxx.onnx for inference purposes?
During inference, we don't directly call forward.
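For context, my understanding of the difference (the tracker method names are taken from pysot's tracker code):

    # What torch.onnx.export effectively does (simplified):
    #     out = model(dummy_input)        # -> ModelBuilder.forward, which expects a dict
    # What demo.py does instead:
    #     tracker.init(frame, bbox)       # internally calls model.template(z_crop)
    #     outputs = tracker.track(frame)  # internally calls model.track(x_crop)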
Did you solve this problem?
@rollben Did you solve this problem?
Sorry, I don't know.
@lb1100 In which case is the overridden forward called directly? I don't quite understand why the onnx conversion process calls the overridden forward, where the input is an image, e.g. a torch tensor of shape (1, 3, 511, 511), set as dummy_input for torch.onnx.export(model, dummy_input, xxxx.onnx). If the overridden forward is called, the error will of course happen, because it requires a dict rather than a tuple and so it cannot find 'template'. How can this be modified to avoid the conflict at inference time?
@lb1100 OK, got it. Appreciated.
@rollben Do you want to use onnx -> trt for inference?
Not limited to TRT; there are multiple choices depending on the hardware you want the model to run on.
@rollben Any suggestions? I only know TensorRT for accelerating models, but I have some questions: I don't know how to translate torch.view() into trt code, or how to translate F.conv2d() into trt code.
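One indirect route, rather than hand-writing trt layers: if the model goes through onnx first, torch.view() is exported as an onnx Reshape node and F.conv2d() as a Conv node, both of which TRT's onnx parser handles for static shapes. A tiny sketch; Tiny and the shapes are made up for illustration:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Tiny(nn.Module):
        def forward(self, x, w):
            y = F.conv2d(x, w)             # exported as an onnx 'Conv' node
            return y.view(y.size(0), -1)   # exported as an onnx 'Reshape' node

    x = torch.randn(1, 256, 15, 15)
    w = torch.randn(256, 256, 3, 3)
    torch.onnx.export(Tiny(), (x, w), 'tiny.onnx')
    # then, e.g.:  trtexec --onnx=tiny.onnx   (builds a TRT engine from the onnx file)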
I trained SiamRPN++ with a resnet50 backbone and the MultiRPN mode, generated checkpoint_e1.pth, and then used checkpoint_e7.pth to generate the onnx model through the ConvertModel class, and got:

    Traceback (most recent call last):
    RuntimeError: Given groups=1, weight of size 256 256 3 3, expected input[1, 512, 15, 15] to have 256 channels, but got 512 channels instead
    Process finished with exit code 1

I am confused why no error happened during training, but forward fails when converting the trained siamrpn_r50_l234_dwxcorr_16gpu checkpoint_e7.pth to onnx format. @lb1100 Do you have any idea about this weird problem?
If you didn't modify my code, it seems that you need to add …
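(The reply above is cut off in this thread export. A guess at what it points to, based on the error message alone: the ResNet-50 stages output 512/1024/2048-channel features and pysot's neck (AdjustAllLayer) reduces them to 256 channels before the RPN head, so a ConvertModel that skips the neck would hit exactly this 512-vs-256 mismatch. A sketch of the step that would be missing, with the attribute and config names taken from the pysot repo:)

    from pysot.core.config import cfg

    def extract_outputs(model, template, search):
        # model is a pysot ModelBuilder; template/search are image tensors
        zf = model.backbone(template)   # r50 stages: 512/1024/2048 channels
        xf = model.backbone(search)
        if cfg.ADJUST.ADJUST:           # True for the siamrpn_r50_l234_dwxcorr configs
            zf = model.neck(zf)         # AdjustAllLayer: reduces each stage to 256 channels
            xf = model.neck(xf)
        return model.rpn_head(zf, xf)   # the RPN head expects 256-channel features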
I think converting to trt will be a little difficult. The correlation layer is not supported. You will have to split the network into several parts if you want to use trt.
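A sketch of such a split; FeatureNet is a hypothetical sub-module, and the idea is that only the plain convolutional part goes through onnx/trt while the correlation and everything after it stay in PyTorch:

    import torch
    import torch.nn as nn
    from pysot.models.model_builder import ModelBuilder

    class FeatureNet(nn.Module):
        """Hypothetical sub-module: the part of the model before the correlation."""
        def __init__(self, model):
            super(FeatureNet, self).__init__()
            self.backbone = model.backbone

        def forward(self, img):
            return self.backbone(img)

    model = ModelBuilder()   # cfg must already be merged from the yaml config
    model.eval()
    feat = FeatureNet(model)
    # The AlexNet config returns a single feature map here; the r50 config
    # returns a list of three, which needs to be exported as multiple outputs.
    torch.onnx.export(feat, torch.randn(1, 3, 255, 255), 'features.onnx')
    # At run time: run the TRT engine built from features.onnx on the template
    # and search crops separately, then do the depthwise correlation and the
    # RPN head in PyTorch on the results.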
@lb1100 I solved it, thanks for your reminder. As for your point that the correlation layer is not yet supported by trt: besides the internal compute structure, I am also curious whether the following warning would be a barrier to converting the onnx model to trt. Do you share that concern, or do you think it doesn't matter and this is just an expected output for the onnx model? What's your opinion? The warning (emitted when converting the pytorch .pth model to onnx) is:

    /pysot/pysot/core/xcorr.py:46: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
@lb1100 OK, thank you.
@lb1100 Why is the alexnet model size not the same as the alexnetlegacy model size? I saw that the model structures are the same, so why do the model sizes differ?
My experience: you should script the xcorr part and use the scripted model before tracing, to avoid the warning.
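A sketch of that approach: the function body below is pysot's xcorr_depthwise from pysot/core/xcorr.py, with the torch.jit.script decorator as the addition.

    import torch
    import torch.nn.functional as F

    @torch.jit.script
    def xcorr_depthwise(x, kernel):
        # Scripted rather than traced, so kernel.size(0) stays a symbolic value
        # instead of being frozen into a constant -- which is exactly what the
        # TracerWarning complains about.
        batch = kernel.size(0)
        channel = kernel.size(1)
        x = x.view(1, batch * channel, x.size(2), x.size(3))
        kernel = kernel.view(batch * channel, 1, kernel.size(2), kernel.size(3))
        out = F.conv2d(x, kernel, groups=batch * channel)
        out = out.view(batch, channel, out.size(2), out.size(3))
        return out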
Hi, did you transform the pysot model.pth to onnx successfully? I have met the same problem as you; could you please give me some help? Email: [email protected]
My code:

    from collections import OrderedDict
    import argparse
    import torch
    import torch.nn as nn
    # (assumed imports for the names used below)
    from pysot.core.config import cfg
    from pysot.models.model_builder import ModelBuilder

    parser = argparse.ArgumentParser(description='trans demo')
    # ... argument definitions and args = parser.parse_args() omitted in the comment
    cfg.merge_from_file(args.config)

    class ConvertModel(nn.Module):
        ...  # class body omitted in the comment

    # Load the pretrained model
    model0 = ModelBuilder()
    model0.eval()
    x = torch.randn(1, 3, 127, 127)
    # NOTE: 'model' and 'z' are never defined in the posted fragment --
    # presumably model = ConvertModel(model0) and z is the search-region tensor
    torch_out = torch.onnx._export(model, (x, z), "model.onnx", export_params=True)
Hello, here is the code. Can anyone help me with that?
I have been following this thread; has anyone been able to successfully convert the model to tensorrt or onnx? @rollben Did you get it to work? I'm especially interested in whether you were able to convert to trt. Thanks.
Change this: add opset_version=11, it may work.
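i.e. something like this (a sketch; note the output filename should also end in .onnx, not .pth as in the snippet quoted below):

    torch_out = torch.onnx.export(net, dummy_input, "siamrpn_alex_dwxcorr.onnx",
                                  export_params=True, opset_version=11)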
    pysot/pysot/models/neck/neck.py:25: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    torch_out = torch.onnx.export(net, dummy_input, "siamrpn_alex_dwxcorr.pth", export_params=True)

When exporting siamrpn_alex_dwxcorr.pth to onnx, the following error occurred.
Error position: template = data['template'].cuda()

    builtins.IndexError: too many indices for tensor of dimension 4

Why did this happen, and how can it be solved?
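(My reading of why this happens, pieced together from the replies above, not an official answer:)

    # torch.onnx.export(net, dummy_input, ...) traces the model by running:
    #     net.forward(dummy_input)
    # and ModelBuilder.forward then executes:
    #     template = data['template'].cuda()
    # where data is the plain (1, 3, H, W) dummy tensor itself -- indexing a
    # tensor with a string fails with "too many indices for tensor of dimension 4".
    # The fix is a wrapper whose forward takes (template, search) tensors
    # directly, as suggested at the top of this thread.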