I tried to convert the 3D CNN model to ONNX, but the inference results from ONNX and PyTorch are different. I tried both the ResNet-18 and MobileNet backbones. Did anybody run into the same problem?
Environment:
CUDA 10.0
cuDNN 7
TensorRT 7
PyTorch 1.2.0
CODE:
import torch

# `generate_model` and `opt` come from this repository's model / option setup
model, parameters = generate_model(opt)
checkpoint = torch.load('my_mobilenet_1.0x_RGB_10_checkpoint.pth')
model.load_state_dict(checkpoint['state_dict'])
print('load checkpoint')

# Unwrap DataParallel so the export traces the bare module
if isinstance(model, torch.nn.DataParallel):
    model = model.module

x = torch.ones((1, 3, 10, 128, 128)).cuda()
y = model(x)
print(y)
torch.onnx.export(model, x, '3dcnn.onnx', verbose=True)
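One common source of such mismatches is exporting a model that is still in training mode (dropout active, BatchNorm using batch statistics). Below is a minimal sketch for checking the exported graph against PyTorch on the same input; it assumes onnxruntime is installed and that `model` and `x` from the snippet above are in scope, and it reuses the '3dcnn.onnx' file name from the export call.

import numpy as np
import onnxruntime as ort

model.eval()  # export and compare in eval mode so BatchNorm/dropout behave deterministically
with torch.no_grad():
    torch.onnx.export(model, x, '3dcnn.onnx', verbose=True)
    y_torch = model(x).cpu().numpy()

# Run the exported graph with ONNX Runtime on the same input
sess = ort.InferenceSession('3dcnn.onnx')
input_name = sess.get_inputs()[0].name
y_onnx = sess.run(None, {input_name: x.cpu().numpy()})[0]

# The two outputs should agree within floating-point tolerance
print('max abs diff:', np.abs(y_torch - y_onnx).max())

If the difference is large only when the model was exported without eval(), the training-mode layers are the likely cause rather than the ONNX export itself.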
@okankop After two years, has exporting to ONNX still not been tried? I have also seen several questions about the low accuracy of the pretrained models; has there been any update?