convert to ONNX #371
Comments
@siriasadeddin I noticed a performance degradation when converting nn.SyncBatchNorm this way, and I'm not sure why. The output values of the PyTorch model and the ONNX model also differ.
Hi! Can you tell me how you tested it, so I can try it as well?
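For reference, a common way to test parity between the two runtimes (not necessarily what the commenter did) is to run the same input through both models and compare the raw outputs numerically. A minimal NumPy helper, assuming both outputs have already been fetched as arrays:

```python
import numpy as np

def compare_outputs(torch_out, onnx_out, rtol=1e-3, atol=1e-5):
    """Report the max absolute difference and whether it is within tolerance."""
    a = np.asarray(torch_out, dtype=np.float32)
    b = np.asarray(onnx_out, dtype=np.float32)
    max_diff = float(np.max(np.abs(a - b)))
    return max_diff, bool(np.allclose(a, b, rtol=rtol, atol=atol))

# Synthetic data standing in for the two model outputs:
ref = np.random.RandomState(0).rand(1, 3, 18, 18, 12).astype(np.float32)
diff, ok = compare_outputs(ref, ref + 1e-6)
```

Reporting the maximum absolute difference, rather than a bare pass/fail, makes it easier to tell numerical noise apart from a real export bug.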
@RohitKeshari Actually, it seems this code no longer works on the newest branch. Here is my solution:
You also need to change the Detect class in yolo.py after you finish the training process:
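For context, the decoding that the Detect layer applies (and which exports commonly modify or strip) looks roughly like this in Scaled-YOLOv4/YOLOv5-style heads. This is an illustrative NumPy sketch, not the repository's code; the `stride`, anchor values, and the `*2` offset convention are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode(raw, grid, anchor_wh, stride):
    """raw: (na, ny, nx, no) raw head output for one scale.
    grid: (ny, nx, 2) cell indices; anchor_wh: (na, 2) anchor sizes in pixels."""
    y = sigmoid(raw)
    # Assumed YOLOv5/Scaled-YOLOv4 style decode:
    # centers offset within the cell, sizes scaled from the anchors
    xy = (y[..., 0:2] * 2.0 - 0.5 + grid) * stride
    wh = (y[..., 2:4] * 2.0) ** 2 * anchor_wh[:, None, None, :]
    return np.concatenate([xy, wh, y[..., 4:]], axis=-1)

ny, nx, na, no = 18, 18, 3, 12           # grid sizes match the trace shapes below
gy, gx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
grid = np.stack([gx, gy], axis=-1).astype(np.float32)        # (ny, nx, 2)
anchor_wh = np.array([[10, 13], [16, 30], [33, 23]], np.float32)  # assumed anchors
out = decode(np.zeros((na, ny, nx, no), np.float32), grid, anchor_wh, stride=32)
```

Keeping this decode outside the exported graph (and re-applying it afterwards) is one way to avoid export-unfriendly ops in the ONNX model.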
Then you can see the same results in both PyTorch and onnxruntime, and you can generate the TensorRT engine and run the inference.
If you only want to use the ONNX model, you can add the postprocessing into the ONNX model as below. However, this generates some ScatterND operations, which cause problems in TensorRT.
Now the ONNX output shape is 1 x output_bboxes x (class_num + 5); just take the output boxes, apply NMS, and you will get the final detection result.
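The NMS step described above can be sketched in plain NumPy. This is a hypothetical helper, assuming the boxes are laid out as [x, y, w, h, obj, cls...] with center-based coordinates (the layout is an assumption, not confirmed by the thread):

```python
import numpy as np

def nms(boxes_xywh, scores, iou_thres=0.45):
    """Greedy NMS; boxes as (N, 4) center-x, center-y, w, h. Returns kept indices."""
    x1 = boxes_xywh[:, 0] - boxes_xywh[:, 2] / 2
    y1 = boxes_xywh[:, 1] - boxes_xywh[:, 3] / 2
    x2 = boxes_xywh[:, 0] + boxes_xywh[:, 2] / 2
    y2 = boxes_xywh[:, 1] + boxes_xywh[:, 3] / 2
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]      # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the current box against all remaining candidates
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thres]   # drop heavily overlapping boxes
    return keep

# Two heavily overlapping boxes and one distinct box:
boxes = np.array([[50, 50, 20, 20], [52, 50, 20, 20], [200, 200, 30, 30]], np.float32)
scores = np.array([0.9, 0.8, 0.7], np.float32)
kept = nms(boxes, scores)  # the lower-scoring overlapping box is suppressed
```

Doing NMS outside the graph like this is exactly what avoids the ScatterND ops mentioned above, since the exported model then only has to emit the raw box tensor.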
Hi, I'm trying to convert the output model of ScaledYOLOv4 to ONNX, but I ran into this error:
...
) # /yolov4/models/yolo.py:38:0
%603 : Float(1, 3, 18, 18, 12, strides=[11664, 3888, 216, 12, 1], requires_grad=1, device=cpu) = onnx::Transpose[perm=[0, 1, 3, 4, 2]] # /yolov4/models/yolo.py:38:0
return (%output, %583, %603)
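The traced outputs above, e.g. shape (1, 3, 18, 18, 12), are the per-scale head tensors. To get the single (1, num_boxes, class_num + 5) array mentioned earlier for NMS, you can reshape and concatenate them. A NumPy sketch (the first shape mirrors the trace; the second scale's grid size is an assumption for illustration):

```python
import numpy as np

# Per-scale head outputs: (batch, anchors, grid_y, grid_x, class_num + 5)
scale_a = np.zeros((1, 3, 18, 18, 12), np.float32)   # shape from the trace above
scale_b = np.zeros((1, 3, 36, 36, 12), np.float32)   # hypothetical finer scale

def flatten_scales(scales):
    """Reshape each scale to (batch, anchors*ny*nx, no) and concatenate."""
    flat = [s.reshape(s.shape[0], -1, s.shape[-1]) for s in scales]
    return np.concatenate(flat, axis=1)

pred = flatten_scales([scale_a, scale_b])
# pred has one row per predicted box: 3*18*18 + 3*36*36 boxes in total
```

Note that the ONNX graph here returns three tensors (%output, %583, %603), so the real model would flatten three scales rather than the two shown in this sketch.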
I used the models/export.py function.