Can the model be converted into ONNX to accelerate inference? #92
Which aspect are you mostly interested in? The HMR2.0 network? The general single-image demo? Or the video tracking code?
Hi @geopavlakos, I am very interested in video tracking with HMR2.0, and I want to achieve real-time camera tracking. Can I use ONNX Runtime for real-time inference, or are there other methods to accelerate it? Thank you.
If you want to use only the HMR2.0 network, that should require ~32ms for a single forward pass on an RTX 3090 (potentially faster on more recent hardware). If the HMR2.0 network is part of a more elaborate pipeline (e.g., supporting video/camera tracking), then there are more components to be considered. I'm not familiar with ONNX, so I cannot tell for sure if that would help.
Hi @geopavlakos, how long does the general single-image demo take? Can this pipeline achieve 30 fps? Thank you.
Hello, @shubham-goel @geopavlakos
Currently, inference is too slow. Is there any way to achieve real-time 30fps?