Hubconf won't process different height and width image size using exported models (detect.py works, pt models work) #9039
👋 Hello @dmccorm2, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution. If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you. If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email [email protected].

Requirements: Python>=3.7.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments: YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

Status: If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@dmccorm2 CoreML models are only capable of inference at a fixed size; PyTorch models are capable of dynamic input sizes. We don't have CI in place for CoreML export and PyTorch Hub inference at various sizes, so this feature may not exist. The fastest and easiest way to incorporate your ideas into the official codebase is to submit a Pull Request (PR) implementing your idea, and if applicable providing before and after profiling/inference/training results to help us understand the improvement your feature provides. This allows us to directly see the changes in the code and to understand how they affect workflows and performance. Please see our ✅ Contributing Guide to get started.
@dmccorm2 the preprocess step in the AutoShape forward method would be the place to debug and modify for your use case (models/common.py, Lines 609 to 632 in f258cf8).
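Since AutoShape historically treated `size` as a single int, one way to support a two-dimensional size (in the spirit of the later `size=(h,w)` PR) is to normalize the argument at the top of that preprocess step. A minimal sketch; the helper name `normalize_size` is mine, not YOLOv5's:

```python
def normalize_size(size):
    """Accept size as an int (square) or an (h, w) pair.

    Mirrors the idea behind two-dimensional AutoShape support:
    an int n becomes (n, n); a 2-element sequence passes through as (h, w).
    """
    if isinstance(size, int):
        return (size, size)
    h, w = size  # raises if size is not a 2-element sequence
    return (int(h), int(w))


print(normalize_size(640))          # (640, 640)
print(normalize_size([768, 1280]))  # (768, 1280)
```

With a normalization like this, the rest of the preprocess code can assume a `(h, w)` pair regardless of how the caller passed `size`.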
May resolve #9039

Signed-off-by: Glenn Jocher <[email protected]>
@glenn-jocher This worked great for me. I am able to leverage AutoShape with a specified (h, w) tuple of different dimensions, and it works on CoreML export (as long as it matches the same model h, w, obviously).
* Two dimensional `size=(h,w)` AutoShape support

  May resolve #9039

  Signed-off-by: Glenn Jocher <[email protected]>

* Update hubconf.py

  Signed-off-by: Glenn Jocher <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci

  Signed-off-by: Glenn Jocher <[email protected]>

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
@dmccorm2 great, PR is merged. Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐
Search before asking
YOLOv5 Component
Detection, PyTorch Hub
Bug
When exporting a model with a different height and width for imgsz, for example 2560 x 1440 (the 1440 becomes 1472):
python export.py --weights yolov5m6.pt --include coreml --device=cpu --imgsz 1440 2560
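The jump from 1440 to 1472 comes from export rounding each dimension up to a multiple of the model's maximum stride (64 for a P6 model such as yolov5m6). YOLOv5 does this in its `check_img_size`/`make_divisible` utilities; a minimal standalone sketch of the rounding:

```python
import math


def make_divisible(x, divisor):
    # Round x up to the nearest multiple of divisor (the model's max stride).
    return math.ceil(x / divisor) * divisor


stride = 64  # max stride of a P6 model such as yolov5m6
print(make_divisible(1440, stride))  # 1472
print(make_divisible(2560, stride))  # 2560 (already a multiple of 64)
```

This is also why the reproducible example below uses 768 with detect.py after exporting at 720: 720 rounds up to 768 at stride 64.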
I can run detect.py without issue:
python3 detect.py --imgsz 1472 2560 --weights=./yolov5m6.mlmodel --source=./myvideo.mp4 --classes 0
However, if I load the model from PyTorch Hub:
Using v6.2
model = torch.hub.load('ultralytics/yolov5:v6.2', 'custom', '~/yolov5/yolov5m6.mlmodel', force_reload=False)
Then load the video using OpenCV:
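For reference, OpenCV returns frames as BGR uint8 arrays (via `cv2.VideoCapture(...).read()`), while the AutoShape wrapper expects RGB for numpy inputs, so the usual pattern flips the channel axis with a reversed slice. A sketch using a synthetic frame in place of an actual video read:

```python
import numpy as np

# Stand-in for a frame from cv2.VideoCapture('myvideo.mp4').read():
# OpenCV frames are BGR uint8 arrays; the hub model expects RGB numpy input.
frame_bgr = np.zeros((4, 4, 3), dtype=np.uint8)
frame_bgr[..., 0] = 255  # pure blue in BGR order

frame_rgb = frame_bgr[..., ::-1]  # reverse the channel axis: BGR -> RGB
print(frame_rgb[0, 0].tolist())   # [0, 0, 255]: blue is now the last channel
```

The resulting `frame_rgb` array is what would be passed to `model(...)` in the calls below.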
I get the following error, as it defaults to the 640 size:
However, if I use the provided PT model:
Works fine.
If I try specifying size as an array, as you can in detect.py with the --imgsz flag, I get this trace:
If I specify size as an integer (either 1472 or 2560 in this example), it tries to run the model with a scaled size x size image, which fails the exported model check.

I'm expecting to be able to omit size, or provide an array, to run a source with a different height and width on the exported model via PyTorch Hub, just as detect.py does without issue.
Environment
Minimal Reproducible Example
python export.py --weights yolov5m6.pt --include coreml --device=cpu --imgsz 720 1280
Verify detect.py works (768 matches the max-stride rounding calculated from the export):
python3 detect.py --imgsz 768 1280 --weights=./yolov5m6.mlmodel --source=./data/images/zidane.jpg
image 1/1 ~/yolov5/data/images/zidane.jpg: 768x1280 2 persons, 1 tie, 73.3ms
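The 768x1280 inference shape above comes from letterbox-style preprocessing: the frame is scaled by the smaller of the two ratios so it fits inside the target, and the remainder is filled with padding. A rough sketch of that scale computation, simplified from YOLOv5's letterbox logic:

```python
def letterbox_scaled_shape(h, w, new_h, new_w):
    # Scale by the limiting ratio so the image fits inside (new_h, new_w);
    # the leftover area is filled with padding, not distortion.
    r = min(new_h / h, new_w / w)
    return round(h * r), round(w * r)


# zidane.jpg is 720x1280; with a 768x1280 target the width is the limit,
# so the image stays 720x1280 and is padded up to 768 rows.
print(letterbox_scaled_shape(720, 1280, 768, 1280))  # (720, 1280)
```

The same logic applied to a 1440x2560 source with a 1472x2560 target keeps the image at 1440x2560 and pads the height, which is the behavior the hub path should match.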
In a Python shell:
Expect to see
results = model(im, size=[768, 1280])
Error:
results = model(im, size=768)
Remove size and revert to the provided model:
model = torch.hub.load('ultralytics/yolov5:v6.2', 'yolov5m6')
results = model(im)
now works with results.print().

Additional
No response
Are you willing to submit a PR?