What parameters should be set for enhancing training with existing model? #10037
👋 Hello! Thanks for asking about improving YOLOv5 🚀 training results.

Training and deployment should be done at a similar inference size, i.e. train at 640, detect at 640; or train at 1920, detect at 1920. Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you can take to improve, but we always recommend users first train with all default settings before considering any changes. This establishes a performance baseline and helps spot areas for improvement.

If you have questions about your training results, we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results and dataset statistics images such as labels.png. All of these are located in your training results directory (e.g. runs/train/exp).

We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.

Dataset

Model Selection

Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m; for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.
Start from pretrained weights (recommended for small to medium-sized datasets):

```shell
python train.py --data custom.yaml --weights yolov5s.pt
                                             yolov5m.pt
                                             yolov5l.pt
                                             yolov5x.pt
                                             custom_pretrained.pt
```

Start from scratch (recommended for large datasets):

```shell
python train.py --data custom.yaml --weights '' --cfg yolov5s.yaml
                                                      yolov5m.yaml
                                                      yolov5l.yaml
                                                      yolov5x.yaml
```

Training Settings

Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.
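The train.py argparser referenced above defines every training flag and its default value. A minimal sketch of what that looks like (a hypothetical subset with YOLOv5-like defaults, not the real argparser, which defines many more options under slightly different flag names):

```python
import argparse

# Hedged sketch: a small subset of train.py-style flags with
# YOLOv5-like defaults; the real argparser has many more options.
parser = argparse.ArgumentParser()
parser.add_argument("--img", type=int, default=640, help="train/val image size (pixels)")
parser.add_argument("--batch", type=int, default=16, help="total batch size")
parser.add_argument("--epochs", type=int, default=300, help="number of training epochs")
parser.add_argument("--freeze", type=int, default=0, help="number of layers to freeze")

# Any flag left off the command line keeps its default.
opt = parser.parse_args(["--img", "1280", "--freeze", "10"])
print(opt.img, opt.batch, opt.epochs, opt.freeze)
```

Running train.py with no flags at all therefore trains with every default, which is exactly the recommended baseline run.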
Further Reading

If you'd like to know more, a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains: http://karpathy.github.io/2019/04/25/recipe/

Good luck 🍀 and let us know if you have any other questions!
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
Search before asking
Question
We are trying to enhance the distributed yolov5s.pt model (we are using the tagged v6.0 code, #5141, https://github.com/ultralytics/yolov5/releases/tag/v6.0) with additional training on some focused images. We have performed inference using the distributed yolov5s.pt and got quite a number of object detections on a set of images within our domain.
We are now running train.py with an additional ~220 images containing labeled objects for a subset of the COCO 80 classes. We executed train.py as follows:
```shell
python train.py --img 640 --batch 64 --data dataset.yaml --weights "weights/yolov5s.pt" --epochs 300 --device 0 --freeze 10
```
Note that the frames have resolutions greater than 640 (1520 and 1920). We are using the distributed hyp.scratch.yaml parameters. We tried "--freeze 10" to freeze the backbone, which we understand freezes the feature-extraction layers.

Results:
The resulting model, when used for inference, produces considerably fewer object detections than the original yolov5s model.
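For reference, YOLOv5's --freeze N option works by name matching: train.py builds the prefixes model.0. through model.{N-1}. and disables gradients for any parameter whose name contains one of them. A simplified sketch of that matching logic (the parameter names below are illustrative examples, not the real model's):

```python
# Simplified sketch of YOLOv5's --freeze N name-matching logic.
# Parameter names here are illustrative, not the actual checkpoint's.
def frozen_params(param_names, n_freeze):
    prefixes = [f"model.{x}." for x in range(n_freeze)]
    return [name for name in param_names if any(p in name for p in prefixes)]

names = [
    "model.0.conv.weight",      # backbone layer 0 -> frozen with --freeze 10
    "model.9.cv1.conv.weight",  # backbone layer 9 -> frozen with --freeze 10
    "model.23.m.0.weight",      # head layer 23   -> still trained
]
print(frozen_params(names, 10))
```

So with --freeze 10 only the head layers continue to learn; if the head over-adapts to ~220 images, detections on the broader COCO classes can drop.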
Questions:
• Is there something fundamental that we are doing wrong in the configuration of train.py?
• Does "--img 640" need to stay at 640 because inference with yolov5s is done at 640? Or can we increase it to our minimum resolution?
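On the --img question: YOLOv5 letterbox-resizes each image so its longer side matches --img, so 1520- and 1920-pixel frames are downscaled at train time when --img is 640. A rough sketch of that scaling arithmetic (simplified; the real letterbox also pads each side to a stride multiple):

```python
def scaled_size(w, h, img_size=640):
    """Rough sketch: scale so the longer side equals img_size,
    keeping aspect ratio (stride padding omitted)."""
    r = img_size / max(w, h)
    return round(w * r), round(h * r)

print(scaled_size(1920, 1080, 640))   # a 1920x1080 frame shrinks to 640x360
print(scaled_size(1920, 1080, 1280))  # training at --img 1280 keeps more detail
```

This is why the guide recommends training and detecting at a similar size: the model sees objects at roughly the same scale in both phases.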
Additional
No response