The autoanchor result is not the same as the anchors in the model? #6966
Comments
It seems only the P4/16 anchors are the same. |
Hi @PonyPC Could you try the following method to compute the anchors?
num_anchors = checkpoint_yolov5.model[-1].anchors.shape[1]
anchor_grids = (
    (checkpoint_yolov5.model[-1].anchors * checkpoint_yolov5.model[-1].stride.view(-1, 1, 1))
    .reshape(1, -1, 2 * num_anchors)
    .tolist()[0]
) |
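For context, here is a minimal, self-contained sketch of how that snippet could be applied to a training checkpoint. The weights path is a placeholder, and it assumes the script is run from the yolov5 repo root so torch.load can unpickle the model classes:
# Minimal sketch; 'runs/train/exp/weights/best.pt' is a placeholder path and the
# checkpoint layout (a dict with a 'model' entry) is assumed from YOLOv5 training output.
import torch

ckpt = torch.load('runs/train/exp/weights/best.pt', map_location='cpu')
checkpoint_yolov5 = ckpt['model'].float()  # convert from FP16 for readable values

detect = checkpoint_yolov5.model[-1]  # Detect() layer
num_anchors = detect.anchors.shape[1]  # anchors per output layer (usually 3)
anchor_grids = (
    (detect.anchors * detect.stride.view(-1, 1, 1))  # rescale to pixel units
    .reshape(1, -1, 2 * num_anchors)
    .tolist()[0]
)
for name, grid in zip(('P3/8', 'P4/16', 'P5/32'), anchor_grids):  # layer names assume a 3-output model
    print(name, [round(v, 2) for v in grid])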
Hi @zhiqwang, I got the same result as above. |
You show more than one result above, so I can't determine which one it is. Could you post a more detailed result? |
@zhiqwang This is what your code output |
Hi @PonyPC Got it. The result in "anchors.yaml" should be calculated from the COCO dataset, so I guess the result here is normal. Training will automatically determine and use the anchors computed by the AutoAnchor mechanism:
- [18.296875,17.03125, 12.8828125,33.625, 26.46875,20.296875]  # P3/8
- [44.90625,90.375, 85.8125,51.71875, 58.375,81.875]  # P4/16
- [100.625,388.5, 381.25,132.25, 139.75,379.5]  # P5/32 |
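To make the distinction concrete, here is a simplified, hypothetical sketch of the idea behind AutoAnchor: cluster the label width/height pairs of your own dataset into n anchors. It is not the actual YOLOv5 implementation (utils/autoanchor.py also filters tiny boxes and refines the clusters with a genetic algorithm), and the random data below only stands in for real labels:
import numpy as np
from scipy.cluster.vq import kmeans

def naive_anchors(wh, n=9, img_size=640):
    # wh: (N, 2) array of box widths/heights normalized to [0, 1]
    wh = wh * img_size  # scale to the training resolution
    centroids, _ = kmeans(wh.astype(float), n)  # k-means on width/height pairs
    return centroids[np.argsort(centroids.prod(axis=1))]  # sort small -> large

rng = np.random.default_rng(0)
fake_wh = rng.uniform(0.02, 0.9, size=(500, 2))  # hypothetical labels, not a real dataset
print(naive_anchors(fake_wh))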
@zhiqwang |
I assume it should be |
@PonyPC YOLOv5 🚀 anchors are saved as Detect() layer attributes on model creation, and updated as necessary by AutoAnchor before training starts. Their exact location is here: Line 45 in f17c86b
You can examine the anchors of any trained YOLOv5 model like this:
Input
import torch
# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False) # official model
model = torch.hub.load('ultralytics/yolov5', 'custom', 'path/to/best.pt', autoshape=False) # custom model
# Anchors
m = model.model.model[-1] # Detect() layer
print(m.anchors * m.stride.view(-1, 1, 1)) # print anchors
Output
YOLOv5 🚀 2021-11-22 torch 1.10.0+cu111 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB)
# x y
tensor([[[ 10., 13.],
[ 16., 30.],
[ 33., 23.]], # P3/8-small
[[ 30., 61.],
[ 62., 45.],
[ 59., 119.]], # P4/16-medium
[[116., 90.],
[156., 198.],
[373., 326.]]], dtype=torch.float16) # P5/32-large
Good luck 🍀 and let us know if you have any other questions! |
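Building on that reply, here is a small sketch of one way to compare the anchors stored in a checkpoint against a set of expected values in code. The custom-model path is the same placeholder used above, and the expected tensor simply reuses (rounded) the AutoAnchor values quoted earlier in this thread:
import torch

# Load a custom model via torch.hub (placeholder path; the hub call needs network access)
model = torch.hub.load('ultralytics/yolov5', 'custom', 'path/to/best.pt', autoshape=False)
m = model.model.model[-1]  # Detect() layer
saved = (m.anchors * m.stride.view(-1, 1, 1)).float()  # pixel-space anchors, shape (3, 3, 2)

expected = torch.tensor([  # e.g. the values printed by AutoAnchor in the training log
    [[18.30, 17.03], [12.88, 33.63], [26.47, 20.30]],        # P3/8
    [[44.91, 90.38], [85.81, 51.72], [58.38, 81.88]],        # P4/16
    [[100.63, 388.50], [381.25, 132.25], [139.75, 379.50]],  # P5/32
])
print(torch.allclose(saved, expected, atol=0.5))  # True if they agree within 0.5 px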
I know what you mean, but they are different in my case, as you circled in the last image. |
@PonyPC 👋 hi, thanks for letting us know about this possible problem with YOLOv5 🚀. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.
How to create a Minimal, Reproducible Example
When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be minimal, complete, and reproducible.
For Ultralytics to provide assistance, your code should also be current (up to date with the GitHub master) and unmodified from the official repository.
If you believe your problem meets all the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template with a minimum reproducible example to help us better understand and diagnose your problem. Thank you! 😃 |
Search before asking
Question
v6.1
The autoanchor result is not the same as the anchors in the model.
I trained my data with autoanchor and got this:
After training, I printed the anchors from the model but got:
Only P4 is the same.
Additional
I got the log from models\hub\anchors.yaml, but P3/P4/P5/P6 are the same.
I also printed the official release model yolov5n.pt; both the anchors in the model and the training autoanchor printout are the same. Is this normal?