Scaled-YOLOv4: Scaling Cross Stage Partial Network #7087
Comments
@AlexeyAB Great job sir. When I have time, I will read the paper. Can we train a custom model with Scaled-YOLOv4?
Yes, sure, you can. Just use the latest version of Darknet.
Thanks :) I will try it
@AlexeyAB Very interesting! Great progress. Just to confirm:
@AlexeyAB congrats! Same question as @marvision-ai: did the yolov4-tiny model weights change, or are they the same?
Impressive work, congrats Aleksey @AlexeyAB, to you and the authors.
@YashasSamaga Hello, does OpenCV support the new networks?
I have added an explanation of the necessary fixes, so we are waiting for the fix:
Yes. I uploaded new weights files yolov4-tiny.weights and yolov4-tiny.conv.29.
I assume I can use this model with the tiny-3l-spp cfg? SPP is still useful for this one, correct?
@AlexeyAB could we have weights and an updated cfg for yolov4-tiny-3l.cfg as well? The paper reports 28.7% AP (+6% over tiny) with the same FPS on TX2.
You can use this weight to train your custom tiny-3l-spp: https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.conv.29
@AlexeyAB Thank you.
Which cfg file do I have to use for Scaled-YOLOv4?
Hello, AlexeyAB. Line 278 in c5b8bc7
Shouldn't it be mask=0,1,2?
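For context: in a Darknet cfg, the mask values of a [yolo] layer are 0-based indexes into the shared anchors list, so they decide which anchor pairs that head predicts. A minimal sketch of a yolov4-tiny head, using the stock tiny anchors purely for illustration:

[yolo]
mask = 1,2,3
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=80
num=6

Changing mask = 1,2,3 to mask = 0,1,2 would make this head predict with the three smallest anchor pairs (10x14, 23x27, 37x58) instead.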
First of all, thank you for your great work. For yolov4-csp and yolov4x-mish I used the cfg and weights files you gave in the first post. For the original v4 weights and original v4 config, I used the ones you provided about two months ago. Original V4 512. Two things jumped out at me:
A last thing, in which I'm less interested, is that the original v4 detects the whole crowd as a person but mish and csp did not (which is preferable from my point of view).
@Hyvenos Just try to use a lower confidence threshold for the csp and x-mish models,
Hi, I tried it, without improvement unfortunately. Here are the results. Even when lowering the threshold, csp and mish struggle to detect the tiniest objects, and when they do, they output the wrong category.
@Hyvenos
So:
If you want to detect small objects using the new csp/x-mish models, then it's better to fine-tune
Also, if you want to detect objects in crowds, then train with
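As a generic, hedged aside (not the specific settings referred to above): one cfg knob that often helps with small objects is the network input resolution in the [net] section; the values below are purely illustrative:

[net]
# higher input resolution generally improves detection of tiny objects,
# at the cost of speed and GPU memory; width/height must stay multiples of 32
width=640
height=640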
@AlexeyAB Can you elaborate a bit more on why it is better to set
@AlexeyAB This is the kind of thing that should be documented in the wiki instead of an issue. That way, people looking up details on params can find them much more easily than by browsing through all the issues.
@AlexeyAB Thanks for the clarification. To train for both small and large objects, is it still recommended to use the aforementioned modifications with yolov4-csp?
yolov4-tiny doesn't use iou_thresh. yolov4-tiny has too little capacity to train 3 anchors instead of 1 anchor per object.
@AlexeyAB I am trying to test the scaled model on a video with the command below. But I am getting the following message and the demo screen hangs. The same command with the YOLOv4 cfg and weights files works without any issue. Is there any change required for the scaled version of the cfg and weights files?
@manoj-8246 This is not related to YOLO. It is related to OpenCV and your video file; better to ask about it there: https://answers.opencv.org/questions/
Thanks again @AlexeyAB for your great work. If I understand the graph correctly, there are no yolov4-tiny versions in the graph. I would also like to see the "previous" yolov4-tiny models and their variants compared to the scaled-yolov4 variants.
@RasmusToivanen There is the same yolov4-tiny.cfg model in the Darknet and Scaled versions https://github.com/WongKinYiu/ScaledYOLOv4/tree/yolov4-tiny
There is a comparison of AP50 for: YOLOv4-CSP vs YOLOv4 vs YOLOv4-tiny
Hi @AlexeyAB, for improving the detection of small objects with yolov4, you mentioned 3 changes to the normal yolov4 cfg file in the readme section, i.e.: Line 895 in 6f718c2
set stride=4 instead of Line 892 in 6f718c2
set stride=4 instead of Line 989 in 6f718c2
I tried it in the past for yolov4.cfg and it gave somewhat better results, but when I made the same changes in yolov4-csp.cfg, it gives an error while loading the network at training time. It seems I made a mistake: an input shape mismatch at some layer. Can you please suggest exactly what changes to make if we want to apply the same modifications to Scaled-YOLOv4 (CSP)? Thanks in advance.
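A sketch of what those three readme edits amount to in the stock yolov4.cfg (the layer types and old values shown here are an assumption and may differ between cfg revisions):

# edit around line 895: route to an earlier, higher-resolution layer
[route]
layers = 23          # instead of layers = 54

# edits around lines 892 and 989: upsample more aggressively
[upsample]
stride=4             # instead of stride=2

These changes enlarge the feature maps feeding the small-object head, which is also why copying them verbatim into yolov4-csp.cfg (whose layer indexes differ) can easily produce the shape-mismatch error described above.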
@AlexeyAB: Thank you for adding support for Scaled-YOLOv4 in this repo. Great job! I've read the Scaled-YOLOv4 paper. I'm interested in trying YOLOv4-P5, YOLOv4-P6 and YOLOv4-P7 on my custom dataset. Reading your notes above, there are models such as YOLOv4-CSP, YOLOv4x-MISH, and YOLOv4-CSPx-P7 (1536x1536). I have a couple of questions that I hope you can help with:
I look forward to your reply. Thank you in advance for any information you may provide.
@AlexeyAB: hi,
So, regarding the readme changes you mentioned for detecting small objects, you haven't removed 'iou_loss=ciou'. I had one question: for detecting small + big objects on custom data, are there any other parameters we need to consider? Thanks in advance. @WongKinYiu @AlexeyAB
A bug has been fixed in Scaled-YOLOv4-CSP (WongKinYiu/ScaledYOLOv4#89), and performance is improved (47.8 AP -> 48.7 AP).
I hope you don't mind me asking, but what would your recommended modifications be to v4-tiny-3l, which we've already modified to fit our custom data? We are streaming video and need the low latency / high FPS, hence the need for tiny, but we would like to improve:
Our AP and AP50 values are very high, and consistency and accuracy are through the roof outside of these cases (long distance and multiple objects). Unfortunately, most of what you've advised for small objects / multiple objects and some of the comments you've made in this thread don't apply to tiny, especially tiny-3l.
I added new models: #7414
This allows changing the behavior of the network while training. Setting it to 1 (disable) will have the standard YOLO behavior: match only the best anchor with a truth. However, setting it to a lower value will make any anchor with an IoU > iou_anchor_threshold with a ground truth be updated as well. See this thread for more details: AlexeyAB/darknet#7087
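In the Darknet cfg itself this knob is the iou_thresh key of the [yolo] sections. A minimal sketch with the value used in the stock yolov4.cfg (the anchors shown are the standard yolov4 ones, for illustration only):

[yolo]
mask = 0,1,2
anchors = 12,16, 19,36, 40,28, 36,75, 76,55, 72,146, 142,110, 192,243, 459,401
classes=80
num=9
iou_thresh=0.213     # anchors whose IoU with a ground truth exceeds 0.213 are also updated
# iou_thresh=1.0 restores the classic behavior: only the single best anchor is matched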
Hi everyone, looking at the cfg files I can see some differences between yolov4-csp and yolov4-mish, but I can't really picture the changes. What is the main idea behind mish compared to csp?
main idea:
Thanks, but I'm asking about yolov4-csp vs yolov4-mish, not yolov4 vs yolov4-csp.
YOLOv4 = CSPDarknet(Mish) + PAN(LReLU)
Thank you!
Hello, where is the paper?
Hello, did you find the solution to your question? I am in the same situation, but using pytorch.
Dear AlexeyAB, are the pre-trained models for the Scaled-YOLOv4 models, such as yolov4-p5 and yolov4-tiny, trained on ImageNet 2012? In your Scaled-YOLOv4 paper you said that you did not use ImageNet pre-trained models, so I want to know what dataset you used to pre-train yolov4-tiny. Hope to hear from you!
Scaled-YOLOv4: Scaling Cross Stage Partial Network - The best neural network for object detection (Top1 accuracy on MS COCO dataset)
Scaled YOLO v4 is the most accurate neural network (55.8% AP on Microsoft COCO) among all published neural networks. In addition, it is the best in terms of the ratio of speed to accuracy across the entire range of accuracy and speed, from 15 FPS to 1774 FPS. We show that the YOLO and Cross-Stage-Partial (CSP) Network approaches are the best in terms of both absolute accuracy and accuracy-to-speed ratio.
Video: https://youtu.be/YDFf-TqJOFE
Paper (CVPR 2021): https://openaccess.thecvf.com/content/CVPR2021/html/Wang_Scaled-YOLOv4_Scaling_Cross_Stage_Partial_Network_CVPR_2021_paper.html
Medium: https://alexeyab84.medium.com/scaled-yolo-v4-is-the-best-neural-network-for-object-detection-on-ms-coco-dataset-39dfa22fa982?source=friends_link&sk=c8553bfed861b1a7932f739d26f487c8
Pytorch: YOLOv4-CSP, YOLOv4-P5, YOLOv4-P6, YOLOv4-P7 (use to reproduce results): https://github.com/WongKinYiu/ScaledYOLOv4
Darknet: YOLOv4-tiny, YOLOv4-CSP, YOLOv4x-MISH: https://github.com/AlexeyAB/darknet
Models:
For Training (yolov4-csp.cfg, yolov4x-mish.cfg, yolov4-p5.cfg, yolov4-p6.cfg) - change these lines before each of the 3 [yolo]-layers for p5 (or 4 for p6):

darknet/cfg/yolov4-p5.cfg Lines 1810 to 1811 in 9a86fce

filters=<(5 + num_classes) x 4>
activation=logistic - for training and detection by using Darknet: https://github.com/AlexeyAB/darknet
activation=linear - for training and detection by using Pytorch Scaled-YOLOv4 (CSP-branch): https://github.com/WongKinYiu/ScaledYOLOv4/tree/yolov4-csp

For training use pre-trained weights:
yolov4-p5.cfg: https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-p5.conv.232
yolov4-p6.cfg: https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-p6.conv.289

Currently Pytorch is more suitable for training on multiple GPUs.
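As a concrete, hypothetical illustration of that edit for a custom dataset with 3 classes, the [convolutional] layer just before each [yolo] head (and the head itself) would look roughly like this when training with Darknet:

[convolutional]
size=1
stride=1
pad=1
filters=32            # (5 + 3 classes) x 4 anchors per head = 32
activation=logistic   # use activation=linear instead for the Pytorch Scaled-YOLOv4 CSP-branch

[yolo]
classes=3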