Darknet Polynomial LR Curve #18
yolov3 uses the 'steps' policy to adjust the learning rate. At the end of the training …
Ohhhhh. I read about the polynomial lr curve in the v2 paper and thought it was carried over to v3. I'll implement the 'steps' policy from the cfg file instead. But something is odd: I thought yolov3 was trained for 160 epochs, but maybe not. It looks like in yolov3.cfg …
Could you point out where the authors specified the epoch number in the yolov3 paper (or somewhere else)? I might have missed that.
Section 3 of the yolov2 paper (aka YOLO "9000", https://pjreddie.com/media/files/papers/YOLO9000.pdf) has many training details. The v3 paper is completely missing details though, which is why everyone is so confused translating it to pytorch. I think I finally found the right loss function to use though; my latest commit can continue training.
Ah, I forgot to mention: in the spirit of this issue, I've implemented the correct yolov3 step lr policy now. This assumes 68 total epochs, with 0.1 lr drops at 80% and 90% completion, just like the cfg.
Lines 106 to 114 in 7416c18
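For reference, a minimal sketch of that step policy in PyTorch (the repo's framework). The model and optimizer here are placeholders, not the repo's actual setup; the 68-epoch total and the 80%/90% drop points come from the comment above:

```python
import torch

# Placeholder model/optimizer; the real repo wires these up differently.
model = torch.nn.Linear(10, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

total_epochs = 68
# 0.1 lr drops at 80% and 90% of training, matching the cfg
milestones = [round(total_epochs * 0.8), round(total_epochs * 0.9)]  # [54, 61]
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=milestones, gamma=0.1
)

for epoch in range(total_epochs):
    # ... one epoch of training here ...
    scheduler.step()
```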
Ahh, they probably did not use the same training config in yolov3. I hope the training converges with the new loss term. Btw, you referenced the training of the classification network, not the detection network. The detection training in yolo2 should be:

> We train the network for 160 epochs with a starting learning rate of 10^-3, dividing it by 10 at 60 and 90 epochs. We use a weight decay of 0.0005 and momentum of 0.9. We use a similar data augmentation to YOLO and SSD with random crops, color shifting, etc. We use the same training strategy on COCO and VOC.
It seems good to schedule the learning rate by the total number of epochs. You probably already know this, but darknet schedules the learning rate by the total number of batches processed during training. I am not sure which is the better practice, although both methods give the same result for the standard .cfg file.
@okanlv yes darknet tracks total batches, with 16 images per batch. I tracked the epochs instead. There's probably not much effect one way or the other. |
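To make the batch/epoch equivalence concrete, here's a rough conversion sketch. The `max_batches` and `steps` values are the standard yolov3.cfg ones, and the dataset size is an assumption (roughly the COCO train set):

```python
# Convert darknet's batch-based schedule into epoch-based milestones.
max_batches = 500200           # from the standard yolov3.cfg
steps = [400000, 450000]       # batches at which darknet scales lr by 0.1
batch_size = 16                # images per batch, as noted above
dataset_size = 117264          # assumed COCO train set size

batches_per_epoch = dataset_size / batch_size                     # ~7329
total_epochs = round(max_batches / batches_per_epoch)             # ~68
epoch_milestones = [round(s / batches_per_epoch) for s in steps]  # ~[55, 61]
print(total_epochs, epoch_milestones)
```

Note how 400000 and 450000 land at roughly 80% and 90% of the 500200 total, which is where the epoch-based schedule above places its drops.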
I found darknet's polynomial learning rate curve here:
https://github.com/pjreddie/darknet/blob/680d3bde1924c8ee2d1c1dea54d3e56a05ca9a26/src/network.c#L111
If I use `power = 4` from parser.c, then I plot the following curve (in MATLAB), assuming `max_batches = 1563360` (160 epochs at batch_size 12, for 9771 batches/epoch). This leaves the final `lr(1563360) = 0`. This means that it is impossible for anyone to begin training a model from the official YOLOv3 weights and expect to resume training at `lr = 0.001` with no problems. The model is clearly going to bounce out of its local minimum back into the huge gradients it first saw at epoch 0.
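For anyone without MATLAB, here is a Python sketch of the same curve. The poly rate in the linked network.c reduces to `lr(batch) = base_lr * (1 - batch / max_batches) ** power`, and the constants below are the ones quoted above:

```python
import numpy as np
import matplotlib.pyplot as plt

base_lr, power, max_batches = 1e-3, 4, 1563360  # values from the comment above

batches = np.linspace(0, max_batches, 1000)
lr = base_lr * (1 - batches / max_batches) ** power

print(lr[0], lr[-1])  # 0.001 at batch 0, exactly 0.0 at the final batch
plt.plot(batches, lr)
plt.xlabel('batch')
plt.ylabel('learning rate')
plt.title("darknet 'poly' lr schedule")
plt.show()
```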