Optimizer Choice: SGD vs Adam #4
See #2 (comment) for possible SGD warm-up requirements.
@glenn-jocher In the official darknet code, the burn_in config is defined here. If the current batch_num < burn_in, the learning rate is scaled based on the value of burn_in.
@xyutao thanks for the links. This looks like a fairly easy change to implement. I can go ahead and submit a commit with this. Have you tried this out successfully on your side?
@xyutao From your darknet link I think the correct burn-in formula is the following, which slowly ramps the LR up to 1e-3 over the first 1000 iterations and then leaves it there:

```python
# SGD burn-in
if (epoch == 0) & (i <= 1000):
    power = 4  # placeholder; the correct value was unknown at the time (the darknet default of 4 is confirmed below)
    lr = 1e-3 * (i / 1000) ** power
    for g in optimizer.param_groups:
        g['lr'] = lr
```

I can't find the correct value of power though. I see that the divergence is in the width and height losses; the other terms appear fine. I think one problem may be that the width and height terms are bounded at zero at the bottom but unbounded at the top, so it's possible that the network is predicting impossibly large widths and heights, causing those losses to diverge. I may need to bound them or redefine the width and height terms and try again. I used a variant of the width and height terms for a different project that had no divergence problems with SGD.
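To make the bounding idea above concrete, here is a minimal sketch of one way to cap the width/height decode so the exponential cannot blow up early in training. The function name, the max_log cap, and the anchor arguments are illustrative assumptions, not the repo's actual code.

```python
import torch

def decode_wh(tw, th, anchor_w, anchor_h, max_log=4.0):
    # Clamp the raw network outputs before the exponential so predicted
    # widths/heights stay bounded (exp(4) is roughly 55x the anchor size).
    tw = torch.clamp(tw, max=max_log)
    th = torch.clamp(th, max=max_log)
    w = anchor_w * torch.exp(tw)
    h = anchor_h * torch.exp(th)
    return w, h
```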
@glenn-jocher The default value of power is 4 (see the darknet source).
Closing this as SGD burn-in has been successfully implemented.
Although I know this is closed, we exclusively use Adam for training with our fork of this repo. It instantly took us from a precision of 20% to 85% on our dataset (with slight mAP increases as well).
@kieranstrobel that's interesting. Have you trained COCO as well with Adam? We tried Adam as well as AdaBound recently, but observed performance drops on both on COCO. What LR did you use for Adam vs SGD?
@kieranstrobel I ran a quick comparison using our small COCO dataset:

```python
# Optimizer
optimizer = optim.Adam(model.parameters(), lr=hyp['lr0'], weight_decay=hyp['weight_decay'])
# optimizer = AdaBound(model.parameters(), lr=hyp['lr0'], final_lr=0.1)
optimizer = optim.SGD(model.parameters(), lr=hyp['lr0'], momentum=hyp['momentum'], weight_decay=hyp['weight_decay'], nesterov=True)
```

The training command was:
BTW, the burn-in period (the original issue topic) has been removed because the wh-divergence issue is now resolved: GIoU loss has replaced the four individual regression losses (x, y, w, h). The example scenario above should actually favor Adam, since Adam is known for reducing training losses more than validation losses (and then failing to generalize well), and this dataset trains and validates on the same images, yet SGD still clearly outperforms it. Can you plot a comparison using your custom dataset?
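For reference, a generic sketch of a GIoU loss over (x1, y1, x2, y2) boxes is shown below. It illustrates the single regression term mentioned above; it is not a copy of this repo's implementation.

```python
import torch

def giou_loss(pred, target, eps=1e-9):
    # Intersection of predicted and target boxes
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / (union + eps)

    # Smallest enclosing box
    cx1 = torch.min(pred[:, 0], target[:, 0])
    cy1 = torch.min(pred[:, 1], target[:, 1])
    cx2 = torch.max(pred[:, 2], target[:, 2])
    cy2 = torch.max(pred[:, 3], target[:, 3])
    c_area = (cx2 - cx1) * (cy2 - cy1) + eps

    giou = iou - (c_area - union) / c_area
    return (1.0 - giou).mean()
```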
I see xNets (https://arxiv.org/pdf/1908.04646v2.pdf) uses Adam at a 5E-5 LR in their results, so I ran another study of Adam results on the first epoch of COCO at 320. The results show the lowest validation loss and best mAP (0.202) at a 9E-5 Adam LR. This exceeds the 0.161 SGD mAP after the same 1 epoch. The validation losses were also lower with Adam:
I will try to train to 27 epochs with Adam at this LR next.

```bash
for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15  # iE-5 Adam LR
do
  python3 train.py --epochs 1 --weights weights/darknet53.conv.74 --batch-size 64 --accumulate 1 --img-size 320 --var ${i}
done
sudo shutdown
```
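For readers unfamiliar with the sweep above, the loop index i appears to stand for an LR of i x 1E-5 (per the "iE-5 Adam LR" comment). Below is a minimal, hypothetical sketch of how such a --var value could be mapped to the Adam LR; the actual wiring inside train.py may differ.

```python
import argparse
import torch.nn as nn
import torch.optim as optim

parser = argparse.ArgumentParser()
parser.add_argument('--var', type=float, default=9.0, help='sweep index i from the shell loop')
opt = parser.parse_args()

hyp = {'lr0': 1e-3, 'weight_decay': 5e-4}  # placeholder hyperparameters
hyp['lr0'] = opt.var * 1e-5                # i -> iE-5, e.g. i=9 gives 9E-5

model = nn.Linear(1, 1)                    # stand-in for the Darknet model
optimizer = optim.Adam(model.parameters(), lr=hyp['lr0'], weight_decay=hyp['weight_decay'])
print(f"Adam LR = {optimizer.param_groups[0]['lr']:.0e}")
```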
@xuefeicao yes, for both. Search train.py for weight_decay.
Got it, thanks!
@glenn-jocher Do these experimental results apply to this repo now? I see the optimizer is still SGD.
@nanhui69 yes, but I would recommend yolov5 for new projects.
@glenn-jocher Do you happen to know if YOLOv5 has the same issue with Adam performing better than the default SGD?
@danielcrane I don't know, but you can test Adam on your own training workflows by passing the --adam flag (make sure you reduce your LR accordingly in your hyp file); see Line 6 in c1f8dd9.
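As a rough illustration of the --adam switch mentioned above (not the repo's exact code), optimizer selection usually looks something like the following. The hyperparameter values here are placeholders, and the LR should be lowered when Adam is chosen.

```python
import argparse
import torch.nn as nn
import torch.optim as optim

parser = argparse.ArgumentParser()
parser.add_argument('--adam', action='store_true', help='use Adam instead of SGD')
opt = parser.parse_args()

hyp = {'lr0': 0.01, 'momentum': 0.937, 'weight_decay': 5e-4}  # illustrative values only
model = nn.Linear(1, 1)                                       # stand-in model

if opt.adam:
    # Remember to reduce hyp['lr0'] (e.g. toward ~1e-4 to 1e-3) when using Adam
    optimizer = optim.Adam(model.parameters(), lr=hyp['lr0'], betas=(hyp['momentum'], 0.999))
else:
    optimizer = optim.SGD(model.parameters(), lr=hyp['lr0'], momentum=hyp['momentum'],
                          weight_decay=hyp['weight_decay'], nesterov=True)
```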
@glenn-jocher Understood, thanks!
@danielcrane you're welcome! If you have any other questions, feel free to ask.
When developing the training code I found that SGD caused divergence very quickly at the default LR of 1e-4. Loss terms began to grow exponentially, becoming Inf within about 10 batches of starting training.
In contrast, Adam always seems to converge, which is why I use it as the default optimizer in train.py. I don't understand why Adam works and SGD does not, since darknet trains successfully with SGD. This is one of the key differences between darknet and this repo, so any insights into how to get SGD to converge would be appreciated. It might be that I simply don't have the proper learning rate (and scheduler) in place.
(See line 82 in train.py.)
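Tying the thread together, below is a minimal sketch of pairing SGD with a burn-in/warm-up schedule, which is the fix the discussion converged on. It assumes a base LR of 1e-3, a 1000-iteration burn-in, and the darknet default power of 4, and it is not the repo's exact training loop.

```python
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import LambdaLR

model = nn.Linear(1, 1)                     # stand-in for the Darknet model
base_lr, burn_in, power = 1e-3, 1000, 4

optimizer = optim.SGD(model.parameters(), lr=base_lr, momentum=0.9, nesterov=True)
# Ramp the LR from ~0 up to base_lr over the first burn_in iterations, then hold it
scheduler = LambdaLR(optimizer, lambda it: min(1.0, (it + 1) / burn_in) ** power)

for it in range(2000):                      # dummy loop standing in for training batches
    optimizer.step()                        # loss.backward() would normally precede this
    scheduler.step()
```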