local variable 'beta1' referenced before assignment #186
Note/Thoughts: Your error comes from deeper within PyTorch (I also had some version issues, or rather still have them. I found you currently can't run …
Right, I had figured it out. Very much appreciated!
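For readers hitting the same thing: the thread doesn't record the exact fix, but since the note above points at version issues, one option is a small hypothetical guard like the sketch below, which fails fast with a clear message instead of crashing inside scaler.step(optimizer). The 1.9 lower bound here is an assumption, not something the thread confirms.

```python
import torch

# Hypothetical guard (assumption: the AdamW 'beta1' UnboundLocalError affects
# older releases such as 1.8.x and is gone in later ones). Parse the
# major/minor out of torch.__version__, e.g. "1.8.1+cu111" -> (1, 8).
major, minor = (int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
if (major, minor) < (1, 9):
    raise RuntimeError(
        f"PyTorch {torch.__version__} can raise \"local variable 'beta1' "
        "referenced before assignment\" inside AdamW.step when a param group "
        "has no gradients; consider upgrading torch before training."
    )
```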
When I run: python train.py config/train_shakespeare_char.py, it runs into the error below. Please help.
Below is the optimizer I printed:
AdamW (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.99)
eps: 1e-08
lr: 0.001
weight_decay: 0.1
Parameter Group 1
amsgrad: False
betas: (0.9, 0.99)
eps: 1e-08
lr: 0.001
weight_decay: 0.0
)
----------------------- error message --------------------------
step 0: train loss 4.2874, val loss 4.2823
Traceback (most recent call last):
File "train.py", line 312, in <module>
scaler.step(optimizer)
File "/home/xx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 311, in step
return optimizer.step(*args, **kwargs)
File "/home/xx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/optim/optimizer.py", line 89, in wrapper
return func(*args, **kwargs)
File "/home/xx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/xx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/optim/adamw.py", line 117, in step
beta1,
UnboundLocalError: local variable 'beta1' referenced before assignment
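What seems to be going on (hedged, reconstructed from the traceback rather than stated in the thread): in older PyTorch releases such as 1.8.x, torch/optim/adamw.py only assigns beta1 inside the loop over a group's parameters, so a param group in which no parameter has a gradient reaches the final functional call with beta1 still unbound. A minimal sketch that reproduces the same error under that assumption, using the same two weight-decay groups as the printed optimizer:

```python
# Minimal repro sketch, assuming the cause is the older torch.optim.AdamW
# (seen in torch 1.8.x) that binds beta1 inside the per-parameter loop:
# a param group whose parameters all have grad=None never assigns beta1,
# and the later functional adamw(...) call raises the UnboundLocalError.
import torch

w = torch.nn.Parameter(torch.randn(3))
frozen = torch.nn.Parameter(torch.randn(3))  # never receives a gradient

optimizer = torch.optim.AdamW(
    [
        {"params": [w], "weight_decay": 0.1},
        {"params": [frozen], "weight_decay": 0.0},  # this group gets no grads
    ],
    lr=1e-3,
    betas=(0.9, 0.99),
)

w.sum().backward()  # only w.grad is populated; frozen.grad stays None
optimizer.step()    # on torch 1.8.x: UnboundLocalError for 'beta1'
```

If that is indeed the cause, newer PyTorch releases read betas from the group before iterating its parameters, so upgrading avoids the crash; alternatively, make sure every param group passed to AdamW actually contains parameters that receive gradients.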