
local variable 'beta1' referenced before assignment #186

Closed
kevindany opened this issue Mar 2, 2023 · 2 comments

Comments

@kevindany

When I run `python train.py config/train_shakespeare_char.py`, it runs into the error below. Please help.

Below is the optimizer I printed:

```
AdamW (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.99)
    eps: 1e-08
    lr: 0.001
    weight_decay: 0.1

Parameter Group 1
    amsgrad: False
    betas: (0.9, 0.99)
    eps: 1e-08
    lr: 0.001
    weight_decay: 0.0
)
```

----------------------- error message --------------------------

```
step 0: train loss 4.2874, val loss 4.2823
Traceback (most recent call last):
  File "train.py", line 312, in <module>
    scaler.step(optimizer)
  File "/home/xx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 311, in step
    return optimizer.step(*args, **kwargs)
  File "/home/xx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/optim/optimizer.py", line 89, in wrapper
    return func(*args, **kwargs)
  File "/home/xx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/xx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/optim/adamw.py", line 117, in step
    beta1,
UnboundLocalError: local variable 'beta1' referenced before assignment
```


Nikolaj-K commented Mar 2, 2023

Note/thoughts: your error comes from deep inside the torch packages. Googling shows it has been reported in other repos a few times, e.g. here, and I suspect it is a version issue.
That may or may not explain why it pops up for you and not for others.
It could be that the code won't hit that path if you tune some parameters, but that's just a guess and I wouldn't bet on it.

(I also had some version issues, or rather I still have them. I found that you currently can't just pip install the ML-related packages listed in the readme into your local environment, potentially upgrading your Python version along the way, and expect the repo to run out of the box. Some more work may be required to get it to that state. Using a virtual environment is possibly one way to go.)

@kevindany
Author

Right, I had figured it out. Much appreciated!

klei22 pushed a commit to klei22/nanoGPT that referenced this issue Jun 16, 2024
gkielian added a commit to gkielian/ReaLLMASIC_nanogpt that referenced this issue Sep 5, 2024