Are ZeRO CPU offload and gradient accumulation compatible? #671
I'm trying out @stas00's HuggingFace DeepSpeed integration, and it's super cool!

But I'm running into an error when I try to enable both CPU offload and gradient accumulation at the same time, and I'm not sure whether my problem is on the HuggingFace side, the DeepSpeed side, or (most likely) between my chair and keyboard. Since this post is in the DeepSpeed project, I'll leave out the HuggingFace specifics for now.
My training script will run just fine with either `cpu_offload=true` or `--gradient_accumulation_steps > 1`, but if I try using both, it throws the following:

I'm assuming it's because I haven't configured DeepSpeed or my optimizer correctly. But before I dig much deeper, I wanted to make sure that using both was supported. I haven't seen anything in the documentation that would indicate it wasn't.
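For concreteness, here is roughly the shape of the launch command, as a minimal sketch (the script name and the numbers are placeholders, not my real setup):

```bash
# Minimal sketch: train.py and the values are placeholders.
# --deepspeed points the HF Trainer at the DeepSpeed config file;
# combining --gradient_accumulation_steps > 1 with cpu_offload=true
# in that config is what triggers the error.
deepspeed train.py \
  --deepspeed ds_config.json \
  --gradient_accumulation_steps 4 \
  --per_device_train_batch_size 8 \
  --output_dir output
```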
@stas00, have you tried both simultaneously in your HuggingFace integration testing?

This is my DeepSpeed config JSON:
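A minimal sketch of that shape, assuming ZeRO stage 2 (where `cpu_offload` lives), with illustrative values in place of the exact settings:

```json
{
  "zero_optimization": {
    "stage": 2,
    "cpu_offload": true,
    "overlap_comm": true,
    "contiguous_gradients": true
  },
  "fp16": {
    "enabled": true
  }
}
```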
Comments

I haven't tried that combination yet, and when I do, I get the same error as you. Let me investigate to ensure it's not something missing on our side.

Oh, OK, I didn't follow the instructions, so the problem is in my code. Try this patch:

Thanks! I'll test this later this afternoon.

huggingface/transformers#9622 should fix it, plus an added test.

Awesome!