Trainer is always using IPEX, even when use_ipex=False #24871
Comments
cc @muellerzr (right?)

This is a problem that should be solved in Accelerate; I'll work on a PR for this today. Thanks for the flag! Edit: actually this can be solved in the training args, PR coming shortly.

@dmsuehir can you try running again with

@muellerzr Yes, the fix in your branch works. Thanks!

@muellerzr By the way, I think
System Info

`transformers` version: 4.32.0.dev0

Who can help?

@sgugger
Information

Tasks

An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
Steps to reproduce the behavior:

1. With `intel-extension-for-pytorch==2.0.100` installed in my environment, run run_glue.py without `use_ipex` (so it should default to `False`). This run reports 98.191 samples/second.
2. Run the same command again, this time with `--use_ipex`. Note that I am also deleting my output directory between runs. This run reports roughly the same `train_samples_per_second` as step 1, even though step 1 did not pass the `use_ipex` arg.
3. It appears that accelerate is always using IPEX if it's installed. Digging deeper into this, I found that accelerate would only not use IPEX if `ACCELERATE_USE_IPEX` gets set to False/0. To confirm this, I manually set `ACCELERATE_USE_IPEX=0` and then ran the same script/args from step 1. This time the reported `train_samples_per_second` is different, which indicates that IPEX has actually been turned off now that the env var was used (a sketch of this workaround follows the list).
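For anyone who needs the behavior from step 3 as a stopgap, a minimal sketch of the workaround — this assumes, as described above, that Accelerate reads `ACCELERATE_USE_IPEX` when it initializes, so the variable has to be set before the `Trainer` is constructed (or the script can be launched with `ACCELERATE_USE_IPEX=0` prefixed on the shell command):

```python
import os

# Workaround sketch: explicitly tell Accelerate not to use IPEX.
# Set this before the Trainer (and its Accelerator) is constructed,
# since Accelerate only skips IPEX when ACCELERATE_USE_IPEX is False/0.
os.environ["ACCELERATE_USE_IPEX"] = "0"
```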
Expected behavior

When `use_ipex` is not given or is set to `False`, IPEX optimize should not get called.

If it's agreed that this is in fact a bug, I would be happy to work on a PR to fix it. I saw that other accelerate env vars are already getting set from `training_args.py`, so a similar approach could work here (a sketch follows).
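To make the proposal concrete, here is a minimal, hypothetical sketch (not the actual `training_args.py` code or the merged fix): the training arguments could export an explicit value for the env var Accelerate consults, so that `use_ipex=False` actively disables IPEX instead of leaving the variable unset.

```python
import os

def _export_ipex_flag(use_ipex: bool) -> None:
    # Hypothetical helper: mirror the use_ipex training argument into the
    # environment variable Accelerate reads, so that False disables IPEX
    # explicitly instead of leaving ACCELERATE_USE_IPEX undefined.
    os.environ["ACCELERATE_USE_IPEX"] = "true" if use_ipex else "false"

# Example: a run launched without --use_ipex (i.e. the False default)
_export_ipex_flag(False)
assert os.environ["ACCELERATE_USE_IPEX"] == "false"
```

This follows the same pattern the report notes is already used for other Accelerate env vars in `training_args.py`.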