thread 'main' panicked at 'called Result::unwrap() on an Err value: Torch("Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. #2

Error Message:

    thread 'main' panicked at 'called Result::unwrap() on an Err value: Torch("Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend.

Following the instructions in README.md, I might be missing some dependency, but I would need guidance to understand what it is.

OS: Windows 11
CPU: AMD Ryzen 9 7950X
GPU: Nvidia GeForce RTX 4090
RAM: 64GB

Comments
Can you check if the CUDA version installed matches the environment variable you set? This is what you set: TORCH_CUDA_VERSION='cu113'. To check:
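The check command itself is missing from this comment; presumably it was something along the lines of the following (PowerShell, assuming the CUDA toolkit and NVIDIA driver are installed and on PATH):

    # Version of the installed CUDA toolkit (what nvcc was built against)
    nvcc --version
    # Driver-side CUDA version, shown in the header of the nvidia-smi table
    nvidia-smi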

I don't know how to tell what to use for the value from this output. I used exactly:

@Cody-Duncan that means you're using cu122 (12.2). You'd set:

However, please note that only CUDA 11.7 and CUDA 11.8 are supported by the stable version of Torch, and CUDA 12.1 by the nightly version, but tch-rs supports only the stable versions. If you decide to downgrade to CUDA 11.7, you'll set your env variable to:
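The exact assignment was cut off above; for a CUDA 11.7 toolkit it would presumably use the same variable that appears elsewhere in this thread, e.g. in PowerShell:

    # Assumed value for a CUDA 11.7 toolkit; follows the cuXXX naming used above
    $Env:TORCH_CUDA_VERSION = 'cu117'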

For the record, I tried:

(Note that SDv1-4.bin has already been downloaded and placed in the project directory.)

Downloaded and installed CUDA Toolkit 11.8 from https://developer.nvidia.com/cuda-11-8-0-download-archive.

Set environment variables to match:

    $env:CUDA_PATH = 'D:\apps\NVIDIA GPU Computing Toolkit\CUDA\v11.8'
    $env:CUDA_PATH_V11_8 = 'D:\apps\NVIDIA GPU Computing Toolkit\CUDA\v11.8'
    $env:Path += ';D:\apps\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin'

Installed the LibTorch binaries:

    $env:LIBTORCH = 'D:\repo\stable_diffusion\deps\libtorch'
    $env:Path += ';D:\repo\stable_diffusion\deps\libtorch\lib'

Set the Torch CUDA version environment variable:

    $Env:TORCH_CUDA_VERSION = "cu118"

Ran the sample command:

    cargo run --release --bin sample burn SDv1-4 7.5 20 "An ancient mossy stone." img

And it produces two image outputs. It seems that installing the compatible CUDA toolkit (11.8) and the PyTorch binaries has worked.
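A quick sanity check after the steps above, before running the sample, is to confirm the variables are visible in the current shell and that a CUDA build of libtorch is actually where LIBTORCH points. This is only a sketch: the paths are the ones from this comment, and torch_cuda.dll is the CUDA library name the Windows CUDA builds of libtorch ship (the name may vary by version):

    # Should print cu118 and the libtorch folder set above
    $Env:TORCH_CUDA_VERSION
    $Env:LIBTORCH
    # Should print True for a CUDA-enabled libtorch; False usually means a CPU-only build was downloaded
    Test-Path "$Env:LIBTORCH\lib\torch_cuda.dll"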

For anyone wondering about speed:

    Measure-Command -Expression { cargo run --release --bin sample burn SDv1-4 7.5 20 "An ancient mossy stone." img } | Select-Object TotalMilliseconds

    TotalMilliseconds
    -----------------
    13209.8555

(13.209 seconds)

OS: Windows 11

It should be noted that the time measured includes the model loading period. When I have more time, I'll see about measuring the inference speed compared to the Python version.
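In the meantime, one small refinement (a sketch, assuming the binary built above is at .\target\release\sample.exe) is to time the already-built binary directly, which removes cargo's own startup and compile check from the number; model loading time is still included:

    # Time the prebuilt binary rather than going through cargo
    Measure-Command -Expression { .\target\release\sample.exe burn SDv1-4 7.5 20 "An ancient mossy stone." img } | Select-Object TotalMilliseconds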

I'd call this issue resolved if there were an issue filed (even if not yet completed) for comprehensive documentation of setup and troubleshooting. Until that's road-mapped for resolution, this issue will keep recurring.