
thread 'main' panicked at 'called Result::unwrap() on an Err value: Torch("Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. #2

Open
Cody-Duncan opened this issue Aug 6, 2023 · 9 comments

Comments

@Cody-Duncan

Error Message:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Torch("Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
CPU: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\build\\build\\aten\\src\\ATen\\RegisterCPU.cpp:31034 [kernel]\nMeta: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\build\\build\\aten\\src\\ATen\\RegisterMeta.cpp:26824 [kernel]\nQuantizedCPU: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\build\\build\\aten\\src\\ATen\\RegisterQuantizedCPU.cpp:929 [kernel]\nBackendSelect: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\build\\build\\aten\\src\\ATen\\RegisterBackendSelect.cpp:726 [kernel]\nPython: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\core\\PythonFallbackKernel.cpp:144 [backend fallback]\nFuncTorchDynamicLayerBackMode: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\functorch\\DynamicLayer.cpp:491 [backend fallback]\nFunctionalize: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\FunctionalizeFallbackKernel.cpp:280 [backend fallback]\nNamed: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\core\\NamedRegistrations.cpp:7 [backend fallback]\nConjugate: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\ConjugateFallback.cpp:21 [kernel]\nNegative: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\NegateFallback.cpp:23 [kernel]\nZeroTensor: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\ZeroTensorFallback.cpp:90 [kernel]\nADInplaceOrView: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\core\\VariableFallbackKernel.cpp:63 [backend fallback]\nAutogradOther: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradCPU: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradCUDA: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradHIP: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradXLA: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradMPS: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradIPU: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradXPU: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradHPU: registered at 
C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradVE: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradLazy: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradMeta: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradMTIA: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradPrivateUse1: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradPrivateUse2: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradPrivateUse3: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradNestedTensor: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nTracer: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\TraceType_2.cpp:16726 [kernel]\nAutocastCPU: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\autocast_mode.cpp:487 [backend fallback]\nAutocastCUDA: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\autocast_mode.cpp:354 [backend fallback]\nFuncTorchBatched: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\functorch\\LegacyBatchingRegistrations.cpp:815 [backend fallback]\nFuncTorchVmapMode: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\functorch\\VmapModeRegistrations.cpp:28 [backend fallback]\nBatched: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\LegacyBatchingRegistrations.cpp:1073 [backend fallback]\nVmapMode: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\VmapModeRegistrations.cpp:33 [backend fallback]\nFuncTorchGradWrapper: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\functorch\\TensorWrapper.cpp:210 [backend fallback]\nPythonTLSSnapshot: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\core\\PythonFallbackKernel.cpp:152 [backend fallback]\nFuncTorchDynamicLayerFrontMode: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\functorch\\DynamicLayer.cpp:487 [backend fallback]\nPythonDispatcher: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\core\\PythonFallbackKernel.cpp:148 [backend 
fallback]\n\nException raised from reportError at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\core\\dispatch\\OperatorEntry.cpp:548 (most recent call first):\n00007FFD716ED24200007FFD716ED1E0 c10.dll!c10::Error::Error [<unknown file> @ <unknown line number>]\n00007FFD716B481500007FFD716B47A0 c10.dll!c10::NotImplementedError::NotImplementedError [<unknown file> @ <unknown line number>]\n00007FFD1585640800007FFD15856220 torch_cpu.dll!c10::impl::OperatorEntry::reportError [<unknown file> @ <unknown line number>]\n00007FFD1569607400007FFD15696020 torch_cpu.dll!c10::impl::OperatorEntry::lookup [<unknown file> @ <unknown line number>]\n00007FFD161AD9EA00007FFD1613C8B0 torch_cpu.dll!at::_ops::xlogy__Tensor::redispatch [<unknown file> @ <unknown line number>]\n00007FFD1626F90E00007FFD1626F820 torch_cpu.dll!at::_ops::empty_strided::redispatch [<unknown file> @ <unknown line number>]\n00007FFD164B61FF00007FFD1649ABA0 torch_cpu.dll!at::_ops::view_as_real::redispatch [<unknown file> @ <unknown line number>]\n00007FFD164B368800007FFD1649ABA0 torch_cpu.dll!at::_ops::view_as_real::redispatch [<unknown file> @ <unknown line number>]\n00007FFD15E7644300007FFD15E56E40 torch_cpu.dll!at::TensorMaker::make_tensor [<unknown file> @ <unknown line number>]\n00007FFD161EA0EE00007FFD161E9EC0 torch_cpu.dll!at::_ops::empty_strided::call [<unknown file> @ <unknown line number>]\n00007FFD1570BC7600007FFD1570B790 torch_cpu.dll!at::TensorIteratorConfig::declare_static_shape [<unknown file> @ <unknown line number>]\n00007FFD15B7B10E00007FFD15B7AC70 torch_cpu.dll!at::native::_to_copy [<unknown file> @ <unknown line number>]\n00007FFD1670F38C00007FFD1670E0B0 torch_cpu.dll!at::compositeexplicitautograd::view_copy_symint_outf [<unknown file> @ <unknown line number>]\n00007FFD166EBA8200007FFD166A8730 torch_cpu.dll!at::compositeexplicitautograd::bucketize_outf [<unknown file> @ <unknown line number>]\n00007FFD15E7607800007FFD15E56E40 torch_cpu.dll!at::TensorMaker::make_tensor [<unknown file> @ <unknown line number>]\n00007FFD15EF598200007FFD15E56E40 torch_cpu.dll!at::TensorMaker::make_tensor [<unknown file> @ <unknown line number>]\n00007FFD15FB454C00007FFD15FB4470 torch_cpu.dll!at::_ops::_to_copy::redispatch [<unknown file> @ <unknown line number>]\n00007FFD164AAA5200007FFD1649ABA0 torch_cpu.dll!at::_ops::view_as_real::redispatch [<unknown file> @ <unknown line number>]\n00007FFD164B301200007FFD1649ABA0 torch_cpu.dll!at::_ops::view_as_real::redispatch [<unknown file> @ <unknown line number>]\n00007FFD15E7607800007FFD15E56E40 torch_cpu.dll!at::TensorMaker::make_tensor [<unknown file> @ <unknown line number>]\n00007FFD15EF598200007FFD15E56E40 torch_cpu.dll!at::TensorMaker::make_tensor [<unknown file> @ <unknown line number>]\n00007FFD15FB454C00007FFD15FB4470 torch_cpu.dll!at::_ops::_to_copy::redispatch [<unknown file> @ <unknown line number>]\n00007FFD174D514100007FFD174AC610 torch_cpu.dll!torch::autograd::NotImplemented::~NotImplemented [<unknown file> @ <unknown line number>]\n00007FFD174FB66100007FFD174DE8E0 torch_cpu.dll!torch::autograd::GraphRoot::apply [<unknown file> @ <unknown line number>]\n00007FFD15E7607800007FFD15E56E40 torch_cpu.dll!at::TensorMaker::make_tensor [<unknown file> @ <unknown line number>]\n00007FFD15F28B6D00007FFD15F28920 torch_cpu.dll!at::_ops::_to_copy::call [<unknown file> @ <unknown line number>]\n00007FFD15B8160000007FFD15B810D0 torch_cpu.dll!at::native::to_dense_backward [<unknown file> @ <unknown line number>]\n00007FFD15B80ED900007FFD15B80DB0 
torch_cpu.dll!at::native::to [<unknown file> @ <unknown line number>]\n00007FFD168C0D1800007FFD168BA880 torch_cpu.dll!at::compositeimplicitautograd::where [<unknown file> @ <unknown line number>]\n00007FFD168A9F1E00007FFD16860BE0 torch_cpu.dll!at::compositeimplicitautograd::broadcast_to_symint [<unknown file> @ <unknown line number>]\n00007FFD15FF9E7700007FFD15FE7940 torch_cpu.dll!at::_ops::zeros_out::redispatch [<unknown file> @ <unknown line number>]\n00007FFD160DA74600007FFD160DA4D0 torch_cpu.dll!at::_ops::to_dtype_layout::call [<unknown file> @ <unknown line number>]\n00007FFD1566335F00007FFD156631B0 torch_cpu.dll!at::Tensor::to [<unknown file> @ <unknown line number>]\n00007FF7E778D95000007FF7E778D8C0 sample.exe!atg_to [<unknown file> @ <unknown line number>]\n00007FF7E75FAE5C00007FF7E75FAE00 sample.exe!ZN3tch8wrappers16tensor_generated47_$LT$impl$u20$tch..wrappers..tensor..Tensor$GT$2to17h95f342b9efe0686bE [<unknown file> @ <unknown line number>]\n00007FF7E758375B00007FF7E7583720 sample.exe!ZN8burn_tch3ops10int_tensor165_$LT$impl$u20$burn_tensor..tensor..ops..int_tensor..IntTensorOps$LT$burn_tch..backend..TchBackend$LT$E$GT$$GT$$u20$for$u20$burn_tch..backend..TchBackend$LT$E$GT$$GT$13int_to_device17he788951f2ea20c64E [<unknown file> @ <unknown line number>]\n00007FF7E756C1D300007FF7E756C160 sample.exe!ZN126_$LT$stablediffusion..model..stablediffusion..StableDiffusion$LT$B$GT$$u20$as$u20$burn_core..module..base..Module$LT$B$GT$$GT$3map17h756fb4e94c93959aE [<unknown file> @ <unknown line number>]\n00007FF7E75CCA0C00007FF7E75CC5F0 sample.exe!ZN8burn_tch3ops4base15TchOps$LT$E$GT$8mean_dim17hf16dedd15676e654E [<unknown file> @ <unknown line number>]\n00007FF7E758FF8600007FF7E758FF80 sample.exe!ZN3std10sys_common9backtrace28__rust_begin_short_backtrace17h09841e2f04a0c80cE [<unknown file> @ <unknown line number>]\n00007FF7E75CE5AC00007FF7E75CE5A0 sample.exe!ZN3std2rt10lang_start28_$u7b$$u7b$closure$u7d$$u7d$17hdc39dbb179ca69ffE.llvm.15969601819475355906 [<unknown file> @ <unknown line number>]\n00007FF7E775AAA800007FF7E775A9F0 sample.exe!std::rt::lang_start_internal [/rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library\\std\\src\\rt.rs @ 148]\n00007FF7E75CD7AC00007FF7E75CD780 sample.exe!main [<unknown file> @ <unknown line number>]\n00007FF7E77D801000007FF7E77D7F04 sample.exe!__scrt_common_main_seh [D:\\a\\_work\\1\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl @ 288]\n00007FFDB39226AD00007FFDB3922690 KERNEL32.DLL!BaseThreadInitThunk [<unknown file> @ <unknown line number>]\n00007FFDB4A2AA6800007FFDB4A2AA40 ntdll.dll!RtlUserThreadStart [<unknown file> @ <unknown line number>]\n")', 
D:\rust\.cargo\registry\src\index.crates.io-6f17d22bba15001f\tch-0.13.0\src\wrappers\tensor_generated.rs:17243:27
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
error: process didn't exit successfully: `target\release\sample.exe burn SDv1-4 7.5 20 "An ancient mossy stone." img` (exit code: 101)

Following the instructions in README.md:

  1. Downloaded the .bin file.
  2. Set the environment variable:
     $env:TORCH_CUDA_VERSION='cu113'
  3. Ran the sample command:
     cargo run --release --bin sample burn SDv1-4 7.5 20 "An ancient mossy stone." img
  4. Encountered the error message above.

I might be missing some dependency, but I would need guidance to understand what it is; the exact commands are consolidated just below.
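
For reference, the exact commands from steps 2 and 3, run together (assuming the repository root as the working directory, with SDv1-4.bin already placed there):

$env:TORCH_CUDA_VERSION = 'cu113'
cargo run --release --bin sample burn SDv1-4 7.5 20 "An ancient mossy stone." img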

OS: Windows 11
CPU: AMD Ryzen 9 7950X
GPU: Nvidia GeForce RTX 4090
RAM: 64GB

@Cody-Duncan
Author

stack backtrace:
   0: std::panicking::begin_panic_handler
             at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library\std\src\panicking.rs:593
   1: core::panicking::panic_fmt
             at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library\core\src\panicking.rs:67
   2: core::result::unwrap_failed
             at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library\core\src\result.rs:1651
   3: tch::wrappers::tensor_generated::<impl tch::wrappers::tensor::Tensor>::to
   4: burn_tch::ops::int_tensor::<impl burn_tensor::tensor::ops::int_tensor::IntTensorOps<burn_tch::backend::TchBackend<E>> for burn_tch::backend::TchBackend<E>>::int_to_device
   5: <stablediffusion::model::stablediffusion::StableDiffusion<B> as burn_core::module::base::Module<B>>::map
   6: burn_tch::ops::base::TchOps<E>::mean_dim
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

@antimora

antimora commented Aug 6, 2023

Can you check if the CUDA version you have installed matches the environment variable you set?

This is what you set: TORCH_CUDA_VERSION='cu113'

To check:
nvcc --version
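
For context (an assumption on my part, based on the torch-sys build output later in this thread): the build script appears to use TORCH_CUDA_VERSION to pick which prebuilt libtorch archive it downloads, so the value has to name a CUDA version that both matches your installed toolkit and has a published libtorch build. A minimal PowerShell sketch:

# Hypothetical example: nvcc reporting "release 11.3" would correspond to
$env:TORCH_CUDA_VERSION = 'cu113'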

@Cody-Duncan
Author

Cody-Duncan commented Aug 7, 2023

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jul_11_03:10:21_Pacific_Daylight_Time_2023
Cuda compilation tools, release 12.2, V12.2.128
Build cuda_12.2.r12.2/compiler.33053471_0

I don't know how to tell from this output what value I should use.

I used exactly $env:TORCH_CUDA_VERSION='cu113' in PowerShell. Using cu113 without quotes is a syntax error in PowerShell; I assumed other shells would interpret it as a string.

@antimora

antimora commented Aug 7, 2023

@Cody-Duncan that means you're using cu122 (12.2). You'd set:

$env:TORCH_CUDA_VERSION='cu122'

However, please note that CUDA 11.7 and CUDA 11.8 are supported for the stable version of Torch, and CUDA 12.1 for the nightly version, but tch-rs only supports the stable versions.

If you decide to downgrade to CUDA 11.7, you'll set your env variable to: $env:TORCH_CUDA_VERSION='cu117'
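
As a quick reference (based on the mapping above and on which prebuilt stable libtorch 2.0.x archives are actually published), the PowerShell values would be:

$env:TORCH_CUDA_VERSION = 'cu117'   # CUDA 11.7 -- stable libtorch build available
$env:TORCH_CUDA_VERSION = 'cu118'   # CUDA 11.8 -- stable libtorch build available
# CUDA 12.2 ('cu122') has no prebuilt stable libtorch 2.0.x archive, so the download fails (see below)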

@Cody-Duncan
Author

For the record, I tried $env:TORCH_CUDA_VERSION='cu122' and it failed to build.

error: failed to run custom build command for `torch-sys v0.13.0`

Caused by:
  process didn't exit successfully: `D:\repo\stable_diffusion\stable-diffusion-burn\target\release\build\torch-sys-befb20122f33af78\build-script-build` (exit code: 1)
  --- stdout
  cargo:rerun-if-env-changed=LIBTORCH_USE_PYTORCH
  cargo:rerun-if-env-changed=LIBTORCH
  cargo:rerun-if-env-changed=TORCH_CUDA_VERSION

  --- stderr
  Error: https://download.pytorch.org/libtorch/cu122/libtorch-win-shared-with-deps-2.0.0.zip: status code 403

@Cody-Duncan
Author

Cody-Duncan commented Aug 7, 2023

(Note that SDv1-4.bin has been downloaded and placed in the project directory already)

Downloaded and installed CUDA Toolkit 11.8 from https://developer.nvidia.com/cuda-11-8-0-download-archive.

Set environment variables to match:

$env:CUDA_PATH = 'D:\apps\NVIDIA GPU Computing Toolkit\CUDA\v11.8'
$env:CUDA_PATH_V11_8 = 'D:\apps\NVIDIA GPU Computing Toolkit\CUDA\v11.8'
$env:Path += ';D:\apps\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin'

Installed the LibTorch binaries:
Downloaded https://download.pytorch.org/libtorch/cu118/libtorch-win-shared-with-deps-2.0.1%2Bcu118.zip
Unzipped it to D:\repo\stable_diffusion\deps\libtorch
Set environment variables:

$env:LIBTORCH = 'D:\repo\stable_diffusion\deps\libtorch'
$env:Path += ';D:\repo\stable_diffusion\deps\libtorch\lib'

Set the Torch Cuda Version environment variable

$Env:TORCH_CUDA_VERSION = "cu118"

Run the sample command

cargo run --release --bin sample burn SDv1-4 7.5 20 "An ancient mossy stone." img

And it produces two image outputs, img0.png and img1.png.

It seems that installing the compatible CUDA Toolkit (11.8) and the matching LibTorch binaries has worked.
My problem is resolved. These setup requirements should be called out in the README.md; the full sequence is consolidated below.
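
Putting the whole working setup in one place, as a consolidated PowerShell sketch of the steps above (the D:\ paths are simply where I put things; adjust for your machine):

# CUDA Toolkit 11.8
$env:CUDA_PATH = 'D:\apps\NVIDIA GPU Computing Toolkit\CUDA\v11.8'
$env:CUDA_PATH_V11_8 = 'D:\apps\NVIDIA GPU Computing Toolkit\CUDA\v11.8'
$env:Path += ';D:\apps\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin'

# LibTorch 2.0.1+cu118, unzipped to deps\libtorch
$env:LIBTORCH = 'D:\repo\stable_diffusion\deps\libtorch'
$env:Path += ';D:\repo\stable_diffusion\deps\libtorch\lib'

# Tell torch-sys which CUDA build of libtorch to target
$env:TORCH_CUDA_VERSION = 'cu118'

# Run the sample (SDv1-4.bin must already be in the project directory)
cargo run --release --bin sample burn SDv1-4 7.5 20 "An ancient mossy stone." img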

@Cody-Duncan
Author

Cody-Duncan commented Aug 7, 2023

For anyone wondering about speed

Measure-Command -Expression { cargo run --release --bin sample burn SDv1-4 7.5 20 "An ancient mossy stone." img } | Select-Object TotalMilliseconds

TotalMilliseconds
-----------------
       13209.8555 (13.209 seconds)

OS: Windows 11
CPU: AMD Ryzen 9 7950X (running at approx. 5.45 GHz)
GPU: Nvidia GeForce RTX 4090 (running at approx. 2800 MHz)
RAM: 64GB

@Gadersd
Owner

Gadersd commented Aug 7, 2023

It should be noted that the time measurement includes the model loading period. When I have more time I'll see about measuring the inference speed compared to the Python version.

@Cody-Duncan
Author

Cody-Duncan commented Aug 7, 2023

I'd call this issue resolved if there were an issue filed (not yet completed) for comprehensive documentation on setup and troubleshooting. Until that's on the roadmap, this issue will keep recurring.
