
Unable to use onnxruntime-gpu package when multiple versions of CUDA are installed #6433

Closed

ivanst0 opened this issue Jan 25, 2021 · 0 comments · Fixed by #6436

ivanst0 (Member) commented Jan 25, 2021

Describe the bug
The PATH and CUDA_PATH environment variables point to the most recently installed version of CUDA (usually 11.x), while the onnxruntime-gpu package from PyPI requires CUDA 10.2. The only workaround is to manually set the PATH/CUDA_PATH environment variables before using the package (and restore them afterwards).

Having to reconfigure environment variables before and after each use of onnxruntime-gpu is quite inconvenient, and dev machines very often have multiple versions of CUDA installed.
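
For reference, a minimal sketch of the manual workaround described above (the CUDA 10.2 install path is an assumption; adjust for your machine):

```python
import os

# Hypothetical CUDA 10.2 install location; adjust for your machine.
CUDA_10_2 = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2"

# Save the current values so they can be restored afterwards.
old_path = os.environ.get("PATH", "")
old_cuda_path = os.environ.get("CUDA_PATH")

# Point both variables at CUDA 10.2 before importing onnxruntime.
os.environ["CUDA_PATH"] = CUDA_10_2
os.environ["PATH"] = os.path.join(CUDA_10_2, "bin") + os.pathsep + old_path

import onnxruntime  # noqa: E402

# ... run inference ...

# Restore the original environment.
os.environ["PATH"] = old_path
if old_cuda_path is None:
    os.environ.pop("CUDA_PATH", None)
else:
    os.environ["CUDA_PATH"] = old_cuda_path
```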

System information

  • OS Platform and Distribution: Windows 10
  • ONNX Runtime installed from: PyPI
  • ONNX Runtime version: 1.6.0
  • Python version: 3.6, 3.7, 3.8, 3.9
  • CUDA/cuDNN version: multiple

To Reproduce

  1. Install CUDA Toolkit 10.2.
  2. Install CUDA Toolkit 11.1.
  3. Install onnxruntime-gpu package.
  4. Try to import onnxruntime.

An ImportError is reported.
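
A minimal repro of step 4 (assuming CUDA 11.1 was installed last, so PATH and CUDA_PATH point at it):

```python
# With PATH/CUDA_PATH pointing at CUDA 11.1, this raises ImportError,
# because the CUDA 10.2 DLLs that onnxruntime-gpu 1.6.0 was built
# against cannot be located.
import onnxruntime

print(onnxruntime.get_device())  # never reached
```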

Additional context
The onnxruntime-gpu package for Python 3.6 and 3.7 relies on the PATH environment variable to locate the CUDA/cuDNN DLLs (the first directory in PATH containing the target DLL wins), while on Python 3.8 and 3.9 it uses the CUDA_PATH variable for the same purpose.
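
For context, a simplified sketch of the two lookup strategies (register_cuda_dll_dir is a hypothetical helper, not onnxruntime's actual loader code). On Python 3.8+, Windows no longer searches PATH for a module's dependent DLLs, which is why CUDA_PATH is used there:

```python
import os
import sys

def register_cuda_dll_dir():
    """Hypothetical sketch of how CUDA DLLs get resolved per Python version."""
    if sys.version_info >= (3, 8):
        # Python 3.8/3.9: PATH is ignored for dependent DLLs, so the
        # directory is derived from CUDA_PATH and registered explicitly.
        cuda_path = os.environ.get("CUDA_PATH")
        if cuda_path:
            os.add_dll_directory(os.path.join(cuda_path, "bin"))
    else:
        # Python 3.6/3.7: nothing to register; the OS loader walks PATH
        # and uses the first directory containing the target DLL.
        pass
```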
