About NVIDIA Jetson TX1/TX2/Nano/Xavier/Orin Builds #23113
Hi @mfatih7, thanks for the report. The statement that CUDA 12.x is only available on Orin is not precise; I will update it later. The background context for that line is that JetPack 5.x only supports up to TensorRT 8.5 (which works with CUDA up to 11.8). Apart from that, the ORT 1.18 whl is okay to use if you install gcc 11 on Xavier.
Hello @yf711, thank you for the answer. Using your ORT 1.18 .whl file, we added ORT to our Python 3.8 environment and observed that inference runs successfully. However, the wheel only ships the Python bindings, so we need to build ORT from source to obtain the C++ libraries.
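For reference, a minimal sketch of the kind of smoke test described above; the wheel filename is illustrative and should be replaced with the one actually downloaded from the Jetson Zoo:

```sh
# Install the prebuilt wheel into a fresh Python 3.8 venv and smoke-test it.
# The wheel filename below is illustrative; use the file downloaded from the Jetson Zoo.
python3.8 -m venv ort-env
source ort-env/bin/activate
pip install onnxruntime_gpu-1.18.0-cp38-cp38-linux_aarch64.whl

# Confirm the package imports and lists the expected execution providers.
python -c "import onnxruntime as ort; print(ort.__version__, ort.get_available_providers())"
```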
Can you provide a .whl file that is built with gcc 9 and compatible with CUDA 12.2? Can onnxruntime 1.16.3 be built with gcc 9 and run with CUDA 12.2?
please try https://github.com/yf711/onnxruntime-gpu-jetpack/tree/main/jetpack_5_cuda_12_2 |
I followed the build command from https://onnxruntime.ai/docs/build/eps.html#nvidia-jetson-tx1tx2nanoxavierorin, which works on my Jetson devices. You can try that first and adjust the build parameters against your own build.sh if needed.
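For convenience, a sketch of the documented Jetson build invocation; the flags and paths below assume a standard JetPack layout and should be verified against the linked page for your JetPack release:

```sh
# Sketch of the documented Jetson build command (verify flags against the linked docs;
# the CUDA/cuDNN/TensorRT paths assume a default JetPack install and may differ).
./build.sh --config Release --update --build --parallel --build_wheel \
    --use_tensorrt \
    --cuda_home /usr/local/cuda \
    --cudnn_home /usr/lib/aarch64-linux-gnu \
    --tensorrt_home /usr/lib/aarch64-linux-gnu
```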
### Description

- Add more detail to instructions and build tips

Preview: https://yf711.github.io/onnxruntime/docs/build/eps.html#nvidia-jetson-tx1tx2nanoxavierorin

### Motivation and Context

Per #23113, to make the docs more accurate.
Hello,

In https://onnxruntime.ai/docs/build/eps.html#nvidia-jetson-tx1tx2nanoxavierorin it is written that:

> CUDA 12.x is only available to Jetson Orin and newer series (CUDA compute capability >= 8.7)
When I look at the "The following table shows the CUDA UMD and CUDA Toolkit version compatibility on NVIDIA JetPack 5.x release" section in https://docs.nvidia.com/cuda/cuda-for-tegra-appnote/index.html#upgradable-package-for-jetson, I do not observe such a constraint.
Moreover, when I try to install CUDA 11.8 on an NVIDIA Xavier dev board (compute capability 7.2) with JetPack 5.1.2 using the install commands (not shown here), it automatically installs CUDA 12.2 instead.
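One generic way to confirm which toolkit actually landed on the device (not specific to this issue, but useful for reproducing the observation above):

```sh
# List the CUDA toolkit directories that are installed.
ls -d /usr/local/cuda-*

# Show the version of the toolkit the default symlink points at.
/usr/local/cuda/bin/nvcc --version

# Cross-check against the apt package database.
apt list --installed 2>/dev/null | grep -i cuda-toolkit
```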
This is not consistent with the statement:

> CUDA 12.x is only available to Jetson Orin and newer series (CUDA compute capability >= 8.7)
When I explore https://elinux.org/Jetson_Zoo#ONNX_Runtime, I observe that for JetPack 5.1.2 (Python 3.8) onnxruntime 1.18.0 is available.
But after installing the package from the .whl file into a Python 3.8 venv, we get library errors at runtime regarding the GLIBCXX versions. Is this .whl file OK, or should we build onnxruntime from source?
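For diagnosing such errors, a common check is to compare the GLIBCXX symbol versions provided by the system libstdc++ against those required by the wheel's native library; the paths below assume a default JetPack aarch64 layout and the venv from earlier, so treat them as illustrative:

```sh
# List the newest GLIBCXX versions the system libstdc++ provides.
strings /usr/lib/aarch64-linux-gnu/libstdc++.so.6 | grep '^GLIBCXX' | sort -V | tail

# Show which GLIBCXX versions the onnxruntime native module requires
# (venv path is illustrative).
objdump -T ort-env/lib/python3.8/site-packages/onnxruntime/capi/*.so | grep GLIBCXX | sort -u
```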
If we need to build from source, the default gcc on JetPack 5.1.2 is not enough; we need gcc 11.
I am unsure whether using gcc 11 on JetPack 5.1.2 is safe.
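If gcc 11 is needed, one commonly used route on the Ubuntu 20.04 base of JetPack 5.x is the ubuntu-toolchain-r PPA; a sketch under that assumption follows, noting that whether mixing gcc 11 with JetPack's system libraries is fully safe is exactly the open question here:

```sh
# Install gcc/g++ 11 on JetPack 5.x (Ubuntu 20.04 base) from the toolchain PPA.
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install -y gcc-11 g++-11

# Point the ORT build at the newer compilers without changing the system default.
export CC=/usr/bin/gcc-11
export CXX=/usr/bin/g++-11
```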