
About NVIDIA Jetson TX1/TX2/Nano/Xavier/Orin Builds #23113

Open
mfatih7 opened this issue Dec 15, 2024 · 5 comments
Labels
platform:jetson issues related to the NVIDIA Jetson platform

Comments

mfatih7 commented Dec 15, 2024

Hello

In https://onnxruntime.ai/docs/build/eps.html#nvidia-jetson-tx1tx2nanoxavierorin it is stated that CUDA 12.x is only available on the Jetson Orin and newer series (CUDA compute capability >= 8.7).

However, when I look at the table "CUDA UMD and CUDA Toolkit version compatibility on NVIDIA JetPack 5.x release" in https://docs.nvidia.com/cuda/cuda-for-tegra-appnote/index.html#upgradable-package-for-jetson, I do not observe such a constraint.

Moreover, when I try to install CUDA 11.8 on an NVIDIA Xavier dev board (compute capability 7.2) running JetPack 5.1.2 with these commands:

```shell
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/arm64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda
```

apt automatically installs CUDA 12.2 instead.
This is not consistent with the statement that CUDA 12.x is only available on Jetson Orin and newer series (CUDA compute capability >= 8.7).
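If the goal is CUDA 11.8 specifically, installing a versioned meta-package instead of the bare `cuda` package should keep apt from resolving to the newest toolkit in the repository. A sketch, assuming NVIDIA's usual `cuda-toolkit-X-Y` package naming for this repository:

```shell
# Install a pinned CUDA toolkit instead of the unversioned "cuda" meta-package,
# which always resolves to the newest release available (12.2 here).
sudo apt-get update
sudo apt-get -y install cuda-toolkit-11-8

# Put the pinned version on PATH for the current shell.
export PATH=/usr/local/cuda-11.8/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64:$LD_LIBRARY_PATH
```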

When I explore https://elinux.org/Jetson_Zoo#ONNX_Runtime I observe that onnxruntime 1.18.0 is available for JetPack 5.1.2 (Python 3.8).
But after installing the package from the .whl file into a Python 3.8 venv, we get library errors at runtime related to GLIBCXX versions.
Is this .whl file OK, or should we build onnxruntime from source?
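To diagnose this kind of error, it can help to compare the GLIBCXX symbol versions the system libstdc++ exports against the ones the wheel's native module requires. A sketch, assuming the aarch64 Ubuntu library layout of JetPack 5.x; the venv path is an example:

```shell
# List the GLIBCXX symbol versions provided by the system libstdc++.
strings /usr/lib/aarch64-linux-gnu/libstdc++.so.6 | grep '^GLIBCXX' | sort -V

# List the GLIBCXX versions the installed onnxruntime native module needs
# (adjust the venv path to your environment).
objdump -T .venv/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_pybind11_state*.so \
  | grep -o 'GLIBCXX_[0-9.]*' | sort -uV
```

If the module requires a GLIBCXX version the system library does not provide, the wheel was built against a newer libstdc++ than JetPack ships.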

If we need to build from source, the default gcc on JetPack 5.1.2 is not sufficient; we need gcc 11.
I am unsure whether using gcc 11 on JetPack 5.1.2 is safe.
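For reference, gcc 11 can be installed on the Ubuntu 20.04 base of JetPack 5.x from the toolchain PPA without replacing the system default. A sketch; the default gcc 9.4 stays untouched and gcc 11 is selected only via environment variables for the build:

```shell
# Add the toolchain PPA and install gcc/g++ 11 alongside the default gcc 9.4.
sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get -y install gcc-11 g++-11

# Use gcc 11 only for this build, leaving the system default compiler alone.
export CC=/usr/bin/gcc-11
export CXX=/usr/bin/g++-11
```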

@github-actions github-actions bot added the platform:jetson issues related to the NVIDIA Jetson platform label Dec 15, 2024
yf711 commented Dec 23, 2024

Hi @mfatih7, thanks for the report. The statement that CUDA 12.x is only available on Orin is not precise; I will update it later.

The background for that statement: JetPack 5.x only supports up to TensorRT 8.5 (which works with CUDA up to 11.8), while JetPack 6.x ships TensorRT 8.6–10.3 and CUDA 12.x.
To build ONNX Runtime with TensorRT and CUDA 12.x, we need to install JetPack 6.
Unfortunately, the Jetson Xavier only supports up to JetPack 5.1.4 (which still uses TensorRT 8.5).

Apart from that, the ORT 1.18 whl is okay to use if you install gcc 11 on Xavier.
By default, JetPack 5.x uses gcc 9.4. ORT 1.17 introduced a change that requires a newer gcc to compile, which is why gcc 11 is needed.

mfatih7 commented Dec 24, 2024

Hello @yf711

Thank you for the answer.

Using your ORT 1.18 .whl file, we added ORT to our Python 3.8 environment and observed that inference runs successfully.
But we also need the C++ libraries of ORT to run inference from C++.
Your ORT 1.18 .whl file does not provide C++ libraries.

Therefore we need to build ORT from source to get the C++ libs.
Is it OK to use the build.sh here?
Do we need to update the system TensorRT before building ORT?
We are using gcc 11 and CUDA 12.2 on JetPack 5.1.2, but the system TensorRT that shipped with JetPack 5.1.2 was not updated during the CUDA 11.4 to CUDA 12.2 upgrade.
The build.sh file includes the options below:

```shell
--use_tensorrt --tensorrt_home /usr/lib/$(uname -m)
```
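For context, a full build invocation along the lines of the official Jetson instructions is sketched below; `--build_shared_lib` is what produces the C++ library (`libonnxruntime.so`) that the wheel does not ship. Paths assume the JetPack aarch64 layout and may need adjusting:

```shell
# Build ONNX Runtime with the CUDA and TensorRT execution providers,
# producing both a Python wheel and the shared C++ library.
./build.sh --config Release --update --build --parallel \
  --build_wheel --build_shared_lib \
  --use_cuda --cuda_home /usr/local/cuda \
  --cudnn_home /usr/lib/aarch64-linux-gnu \
  --use_tensorrt --tensorrt_home /usr/lib/aarch64-linux-gnu
```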

mfatih7 commented Dec 28, 2024

@yf711

Can you provide a .whl file built with gcc 9 and compatible with CUDA 12.2?
Another .whl built with gcc 9 and compatible with CUDA 11.4 would also be useful.
Based on your comment, we would need to go below onnxruntime 1.17.

Can onnxruntime 1.16.3 be built with gcc 9 and run with CUDA 12.2?

yf711 commented Jan 4, 2025


Please try https://github.com/yf711/onnxruntime-gpu-jetpack/tree/main/jetpack_5_cuda_12_2

yf711 commented Jan 6, 2025


I followed the build command from https://onnxruntime.ai/docs/build/eps.html#nvidia-jetson-tx1tx2nanoxavierorin, which works on my Jetson devices. You can try that first and adjust the build parameters based on your build.sh if needed.

yf711 added a commit that referenced this issue Jan 8, 2025
### Description
* Add more detail to instructions and build tips

Preview:
https://yf711.github.io/onnxruntime/docs/build/eps.html#nvidia-jetson-tx1tx2nanoxavierorin

### Motivation and Context
Per #23113 to make docs more accurate