Update NuGet Package Pipeline to CUDA 11.4 and TensorRT 8 on Windows #9000
Conversation
Are there any other Windows pipelines still using 11.1? Please check that all of them get updated to 11.4.
@@ -1,2 +1,2 @@
-set PATH=C:\azcopy;C:\local\TensorRT-8.0.1.6.Windows10.x86_64.cuda-11.3.cudnn8.2\lib;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin;C:\local\cudnn-11.4-windows-x64-v8.2.2.26\cuda\bin;%PATH%
+set PATH=C:\azcopy;C:\local\TensorRT-8.0.3.4.Windows10.x86_64.cuda-11.3.cudnn8.2\lib;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin;C:\local\cudnn-11.4-windows-x64-v8.2.2.26\cuda\bin;%PATH%
Please help remove the cuDNN dir from this file and from setup_env_trt.bat, because the cuDNN files have been copied into the CUDA installation dir.
Okay, I'll do it later. Right now we want to let CI run and see whether everything passes.
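For reference, a minimal sketch of what the setup_env_gpu.bat PATH line could look like once the cuDNN directory is dropped as requested; the remaining paths are taken from the diff above, and the premise (per the review comment) is that the cuDNN binaries already live in the CUDA v11.4 installation dir:

REM cuDNN bin dir omitted, assuming its files were copied into the CUDA v11.4 installation dir
set PATH=C:\azcopy;C:\local\TensorRT-8.0.3.4.Windows10.x86_64.cuda-11.3.cudnn8.2\lib;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin;%PATH%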
@@ -87,30 +87,30 @@ jobs:
 - template: templates/win-ci.yml
   parameters:
-    ort_build_pool_name: 'onnxruntime-gpu-winbuild'
+    ort_build_pool_name: 'onnxruntime-gpu-tensorrt8-winbuild-t4'
Can we add a comment about why this pool is being used (with 11.4, some tests fail on the M60 machines used by the old pool)?
I'll add one later. We don't want to reset the CI tests before they finish.
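For reference, a sketch of how the pool override in win-ci.yml could carry the requested explanation; only the parameter value comes from the diff above, and the comment wording is an assumption based on this thread:

- template: templates/win-ci.yml
  parameters:
    # Use the T4 pool: with CUDA 11.4, some tests fail on the M60 machines in the old onnxruntime-gpu-winbuild pool
    ort_build_pool_name: 'onnxruntime-gpu-tensorrt8-winbuild-t4'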
* Update to CUDA11.4 and TensorRT-8.0.3.4
* update trt pool, remove cudnn from setup_env_gpu.bat
* revert pool
* test gpu package pipeline on t4
* back out changes
* back out changes

Co-authored-by: George Wu <[email protected]>
* fast reduction for reducemean (#8976)
* Adding preprocessor checks for torch version during torch cpp extensions compilation (#8989)
* custom autograd func memory refinement (#8993)
  * Release torch tensor referenced by torch gradient graph (created in PythonOp)
  * Update orttraining/orttraining/python/training/ortmodule/torch_cpp_extensions/torch_interop_utils/torch_interop_utils.cc
  * refine with comments
  Co-authored-by: Wei-Sheng Chin <[email protected]>
* Fix issues in TensorRT EP (#8996)
  * fix big engine load issue and add cuda_cpu_alloc
  * remove redundancy
  * fix minor issues
* [js/web] fix karma launch with chrome headless (#8998)
* Update Nuget Packge Pipline to CUDA11.4 and TensorRT8 on Windows (#9000)
  * Update to CUDA11.4 and TensorRT-8.0.3.4
  * update trt pool, remove cudnn from setup_env_gpu.bat
  * revert pool
  * test gpu package pipeline on t4
  * back out changes
  * back out changes
  Co-authored-by: George Wu <[email protected]>
* Fix fuzz testing build blocking release. (#9008)
* add model local function support (#8540)
  * updates for picking pnnx commit
  * add tests filter to c# tests
  * plus test fixes
  * fix versioning for contrib ops
  * fix tests
  * test filter for optional ops
  * more versioning related updates
  * fix test
  * fix layernorm spec
  * more updates
  * update docs
  * add more test filters
  * more filters
  * update binary size threshold
  * update docs
  * draft - enable model local function
  * enable model local functions in ORT
  * update to latest rel onnx commit
  * plus tests
  * plus more updates
  * plus updates
  * test updates
  * Fix for nested functions + shape inference
  * plus bug fix and updates per review
  * plus fixes per review
  * plus test updates
  * plus updates per review
  * plus fixes
  * fix a test

Co-authored-by: Vincent Wang <[email protected]>
Co-authored-by: baijumeswani <[email protected]>
Co-authored-by: pengwa <[email protected]>
Co-authored-by: Wei-Sheng Chin <[email protected]>
Co-authored-by: stevenlix <[email protected]>
Co-authored-by: Yulong Wang <[email protected]>
Co-authored-by: Chi Lo <[email protected]>
Co-authored-by: George Wu <[email protected]>
Co-authored-by: Pranav Sharma <[email protected]>
Co-authored-by: Ashwini Khade <[email protected]>
We tested the updated TensorRT-8.0.3.4 with the fix for the CUDA 11.4 issue and validated that it works, so we will move the Windows GPU Zip/NuGet/Tarball package CI to the updated TensorRT 8 and CUDA 11.4.