[CUDA] Add an option for profiling cuda kernels #16061
Conversation
Several NVIDIA tools, such as Nsight Systems and Nsight Compute, can be used for profiling CUDA kernels. NVIDIA Nsight Systems collects system-wide information about your program and GPU events, and can help you find possible bottlenecks in your topology. To profile a specific CUDA kernel, NVIDIA Nsight Compute can be used. If you try to profile a CUDA kernel generated by TVM with Nsight Compute without this patch, you see only SASS instructions instead of the source code. That is useful, but sometimes it is easier to analyze the generated CUDA code rather than the instructions. This patch adds a new pass config option, `cuda.kernels_output_dir`, which specifies the directory where the CUDA source code should be stored after the build. When this option is set, the CUDA kernels are also compiled with the `-lineinfo` flag, which is the equivalent of the `-g` option in GCC. When the CUDA kernels are compiled with `-lineinfo`, Nsight Compute can map profiling information back to the source code. One important note: to see the source code in Nsight Compute, you have to set the `Import Source` parameter to `Yes` when configuring the profiling session.
Here is an example of how the NVIDIA tools can be used to analyze kernels in a model compiled with TVM. I took the code from the autotvm x86 tutorial and ran the build with the new option enabled:

```python
with tvm.transform.PassContext(opt_level=3, config={"cuda.kernels_output_dir": "___tmp_cuda_dumps"}):
    lib = relay.build(mod, target=target, params=params)
```

After running the script, the directory `___tmp_cuda_dumps` is created with the generated CUDA sources. If we want to profile our model, first we can use NVIDIA Nsight Systems. Run the model under the Nsight Systems profiler:

```
nsys profile python3 model_run.py
```

After executing this command, a new report file is generated, which can be opened in the Nsight Systems GUI. You can zoom in on the trace window, expand the row with the CUDA kernels, and select the kernel you are interested in. After that, you can start the GUI of an installed NVIDIA Nsight Compute, or display a command line for the Nsight Compute CLI. I prefer the GUI because it gives you more tools for analysis. After selecting the GUI, an instance of Nsight Compute is opened, and from there you can start profiling the selected kernel.
When profiling has finished, you can see a detailed overview of your kernel. This page provides a lot of information about the kernel and hardware utilization. After switching to the `Source` page, you can see on the screenshot that the code of our kernel is on line 1329. This happens because TVM dumps the kernels for the whole network into a single file, so the file with the sources contains all kernels from the model. But as you can see, there is a colorized area near the scroll bar; this is the area where the executed code is located, so we can easily find the necessary kernel in this file.
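Because all kernels for the network land in one file, jumping to a particular kernel by name can be done with a trivial text search. The helper below is a hypothetical illustration (not part of this patch); it relies only on the fact that TVM's generated kernels are plain `__global__` function definitions:

```python
def find_kernel_line(cuda_source: str, kernel_name: str) -> int:
    """Return the 1-based line number where `kernel_name` is defined, or -1.

    Kernel definitions in the dumped CUDA source are `__global__` functions,
    so a simple substring search is enough to locate the definition.
    """
    for lineno, line in enumerate(cuda_source.splitlines(), start=1):
        if "__global__" in line and kernel_name in line:
            return lineno
    return -1


# Tiny stand-in for a real dump file with several kernels concatenated.
dump = """extern "C" __global__ void fused_add_kernel(float* a) { a[0] += 1.0f; }
extern "C" __global__ void fused_conv2d_kernel(float* b) { b[0] *= 2.0f; }
"""
print(find_kernel_line(dump, "fused_conv2d_kernel"))  # prints 2
```

The same search works directly in any editor, of course; the snippet just shows that the single-file layout is easy to navigate programmatically as well.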
Here we can discuss the current implementation, and whether we should split the file with all CUDA kernels into separate files or not.
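As one possible direction for that discussion, a post-processing step could split the combined dump at each `__global__` definition. The sketch below is hypothetical (the function name and the naive one-kernel-per-`__global__`-line parsing are my assumptions, and any preamble before the first kernel is dropped), not a proposal for the final implementation:

```python
import re


def split_kernels(cuda_source: str) -> dict:
    """Split a combined CUDA dump into {kernel_name: source_chunk}.

    Naive approach: start a new chunk at every `__global__` function
    definition and attribute everything up to the next one to it.
    Any lines before the first kernel are discarded in this sketch.
    """
    pattern = re.compile(r"__global__\s+\w+\s+(\w+)\s*\(")
    chunks = {}
    current_name = None
    current_lines = []
    for line in cuda_source.splitlines():
        match = pattern.search(line)
        if match:
            if current_name is not None:
                chunks[current_name] = "\n".join(current_lines)
            current_name = match.group(1)
            current_lines = []
        current_lines.append(line)
    if current_name is not None:
        chunks[current_name] = "\n".join(current_lines)
    return chunks


dump = """extern "C" __global__ void kernel_a(float* x) {
  x[0] = 1.0f;
}
extern "C" __global__ void kernel_b(float* y) {
  y[0] = 2.0f;
}"""
print(sorted(split_kernels(dump)))  # prints ['kernel_a', 'kernel_b']
```

A trade-off worth noting: per-kernel files would make line numbers in Nsight Compute start near 1, but the current single-file layout keeps the build simpler and, as shown above, the right kernel is still easy to find.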