Run filters using CUDA #2032

Closed
mvanlobensels opened this issue Aug 10, 2021 · 3 comments

@mvanlobensels

Hi,

When using the spatial, temporal and pointcloud filters, GPU usage does not seem to increase significantly, which implies the GPU is not being used. Is it possible to run these filters on the GPU with CUDA? I am using an Nvidia Jetson Nano.

Thank you.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 10, 2021

Hi @mvanlobensels. Which method did you use to install the librealsense SDK and the RealSense ROS wrapper, please?

If you build librealsense from source code with CMake using Intel's official Jetson installation instructions in the link below, you can add the flag -DBUILD_WITH_CUDA=true to the CMake build instruction to activate CUDA support in librealsense.

https://dev.intelrealsense.com/docs/nvidia-jetson-tx2-installation

CUDA specifically provides acceleration for pointclouds, depth-color alignment and color conversion, not for all types of processing.
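
As a rough sketch of what that source build step looks like (the directory layout and the extra flags here are placeholders; the Jetson guide above lists the exact options to pass):

cd librealsense && mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_WITH_CUDA=true
make -j$(nproc)
sudo make install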

@mvanlobensels
Author

Hi @MartyG-RealSense

I installed librealsense using apt, which according to this link should include CUDA support. The RealSense ROS wrapper was installed from source.

I ran the following commands to start two RealSense D415s:
roslaunch realsense2_camera rs_camera.launch camera:=camera1 serial_no:=822512060091 filters:=spatial,temporal,pointcloud depth_width:=640 depth_height:=480 depth_fps:=15 color_width:=640 color_height:=480 color_fps:=15
roslaunch realsense2_camera rs_camera.launch camera:=camera2 serial_no:=822512060429 filters:=spatial,temporal,pointcloud depth_width:=640 depth_height:=480 depth_fps:=15 color_width:=640 color_height:=480 color_fps:=15

Then tegrastats showed the following. Here GR3D_FREQ refers to the GPU usage:
[screenshot of tegrastats output]

The CPU usage seems very high, whereas the GPU usage fluctuates a lot. We are therefore unsure whether the filters are being run with CUDA. Thank you for the heads-up that not all filters are supported by CUDA.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 10, 2021

Thanks very much for the confirmation of your installation method.

Post-processing filtering is calculated on the CPU rather than on the camera hardware, which can add a processing burden and may account for your high CPU usage. My understanding is that the Spatial filter is heavier than the others whilst not necessarily providing much improvement, so it may be worth removing the Spatial filter from the launch instruction to see how that affects your CPU figures.
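
For example, based on your earlier launch command, dropping only the Spatial filter would look something like this (all other parameters stay the same):

roslaunch realsense2_camera rs_camera.launch camera:=camera1 serial_no:=822512060091 filters:=temporal,pointcloud depth_width:=640 depth_height:=480 depth_fps:=15 color_width:=640 color_height:=480 color_fps:=15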

Are you also launching the two cameras with rs_camera.launch in separate ROS terminals, please? If you are launching both in the same terminal, the rs_multiple_devices.launch method may work better for you for a same-terminal multicam launch, as it lets you define the serial numbers for both cameras in one instruction (see the example after the link below).

https://github.com/IntelRealSense/realsense-ros#work-with-multiple-cameras
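
As a sketch of that same-terminal approach (the argument names serial_no_camera1 and serial_no_camera2 are my assumption from the wrapper's documentation; please check them against the launch file in your installed version):

roslaunch realsense2_camera rs_multiple_devices.launch serial_no_camera1:=822512060091 serial_no_camera2:=822512060429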
