
D415 with high CPU usage on Jetson NANO #9236

Closed
103061634 opened this issue Jun 16, 2021 · 8 comments

Comments

@103061634


Required Info

Camera Model: D400 (D415)
Firmware Version: 05.12.14.50
Operating System & Version: Ubuntu 18.04
Kernel Version (Linux Only): 4.9.140-tegra
Platform: NVIDIA Jetson Nano
SDK Version: 2.
Language: Python
Segment: others

Issue Description

We observed high CPU usage when simply using Python to capture depth (640x480, 30 FPS) and color (640x480, 30 FPS) frames. We did not expect librealsense to perform significant computation on the CPU. What is the cause, and are there any settings we should adjust? Any suggestions?
Thank you for your help.

[Screenshot: CPU usage while streaming]

Here is our CMake command:

cmake .. -DBUILD_EXAMPLES=true \
    -DCMAKE_BUILD_TYPE=release \
    -DFORCE_RSUSB_BACKEND=false \
    -DBUILD_WITH_CUDA=true \
    -DBUILD_PYTHON_BINDINGS:bool=true \
    -DPYTHON_EXECUTABLE=/usr/bin/python3 \
    -DBUILD_WITH_OPENMP=true \
    && make -j$(($(nproc)-1)) && sudo make install

Here is our python script:

import pyrealsense2 as rs
import numpy as np
import cv2

# Configure depth and color streams at 640x480, 30 FPS
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

pipe_profile = pipeline.start(config)

try:
    while True:
        # Block until a matched pair of depth and color frames arrives
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue
        # Wrap the frame buffers as NumPy arrays (no copy)
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
finally:
    pipeline.stop()
@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 16, 2021

Setting BUILD_WITH_OPENMP to true takes advantage of multiple cores when the librealsense functions listed in the image below are used, but it can result in greater CPU usage, as described in the documentation for CMake build flags at this link:

https://dev.intelrealsense.com/docs/build-configuration

[Screenshot: librealsense functions parallelized by OpenMP]
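Based on the documentation above, a rebuild with OpenMP disabled would look like the following (a sketch: the other flags are carried over from the build command earlier in this thread, and it assumes you are in the librealsense `build` directory):

```shell
# Reconfigure without OpenMP, keeping CUDA acceleration for the Jetson's GPU,
# then rebuild and reinstall the SDK and Python bindings.
cmake .. -DBUILD_EXAMPLES=true \
    -DCMAKE_BUILD_TYPE=release \
    -DFORCE_RSUSB_BACKEND=false \
    -DBUILD_WITH_CUDA=true \
    -DBUILD_PYTHON_BINDINGS:bool=true \
    -DPYTHON_EXECUTABLE=/usr/bin/python3 \
    -DBUILD_WITH_OPENMP=false \
    && make -j$(($(nproc)-1)) && sudo make install
```

Deleting the CMake cache first (or using a clean build directory) avoids stale values of previously set flags.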

@103061634
Author

Thanks for your response; we did already build with -DBUILD_WITH_OPENMP=true.
Does this CPU usage look like something is wrong?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 17, 2021

What the BUILD_WITH_OPENMP description means is that setting it to true is itself what can cause high CPU usage. So this is expected behaviour when BUILD_WITH_OPENMP=true is included in your CMake build instruction.
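To check whether a rebuild actually lowers the load, overall CPU utilisation can be computed from two snapshots of the aggregate counters in Linux's `/proc/stat`. The helper below is a standard-library-only sketch; the function names are illustrative (not part of any SDK), and the jiffy field layout is Linux-specific:

```python
def read_cpu_sample(path="/proc/stat"):
    """Return the jiffy counters from the aggregate 'cpu' line:
    user, nice, system, idle, iowait, irq, softirq, ..."""
    with open(path) as f:
        fields = f.readline().split()
    return [int(v) for v in fields[1:]]


def cpu_percent(prev, cur):
    """Overall CPU utilisation (%) between two /proc/stat samples."""
    prev_idle = prev[3] + prev[4]      # idle + iowait
    cur_idle = cur[3] + cur[4]
    total_delta = sum(cur) - sum(prev)
    idle_delta = cur_idle - prev_idle
    if total_delta <= 0:
        return 0.0
    return 100.0 * (total_delta - idle_delta) / total_delta
```

Usage: take one sample before starting the pipeline loop and one a few seconds in, then compare `cpu_percent(before, after)` across the two builds.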

@103061634
Author

We appreciate your help; it works!
[Screenshot: reduced CPU usage]
Are there any other measures to reduce CPU usage?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 17, 2021

The RealSense SDK supports GLSL, an alternative method of offloading work from the CPU to the GPU. It is 'vendor neutral', meaning that it should work with any brand of video chip. The improvement may not be noticeable on low-power devices though, and there is not much information about using it with Python instead of C++.

#3654
https://github.com/IntelRealSense/librealsense/tree/master/examples/gl

Advice given in a Jetson Nano case, linked below, suggests that using GLSL may not be faster than CUDA anyway.

#7824 (comment)

So using -DBUILD_WITH_CUDA=true and not setting BUILD_WITH_OPENMP to true may provide the best results.

The discussion linked to above also includes a comment from the RealSense user in that case, who achieved a 10x speed increase in their code using a method described in the link below.

#7824 (comment)

@Try-Hello

I have a question; can you help me?

I use pyrealsense2-aarch64 on a Jetson Nano with a D455.

But with:
import pyrealsense2 as rs
# And then defining 'pipeline':
pipeline = rs.pipeline()

I get the error: Module 'pyrealsense2' has no attribute 'pipeline'

How can I solve this problem? Please.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 19, 2021

Hi @Try-Hello I will respond to your question at the new case that you have created for it. Thanks!

@103061634
Author

Thanks for your help! We will do it.
