PYRealsense2 depth post-processing takes 0.3 seconds per frame #10995
Comments
@MartyG-RealSense commented:

Hi @MartinPedersenpp The Hole-Filling post-processing filter is normally disabled by default in the RealSense Viewer (one of the only filters that is default-disabled), so enabling it in your Python script may be adding processing that the Viewer does not perform. Post-processing filters are calculated on the computer's CPU rather than on the camera hardware, so the more filters that are applied, the greater the burden placed on the CPU. You may be able to remove the Hole-Filling filter entirely, as the Spatial filter already provides hole filling. I also note that you are apparently setting the Temporal filter's smooth alpha to '0.1'; the default is 0.4. Reducing smooth alpha to 0.1 stabilizes depth fluctuations, but it also delays the updating of the depth image. This can be demonstrated by moving the camera around and observing how the depth image slowly transitions from one state to another instead of updating immediately.
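The advice above could be tried with a trimmed filter chain along these lines. This is only a sketch: it assumes pyrealsense2 is installed and a RealSense camera is attached, and all variable names are illustrative rather than taken from the issue.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()  # default stream configuration

# The Spatial filter already performs edge-preserving hole filling,
# so a separate rs.hole_filling_filter() is dropped here entirely.
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()

# Leave smooth alpha at its 0.4 default; lowering it to 0.1 stabilizes
# depth fluctuations but makes the image lag behind camera motion.
temporal.set_option(rs.option.filter_smooth_alpha, 0.4)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    depth = spatial.process(depth)   # each .process() runs on the CPU
    depth = temporal.process(depth)
finally:
    pipeline.stop()
```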
@MartyG-RealSense commented:

Unfortunately, the camera hardware is not able to assist with the processing of post-processing filters, or with pyrealsense2 in general. Alignment may also be a significant factor in your Python application's lag compared to the Viewer. The Viewer does not perform alignment between depth and color in its 2D mode (though it can map color onto depth in 3D pointcloud mode), and like post-processing, alignment is calculated on the CPU. If C++ were being used, there would be the option to offload alignment from the computer's CPU onto its GPU with the SDK's GLSL processing blocks, as described at #3654, but that involves modifying C++ rs2:: type instructions, so I have never seen it used successfully in Python projects. If your Windows computer has an Nvidia graphics chip or graphics card, it is possible that the SDK's CUDA support could be enabled to offload the CPU processing of alignment onto the Nvidia GPU, as GLSL does. This would mean, though, that if the program were run on computers without Nvidia graphics, alignment would not benefit from that acceleration. In practice the SDK's CUDA support is almost always used on Nvidia Jetson computing boards, so there are few examples of enabling it on an Nvidia-equipped PC.
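Since alignment runs on the CPU per frameset, one Python-side mitigation is to pay the cost of align.process() only on the framesets the detector actually consumes, rather than on every captured frame. A sketch, assuming pyrealsense2 with a connected camera; the every-5th-frame cadence is purely illustrative:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
align = rs.align(rs.stream.color)  # align depth onto the color stream

try:
    for i in range(30):
        frames = pipeline.wait_for_frames()
        # align.process() is the expensive CPU step, so it is only
        # invoked on the framesets the detector will actually use.
        if i % 5 == 0:
            aligned = frames  # unaligned fallback kept for clarity
            aligned = align.process(frames)
            depth = aligned.get_depth_frame()
            color = aligned.get_color_frame()
            # ... run object detection on depth/color here ...
finally:
    pipeline.stop()
```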
@MartyG-RealSense commented:

Thanks very much @MartinPedersenpp. Disabling the Spatial filter and keeping the Hole-Filling filter enabled may improve performance, as a RealSense team member states at #4468 (comment): "from my experience spatial filter is the one taking the most time and giving least quality improvement, so you might decide to drop it".
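The cited advice (drop Spatial, keep Hole-Filling) corresponds to a filter set like the following sketch. It assumes pyrealsense2 is installed; the decimation stage is an added suggestion, not from the issue, included because shrinking the frame first cuts the cost of every later stage:

```python
import pyrealsense2 as rs

# Spatial filter dropped per the advice above; Hole-Filling kept.
decimation = rs.decimation_filter()      # downsamples the depth frame
hole_filling = rs.hole_filling_filter()  # fills remaining holes

def postprocess(depth_frame):
    """Cheaper chain: decimate first, then fill holes."""
    depth_frame = decimation.process(depth_frame)
    return hole_filling.process(depth_frame)
```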
Issue Description
I have a program using pyrealsense2 in which I set up the camera and use multiple aligned depth frames to detect objects.
I have noticed that the post-processing of each depth frame takes around 0.3 seconds.
With the RealSense Viewer I can easily get around 30 FPS, so I am guessing that the issue is with my pyrealsense2 setup.
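A first debugging step for a "0.3 s per frame" symptom is to time each post-processing stage separately to see which one dominates. The helper below is library-agnostic; the stub lambdas stand in for real pyrealsense2 filter .process() calls, and the stage names are placeholders:

```python
import time

def time_stages(frame, stages):
    """Run `frame` through (name, fn) stages; return result and per-stage seconds."""
    timings = {}
    for name, fn in stages:
        start = time.perf_counter()
        frame = fn(frame)
        timings[name] = time.perf_counter() - start
    return frame, timings

# Example with stub stages standing in for real filters:
result, timings = time_stages(
    [0, 1, 2],
    [("spatial", lambda f: [x * 2 for x in f]),
     ("temporal", lambda f: [x + 1 for x in f])],
)
print(result)           # [1, 3, 5]
print(sorted(timings))  # ['spatial', 'temporal']
```

In the real program each entry would wrap a filter, e.g. `("spatial", spatial.process)`, making it easy to confirm whether the Spatial or Hole-Filling stage is responsible for the bulk of the 0.3 seconds.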
Pipeline setup:
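The original snippet here appears to have been attached as an image and did not survive the text capture. A representative pipeline setup, assuming a D400-series camera and 640x480 depth/color at 30 FPS (all hypothetical values, not the issue author's actual code), might look like:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Aligning depth to color is computed on the CPU for every frameset.
align = rs.align(rs.stream.color)
```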
Capturing frame:
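The capture snippet was likewise lost in the capture. A self-contained sketch of a capture step that also times the filter chain, assuming pyrealsense2 with a connected camera whose default streams include depth and color (illustrative names throughout):

```python
import time
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
align = rs.align(rs.stream.color)
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()

try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)  # CPU-side alignment
    depth = aligned.get_depth_frame()

    start = time.perf_counter()      # time just the filter chain
    depth = spatial.process(depth)
    depth = temporal.process(depth)
    print(f"post-processing: {time.perf_counter() - start:.3f} s")
finally:
    pipeline.stop()
```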