
Can the resolution of the sensor be changed when configuring the pipeline video stream? #9389

Closed
Ma0110 opened this issue Jul 14, 2021 · 16 comments


Ma0110 commented Jul 14, 2021

Required Info
Camera Model: D400 (D415)
Firmware Version: 05.11.01.100
Operating System & Version: Win 10
Kernel Version (Linux Only): N/A
Platform: PC
SDK Version: 2.0
Language: Python
Segment: Others

Issue Description

Can the resolution of the RealSense D400 series cameras be changed when using the pyrealsense2 package in Python to configure the pipeline video stream? For example, I call config.enable_stream(rs.stream.color, 640, 480, rs.format.z16, 30) to change the resolution of the configured stream, but the images I then receive are still 1280*720. Why? (The cameras used are a D415 and a D435i.)

Ma0110 (Author) commented Jul 14, 2021

Looking forward to your reply, thank you!

MartyG-RealSense (Collaborator) commented:

Hi @Ma0110 You can define a stream resolution with your config.enable_stream instruction before the pipe.start() line of your script. You could also change an already set resolution by stopping the pipeline, changing the resolution and re-starting the pipeline.
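
For example, a minimal sketch of changing an already-set resolution by restarting the pipeline (the resolutions and RGB8 format shown are only examples):

import pyrealsense2 as rs

pipe = rs.pipeline()
config = rs.config()

# Start streaming color at 640x480.
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
pipe.start(config)

# Later: stop, reconfigure to 1280x720, and restart.
pipe.stop()
config.disable_all_streams()
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
pipe.start(config)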

If you are defining a color stream then you can use the RGB8 format instead of Z16, which is the depth format. For example:

config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)

If you define a custom resolution with config.enable_stream then you should also insert the word 'config' into the brackets of the pipe start instruction, otherwise the program will ignore the custom configuration and use the default stream configuration of that particular camera model instead. For example:

pipe.start(config)
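
Putting both points together, a minimal sketch that configures 640x480 color, starts the pipeline with the config, and verifies the resolution of the frames actually received:

import pyrealsense2 as rs

pipe = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
pipe.start(config)   # pass config, otherwise the default profile is used

frames = pipe.wait_for_frames()
color = frames.get_color_frame()
print(color.get_width(), color.get_height())   # expected: 640 480

pipe.stop()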

Ma0110 (Author) commented Jul 14, 2021

Thank you very much for your feedback; the problem is solved. Thank you!

MartyG-RealSense (Collaborator) commented:

Great news @Ma0110 - thanks for the update!

Ma0110 (Author) commented Jul 14, 2021

Hi @MartyG-RealSense, I have encountered a new problem. When I use the cv2.flip() function to flip the color image obtained from the pipeline, the flip itself succeeds, but the accuracy of the subsequent distance measurement is very disappointing. Is there a function that lets me flip the stream itself, in the same way that I can configure the pipeline resolution? If such a function existed (one that flips the result returned by frames.get_color_frame()), I could receive the video stream already flipped and avoid operations like cv2.flip(). Would that fix the inaccurate ranging? Could you please help me with this? Thank you!

MartyG-RealSense (Collaborator) commented Jul 14, 2021

A cv flip is the only method that I know of to flip a color image.

Could you confirm what you mean about the 'ranging accuracy' please? You are not talking about depth accuracy over distance, right?

You may find the rs-kinfu (RealSense KinectFusion) guide in the link below interesting; in it, cv flip is used to flip the image and the frame is then updated.

https://titanwolf.org/Network/Articles/Article?AID=185dd26d-d5f3-45ad-87b3-ceef9c3020c3#gsc.tab=0

More information about rs-kinfu is available here:

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv/kinfu
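
For reference, a minimal sketch of the cv flip approach in Python (the horizontal flip code 1 is just an example):

import numpy as np
import cv2
import pyrealsense2 as rs

pipe = rs.pipeline()
pipe.start()

frames = pipe.wait_for_frames()
color_frame = frames.get_color_frame()

# Convert the frame to a numpy array, then mirror it horizontally.
color_image = np.asanyarray(color_frame.get_data())
flipped = cv2.flip(color_image, 1)   # 1 = flip around the vertical axis

pipe.stop()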

Ma0110 (Author) commented Jul 14, 2021

To answer your question first: what I mean by "ranging accuracy" is the depth accuracy of the measured distance.

Next, let me describe my processing of the color image. I use cv2.flip(), then rs.rs2_deproject_pixel_to_point() and aligned_depth_frame.get_distance() to measure the distance to a point in the color image and return its three-dimensional coordinates. With the same procedure, the measured distance is accurate when cv2.flip() is not used, and inaccurate when cv2.flip() is used. Thank you!
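
For reference, a minimal sketch of that measurement procedure without the flip (the pixel coordinates are illustrative):

import pyrealsense2 as rs

pipe = rs.pipeline()
pipe.start()

# Align depth to the color stream before measuring.
align = rs.align(rs.stream.color)
frames = align.process(pipe.wait_for_frames())
aligned_depth_frame = frames.get_depth_frame()

x, y = 320, 240   # illustrative pixel in the (unflipped) color image
depth = aligned_depth_frame.get_distance(x, y)   # meters

intrin = aligned_depth_frame.profile.as_video_stream_profile().intrinsics
point = rs.rs2_deproject_pixel_to_point(intrin, [x, y], depth)
print(point)   # [X, Y, Z] in meters

pipe.stop()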

MartyG-RealSense (Collaborator) commented Jul 14, 2021

Thank you for the confirmation about ranging accuracy. I wanted to make sure that we were both on the same page in our understanding. :)

I can see the dilemma that you have. If you flip the color image but the depth image is not flipped, then the objects in the scene that the depth measurements are based on would be on the opposite side in the flipped color frame. It would be like mapping a flipped RGB image with a wall on one side to depth image coordinates where the wall is not there, because the depth is unflipped.
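
One possible workaround, sketched from this reasoning (x_flipped and y are hypothetical pixel coordinates picked in the mirrored display; aligned_depth_frame is the depth frame aligned to color): mirror only the x coordinate back before querying the unflipped depth frame.

width = aligned_depth_frame.get_width()
x_original = width - 1 - x_flipped   # mirror the x coordinate back
depth = aligned_depth_frame.get_distance(x_original, y)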

What is the reason for performing the color flip please?

In regard to accuracy over distance: though the L515 has a maximum range of around 9 meters, it is better to consider 4 meters as a practical limit. You may receive improved depth results if you apply the SDK's built-in Short Range camera configuration preset using the programming instruction RS2_L500_VISUAL_PRESET_SHORT_RANGE.

C++
#9071 (comment)

Python
#8161 (comment)

Ma0110 (Author) commented Jul 14, 2021

Yes, what you said is right. I also considered that both the color map and the depth map should be flipped, as you mentioned, so I flipped them, but the measured distances were still inaccurate. (When I test the depth accuracy, the distance between me and the camera is within 1 meter.)

I do the flip so that the displayed image matches the direction of movement in real life, like looking in a mirror: however I move, the displayed picture moves the same way.

Finally, regarding the distance limit you mentioned, I did all my tests within 2 meters, so I don't think that is the main factor causing the inaccuracy. Thank you!

MartyG-RealSense (Collaborator) commented Jul 14, 2021

How about aligning depth to color without flipping the color and generating a 3D point cloud, and then rotating the entire textured point cloud afterwards to match the preferred visual orientation?
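
A minimal sketch of that approach with the pyrealsense2 pointcloud class:

import pyrealsense2 as rs

pipe = rs.pipeline()
pipe.start()

frames = pipe.wait_for_frames()
depth = frames.get_depth_frame()
color = frames.get_color_frame()

# Build a textured point cloud without flipping anything.
pc = rs.pointcloud()
pc.map_to(color)            # texture the cloud with the color frame
points = pc.calculate(depth)

# points.get_vertices() and points.get_texture_coordinates() can then be
# rotated for display without touching the measured depth data.
pipe.stop()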

Ma0110 (Author) commented Jul 14, 2021

It can be understood that way too! What should I do in that case? I also tried using cv2.flip() to flip both the color map and the depth map. Could it be that this completely changed the mapping between the original color map and the depth map, so that subsequent distance and depth measurements are inaccurate?

MartyG-RealSense (Collaborator) commented:

I would assume that if a textured point cloud is generated without flipping the color and the entire cloud is then rotated, the coordinates would preserve their correct depth values, because you are changing the perspective from which the points are viewed rather than the data of the points themselves.

You can see an example of rotating a point cloud (by holding down the left mouse button and moving the mouse) by launching the pre-built executable rs-pointcloud program in Windows. If you have installed the Windows version of the RealSense SDK then you can find the pre-built examples - including rs-pointcloud - by right-clicking on the RealSense Viewer launch icon on the Windows desktop and selecting the Open file location menu option.

When the textured point cloud is rotated in rs-pointcloud, you can see that the depth detail of the point cloud is preserved correctly even though the viewpoint is changing.

I believe that a pointcloud-based approach instead of a color flip is going to be the best way to develop this project.

Ma0110 (Author) commented Jul 15, 2021

Okay, thank you for your prompt reply, thank you!

Ma0110 (Author) commented Jul 15, 2021

Based on what you said about rotating the point cloud, can I understand it as flipping the depth map? (The reason I understand it this way is that the data I can get from the sensor in my project code are the color map and the depth map, so I think the point cloud and the depth map you are talking about are the same thing.)

If so, how should I implement it in code? Which function should I use to perform the point cloud rotation (or depth map flip)?

MartyG-RealSense (Collaborator) commented:

'Flipping' may suggest that the data is being changed. 'Rotating' is a better term, since you are not changing the point cloud data, just the direction that the cloud is being viewed from.

The SDK Python example program opencv_pointcloud_viewer.py demonstrates point cloud rotation.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_pointcloud_viewer.py
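
As an illustration of the rotation step (a sketch, not code taken from that example program; the 180-degree rotation about the Y axis is just one mirror-like viewpoint), applying a rotation matrix to the vertices changes only the viewpoint, not the point-to-point distances:

import numpy as np
import pyrealsense2 as rs

pipe = rs.pipeline()
pipe.start()

pc = rs.pointcloud()
points = pc.calculate(pipe.wait_for_frames().get_depth_frame())
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)

# Rotate 180 degrees around the Y axis (viewing angle only).
theta = np.pi
rot_y = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                  [ 0.0,           1.0, 0.0          ],
                  [-np.sin(theta), 0.0, np.cos(theta)]], dtype=np.float32)
rotated = verts @ rot_y.T

# Rotation preserves lengths, so the depth information is unchanged.
assert np.allclose(np.linalg.norm(verts, axis=1),
                   np.linalg.norm(rotated, axis=1), atol=1e-5)

pipe.stop()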

Ma0110 (Author) commented Jul 15, 2021

Ok, thank you!

Ma0110 closed this as completed Aug 9, 2021