rs2_deproject_pixel_to_point(): range of "points" value #10723
Issue Description

Hi, I'm using `void rs2_deproject_pixel_to_point(float point[3], const rs2_intrinsics * intrin, const float pixel[2], float depth)`. I would like to understand the range of values for `point[0]` and `point[1]`, since `point[2]` corresponds to `float depth`. It seems that these values are between -1 and 1. Is that right? Or are the bounds different? Furthermore, is it correct to assume that these values correspond to the coordinates (x, y, z) of a point in 3D space?

Thank you in advance,
Denny
Hi @dennewbie My understanding from the very good reference at #1413 is that `point[3]` receives the output 3D point, `intrin` is the depth intrinsics, `pixel[2]` is the XY coordinates of a particular pixel - such as [100, 100] - and `depth` is the depth at that pixel in meters (the raw depth value multiplied by the depth unit scale of the particular RealSense camera being used). Whilst the depth unit scale can be retrieved in real-time with an SDK programming instruction, for all RealSense 400 Series models except D405 the default depth scale value is 0.001 (on D405 it is 0.01). This scale value does not change unless done so deliberately by the program code or by a user input, so it can be easier to hard-code the value into the script instead of retrieving it from the camera with code. Reading #1413 from beginning to end should help a lot in understanding how the rs2_deproject_pixel_to_point instruction works.
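To make the parameter roles concrete, here is a minimal sketch of a complete call, assuming a live 400 Series camera with default stream settings; the pixel (100, 100) is just an example:

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h> // rs2_deproject_pixel_to_point()
#include <cstdio>

int main() {
    // Start streaming with default settings
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();

    // Intrinsics of the active depth stream (the "intrin" parameter)
    rs2_intrinsics intrin = profile.get_stream(RS2_STREAM_DEPTH)
                                   .as<rs2::video_stream_profile>()
                                   .get_intrinsics();

    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth_frame = frames.get_depth_frame();

    // The pixel of interest; get_distance() already applies the depth
    // unit scale, so "depth" is passed in meters
    float pixel[2] = { 100.0f, 100.0f };
    float depth = depth_frame.get_distance((int)pixel[0], (int)pixel[1]);

    // "point" receives the 3D X, Y, Z coordinates in meters
    float point[3];
    rs2_deproject_pixel_to_point(point, &intrin, pixel, depth);
    std::printf("X=%.3f Y=%.3f Z=%.3f meters\n", point[0], point[1], point[2]);
    return 0;
}
```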
Hi @MartyG-RealSense thank you for your answer. I think that everything will be clearer attaching the following snippet.
After this, I was thinking of a function that could "map" the RealSense x, y coordinates in order to have values in terms of meters. I found useful information in Projection in Intel RealSense SDK 2.0, in the Point coordinates paragraph. Hope to read from you soon,
The origin of the depth coordinate system is the center-line of the left infrared imager. Coordinate values are plus or minus depending on whether they are to the left / right or above / below the center-line's 0,0,0 origin, as described in the documentation link below. Also bear in mind that the camera's field of view (FOV) may not be able to see the entire height and width of the room at the same time, because the whole room will not fit into the camera's viewpoint.
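For intuition, when lens distortion is ignored the SDK's deprojection reduces to the pinhole model below, which shows why pixels to the left of / above the principal point produce negative X / Y values; a simplified sketch, not the full implementation:

```cpp
#include <librealsense2/rs.h> // rs2_intrinsics

// Simplified pinhole deprojection, ignoring lens distortion.
// Pixels left of the principal point (ppx) give negative X; pixels
// above it (ppy) give negative Y; Z is simply the input depth.
void deproject_no_distortion(float point[3], const rs2_intrinsics* intrin,
                             const float pixel[2], float depth) {
    point[0] = depth * (pixel[0] - intrin->ppx) / intrin->fx; // X in meters
    point[1] = depth * (pixel[1] - intrin->ppy) / intrin->fy; // Y in meters
    point[2] = depth;                                         // Z in meters
}
```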
Perfect, thank you. So, based on this information, I could use an approach like the following to convert each x and y value to meters (given the room height and width): How to determine coordinates on two different size rectangles. In this way I have x, y and z values expressed in meters. Is this the correct approach?
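For reference, the rectangle-to-rectangle conversion linked above reduces to a linear interpolation applied per axis; a minimal sketch, where the `remap` name and the example ranges are hypothetical:

```cpp
#include <cstdio>

// Linearly map v from the range [in_min, in_max] onto [out_min, out_max].
// This is the per-axis formula behind mapping coordinates between two
// different size rectangles.
float remap(float v, float in_min, float in_max, float out_min, float out_max) {
    return out_min + (v - in_min) * (out_max - out_min) / (in_max - in_min);
}

int main() {
    // Hypothetical example: deprojected X spans [-2.0, 2.0] meters at a
    // given depth, while the room wall spans [0.0, 4.0] meters
    float x_room = remap(-0.5f, -2.0f, 2.0f, 0.0f, 4.0f);
    std::printf("x_room = %.2f meters\n", x_room); // prints 1.50
    return 0;
}
```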
Deprojection converts 2D XY pixel values to 3D XYZ world point values in meters, yes. Another way to obtain real-world distance in meters is to multiply the raw 16-bit (uint16_t) depth value by the depth unit scale. For example, a raw depth value of 65535 multiplied by 0.001 = 65.535 meters. More information about this can be found at #2348
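A minimal sketch of retrieving the depth unit scale from the device and applying it to a raw depth value; the pixel position (100, 100) is hypothetical:

```cpp
#include <librealsense2/rs.hpp>
#include <cstdint>
#include <cstdio>

int main() {
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();

    // Depth unit scale: typically 0.001 on 400 Series models, 0.01 on D405
    rs2::depth_sensor sensor = profile.get_device().first<rs2::depth_sensor>();
    float depth_scale = sensor.get_depth_scale();

    rs2::depth_frame depth = pipe.wait_for_frames().get_depth_frame();

    // Raw 16-bit value at a hypothetical pixel (100, 100)
    const uint16_t* data = (const uint16_t*)depth.get_data();
    uint16_t raw = data[100 * depth.get_width() + 100];

    // e.g. a raw value of 65535 * 0.001 = 65.535 meters
    float meters = raw * depth_scale;
    std::printf("raw=%u -> %.3f meters\n", raw, meters);
    return 0;
}
```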
Perfect, thank you very much for your effort and patience. I'll update here if I'm able to solve the problem in this way.
Just to update the issue. I was able to properly convert the 2D XY pixel values to 3D XYZ world point values in meters (deproject). After this, since the X and Y values are in a particular range, as said in a previous comment, I've "mapped" them to another 2D XY coordinate system using How to determine coordinates on two different size rectangles. The camera's coordinate system has its origin approximately at the center of the camera: the X-axis goes right, the Y-axis goes down and the Z-axis goes forward. Since there are no problems with the Z-axis values in my application, I don't change them. Now my X-axis starts at the leftmost point seen by the camera, while my Y-axis starts at the lowest point seen by the camera (e.g. imagine a 1x1x1 meter cube and put the camera at the center of one of the cube's inner walls, oriented towards the center of the cube. The X-axis starts at the left inner wall, the Y-axis starts at the floor and the Z-axis is unchanged).
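A sketch of one possible reading of that cube example, reusing the per-axis linear mapping from earlier; note that this reading flips the Y direction so that values increase upward from the floor, and all numbers (camera at 0.5 m height, sample point) are hypothetical:

```cpp
#include <cstdio>

// Same per-axis linear mapping sketched earlier in the thread
float remap(float v, float in_min, float in_max, float out_min, float out_max) {
    return out_min + (v - in_min) * (out_max - out_min) / (in_max - in_min);
}

int main() {
    // Hypothetical deprojected point in camera coordinates, in meters:
    // X goes right, Y goes down, Z goes forward, origin at the camera
    float cam_x = -0.2f, cam_y = 0.3f, cam_z = 0.8f;

    // 1x1x1 meter cube, camera centered on one inner wall at 0.5 m height
    float room_x = remap(cam_x, -0.5f, 0.5f, 0.0f, 1.0f); // 0 at the left wall
    float room_y = remap(cam_y, 0.5f, -0.5f, 0.0f, 1.0f); // 0 at the floor (camera Y points down)
    float room_z = cam_z;                                 // Z left unchanged

    std::printf("room: X=%.2f Y=%.2f Z=%.2f meters\n", room_x, room_y, room_z);
    return 0;
}
```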
@MartyG-RealSense please let me know what you think of this approach. In this way, I can evaluate for instance the height of a person immediately, as well as the position of a person inside a room in a further application.
Your approach seems reasonable to me in terms of how you are keeping the axis directions of the other XY coordinate system consistent with the original RealSense axes. It reminds me of a project at #9749
Perfect, thank you for the feedback and the support. Yes, the two projects have something in common. |