
The X and Y coordinates in the camera coordinate system calculated by the rs2_deproject_pixel_to_point function are not accurate enough (and likewise Z), whether using the aligned or the unaligned depth intrinsics #12067

Closed
5204338 opened this issue Aug 3, 2023 · 23 comments


@5204338

5204338 commented Aug 3, 2023


Required Info
Camera Model: D400
Firmware Version: no problem
Operating System & Version: Linux (Ubuntu 18)
Kernel Version (Linux Only): (e.g. 4.14.13)
Platform: NVIDIA Jetson
SDK Version: 2.53.1
Language: C++
Segment: Robot

Issue Description

The X and Y coordinates in the camera coordinate system calculated by the rs2_deproject_pixel_to_point function are not accurate enough (and likewise Z), whether I use the aligned or the unaligned depth intrinsics. Can you help me?

I have tried every method I know, but either my method is wrong or the Z obtained through the get_distance function is not enough to guarantee high-precision X and Y in the camera coordinate system.

Below is my code; please help me.

```cpp
int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;

    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);

    rs2::pipeline_profile profile = pipe.start(cfg);
    rs2::align align_to_color(RS2_STREAM_COLOR);

    while (true)
    {
        rs2::frameset frameset = pipe.wait_for_frames();
        auto aligned_frameset = align_to_color.process(frameset);

        // Get the depth and color streams from the aligned frameset,
        // then obtain the post-alignment depth intrinsics
        rs2::video_frame color_stream = aligned_frameset.get_color_frame();
        rs2::depth_frame aligned_depth_stream = aligned_frameset.get_depth_frame();

        rs2::video_stream_profile depth_stream_profile = aligned_depth_stream.get_profile().as<rs2::video_stream_profile>();
        const auto depth_intrinsics = depth_stream_profile.get_intrinsics(); // aligned depth intrinsics

        // Get the color image width and height
        const int w = color_stream.as<rs2::video_frame>().get_width();
        const int h = color_stream.as<rs2::video_frame>().get_height();

        // Wrap the color frame in a cv::Mat so it can be fed to the inference engine
        cv::Mat color_frame(cv::Size(w, h), CV_8UC3, (void*)color_stream.get_data(), cv::Mat::AUTO_STEP);

        // Send the camera color image frame to the yolov5 reasoner for inference
        auto begin_timer = timestamp_now_float();
        auto results = engine->commit(color_frame).get();
        float inference_average_time = timestamp_now_float() - begin_timer;

        // Iterate through the detection boxes in each frame
        for (auto& result : results)
        {
            float box_center_x = ((int)result.left + (int)result.right) / 2;
            float box_center_y = ((int)result.top + (int)result.bottom) / 2;
            float box_pixed_center[2] = {box_center_x, box_center_y};

            // Use get_distance to read the depth value at the box center
            float box_center_depth_value = aligned_depth_stream.get_distance((int)box_center_x, (int)box_center_y);

            // Deproject from the pixel coordinate system to the camera coordinate system
            float camera_cord[3];
            rs2_deproject_pixel_to_point(camera_cord, &depth_intrinsics, box_pixed_center, box_center_depth_value);

            // Use Eigen to transform from the camera coordinate system to the
            // binding-gun coordinate system; the gun's X, Y, Z axes are assumed
            // to match the RealSense camera's X, Y, Z axes
            Vector3d p1(camera_cord[0], camera_cord[1], camera_cord[2]);
            Vector3d p2 = T_QC * p1;

            // Store the box coordinates (gun coordinate system) for later processing
            gun_box.x = p2.x();
            gun_box.y = p2.y();
            gun_box.z = p2.z();
            auto name = cocolabels[result.class_label];
            gun_box.label = name;

            // Collect all boxes for the binding-angle computation
            gun_box_lists.push_back(gun_box);
        }
    }
}
```

@5204338
Author

5204338 commented Aug 3, 2023

In my code, results is a container of detection boxes. I take the center point of each box as the pixel passed to the get_distance function, i.e. the pixel whose depth needs to be measured.
I also forgot to explain what I mean by the inaccuracy of the X and Y obtained from rs2_deproject_pixel_to_point: the X and Y coordinates it returns do not match the target object's actual X and Y coordinates in the camera coordinate system, and the gap is relatively large.
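For reference, the deprojection itself is simple pinhole math, so the intrinsics enter X and Y directly. Below is a minimal sketch of what rs2_deproject_pixel_to_point computes for an undistorted stream; PinholeIntrinsics is a hypothetical stand-in for the relevant rs2_intrinsics fields, and the distortion model is assumed to be NONE:

```cpp
#include <array>

// Hypothetical struct mirroring the rs2_intrinsics fields used here.
struct PinholeIntrinsics {
    float fx, fy;   // focal lengths in pixels
    float ppx, ppy; // principal point in pixels
};

// Back-project pixel (u, v) with depth z (meters) into a 3D camera-space point,
// as rs2_deproject_pixel_to_point does when no distortion model applies.
std::array<float, 3> deproject(const PinholeIntrinsics& in,
                               float u, float v, float z) {
    float x = (u - in.ppx) / in.fx;
    float y = (v - in.ppy) / in.fy;
    return { x * z, y * z, z };
}
```

Note how an offset in the principal point (ppx, ppy) or a wrong focal length shifts X and Y proportionally to depth, which is why using the intrinsics of the wrong stream produces errors that grow away from the image center.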

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 3, 2023

Hi @5204338 It is a known issue that when using get_distance with align_to for depth-color alignment, XY can be accurate at the center of the image and become increasingly inaccurate when moving away from the center, with very large inaccuracies at the outer edges of the image. The issue does not occur when not using alignment.

Do you get improved results if you align depth to color instead of the much rarer color to depth alignment that your script uses, and therefore use color intrinsics instead of depth intrinsics?

When performing alignment, the intrinsics that are used should be those of the stream that is being aligned to. So when using depth to color alignment, the color intrinsics should be used because the depth origin changes from the centerpoint of the left IR sensor to the centerpoint of the RGB sensor.


If your preference is color to depth alignment then the RealSense SDK C++ program rs-measure demonstrates it. It is the only official SDK example that uses color to depth alignment instead of depth to color. If you have installed the SDK tools and examples then your librealsense build should have a pre-made executable version of rs-measure that you can run. It should be located in the usr/local/bin folder of Ubuntu.

https://github.com/IntelRealSense/librealsense/blob/master/examples/measure/rs-measure.cpp

@5204338
Author

5204338 commented Aug 4, 2023


Regarding the stream alignment in my code, the depth stream is aligned to the color stream; are you mistaken? However, the conversion to the camera coordinate system uses the depth intrinsics. According to your explanation, when aligning depth to color I should experiment with converting X and Y using the color stream intrinsics. Let me actually test the effect, and then I will come back to you.

@5204338
Author

5204338 commented Aug 4, 2023


Thank you very much, but one point in your explanation confuses me: when the streams are not aligned, if I use get_distance to find the Z of a pixel and then find the X and Y coordinates, are X and Y accurate regardless of whether the pixel is at the center of the image? Isn't stream alignment meant precisely to obtain more accurate X and Y coordinates in the camera coordinate system? If I don't align the streams, wouldn't the X and Y coordinates in the camera coordinate system be inaccurate? Is my understanding correct? I need the most accurate X and Y coordinates of the target in the camera coordinate system.

@MartyG-RealSense
Collaborator

I confirm you are aligning depth to color with rs2::align align_to_color(RS2_STREAM_COLOR); - my apologies. In that case, color intrinsics or aligned intrinsics should be used. It looks as though you are using aligned intrinsics, which should be correct.

rs2::video_stream_profile depth_stream_profile = aligned_depth_stream.get_profile().as<rs2::video_stream_profile>();
const auto depth_intrinsics = depth_stream_profile.get_intrinsics(); // get the aligned depth intrinsics

Aligning depth and color can improve depth measuring accuracy by making it easier to distinguish between coordinates in the foreground and the background of the image. It is not compulsory to use alignment to find the depth value of a coordinate though. This is demonstrated by the RealSense SDK's rs-hello-realsense example program, which prints as plain text the depth distance in real-world meters of the coordinate at the center of the camera's view.

@5204338
Author

5204338 commented Aug 4, 2023


Thank you very much. OK, I understand what you said; my code is not the problem. I want to confirm one thing: in my case the detected targets are not at the center of the camera image frame. In that situation I need the most accurate X and Y coordinates in the camera coordinate system; which method should I use to determine the X and Y values?

@5204338
Author

5204338 commented Aug 4, 2023


My project is to measure the coordinates of two targets in the horizontal plane with the camera facing vertically downward (so Z is roughly equal for both targets and is not needed; what I need are X and Y), and then, based on the coordinates of the two targets, calculate the angle at the origin of the camera coordinate system. Taking the camera's X and Y axes as a two-dimensional coordinate system, I compute the angle between the two target points in that plane: simplified, the camera origin and the two target points form a triangle, and I want the included angle at the origin. From your professional point of view, can the camera achieve this and give an accurate angle?
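The angle computation described above can be sketched independently of the camera. Assuming the two targets' (X, Y) coordinates are already available in the camera coordinate system, the included angle at the origin is just the difference of the two polar angles (angle_at_origin is a hypothetical helper, not part of the RealSense SDK):

```cpp
#include <array>
#include <cmath>

constexpr float kPi = 3.14159265358979f;

// Angle (radians, in [0, pi]) subtended at the camera origin by two
// target points given in the camera's XY plane; the depth axis is ignored.
float angle_at_origin(const std::array<float, 2>& p1,
                      const std::array<float, 2>& p2) {
    float a1 = std::atan2(p1[1], p1[0]);
    float a2 = std::atan2(p2[1], p2[0]);
    float d = std::fabs(a1 - a2);
    return d > kPi ? 2.f * kPi - d : d; // wrap into [0, pi]
}
```

Since the angle depends only on the ratio of X and Y to each other, a systematic scale error in depth cancels out, but a per-point XY offset does not, so accurate deprojection of each target still matters.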

@MartyG-RealSense
Collaborator

It sounds as though #9297 might fit your description.

@5204338
Author

5204338 commented Aug 5, 2023


That case is not the same as mine; I attach an image to explain my scene. I calculate angle1 and angle2 from the X and Y of node 1 and node 2, but the X and Y coordinates of nodes 1 and 2 deviate greatly, so the calculated angle deviates even more. What should I do? Please help me, and thank you for your guidance.

@MartyG-RealSense
Collaborator

#9749 is also worth looking at.

@5204338
Author

5204338 commented Aug 6, 2023


OK, let me read it carefully.

@MartyG-RealSense
Collaborator

Do you require further assistance with this case, please? Thanks!

@5204338
Author

5204338 commented Aug 14, 2023


Yes, I still have some questions. I got some information from #11031, and I have a question about Method 1, which you explained before:
"Method 1 - align depth to color, deproject pixel into point in color coordinate system, then transform the point back to the depth coordinate system."
I want to know why I need to call rs2_transform_point_to_point after rs2_deproject_pixel_to_point. Does this transform give more accurate X and Y values in the depth coordinate system, compared with using rs2_deproject_pixel_to_point alone to get X and Y in the color camera coordinate system? Thanks!

```cpp
rs2::align align_to(RS2_STREAM_COLOR);
rs2::pipeline pipe;
rs2::config cfg;
cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 15);
cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGBA8, 15);

auto profile = pipe.start(cfg);

while (app)
{
    // Wait for the next set of frames from the camera
    auto frames = pipe.wait_for_frames();

    auto depth = frames.get_depth_frame();
    auto color = frames.get_color_frame();

    rs2_extrinsics color_extrin_to_depth = color.get_profile().as<rs2::video_stream_profile>().get_extrinsics_to(depth.get_profile().as<rs2::video_stream_profile>());

    frames = frames.apply_filter(align_to);
    auto color_aligned = frames.get_color_frame();
    auto depth_aligned = frames.get_depth_frame();

    rs2_intrinsics intr_depth = depth_aligned.get_profile().as<rs2::video_stream_profile>().get_intrinsics();

    // pixel in color coordinate system
    float u = 0.5;
    float v = 0.5;

    int w = color_aligned.get_width();
    int h = color_aligned.get_height();

    int c = w * u;
    int r = h * v;

    // assume depth and color are the same size after alignment; find the depth pixel in the color coordinate system
    float pixel[2];
    pixel[0] = c;
    pixel[1] = r;

    float point_in_color_coordinates[3]; // 3D point in color sensor coordinates
    float point_in_depth_coordinates[3]; // 3D point in depth sensor coordinates

    auto dist = depth.get_distance(static_cast<int>(pixel[0]), static_cast<int>(pixel[1]));
    rs2_deproject_pixel_to_point(point_in_color_coordinates, &intr_depth, pixel, dist);
    rs2_transform_point_to_point(point_in_depth_coordinates, &color_extrin_to_depth, point_in_color_coordinates);
}
```

@5204338
Author

5204338 commented Aug 14, 2023


I also want to ask about the get_distance function. In your code, auto dist = depth.get_distance(static_cast<int>(pixel[0]), static_cast<int>(pixel[1]));
Why does this use depth rather than depth_aligned to get the depth value? Can you help me?

@MartyG-RealSense
Collaborator

rs2_transform_point_to_point is not related to greater XY accuracy. It is an affine transform function that uses a 3x3 rotation matrix and a 3-component translation vector. It can be used to transform points from one coordinate space to another.

An example use is merging together individual point clouds into a single combined point cloud by giving the individual clouds the same position and rotation in 3D space.


In regards to the auto dist code, it was given to me by one of my Intel RealSense colleagues so I unfortunately do not know why this line is written as it is.

@5204338
Author

5204338 commented Aug 14, 2023

I took a screenshot of your answer; here is the link: #11031

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 14, 2023

I understand that I shared the code, but I did not write it; it was provided by one of my Intel colleagues, so I cannot interpret the meaning of this particular 'auto dist' section of the script, as I am not a program developer. I do apologize.

@5204338
Author

5204338 commented Aug 15, 2023


OK, I understand. But why did your colleague write the code that way? Can you discuss it with them: is it wrong, and what would the correct way of writing it be? I really feel the reference documentation needs improvement; it is too brief and its reference value is very low. I believe many users feel the same way I do.

@MartyG-RealSense
Collaborator

You could try changing auto dist = depth.get_distance to auto dist = depth_aligned.get_distance (the aligned depth parameter that was defined earlier in the script) to see whether it makes a positive difference.

@5204338
Author

5204338 commented Aug 16, 2023


OK, I understand. My problem is solved. I had confused the coordinate systems earlier and used the wrong one after the deprojection. Because the reference documents are so few, this really wasted a lot of my time.

@MartyG-RealSense
Collaborator

I understand your frustration and sincerely apologize. I'm very pleased though to hear that you achieved a solution.

@5204338
Author

5204338 commented Aug 17, 2023


Thanks for your help; you can close this issue.

@MartyG-RealSense
Collaborator

You are very welcome. Thanks again!
