Inconsistency between different implementations of rs2_project_point_to_pixel and rs2_deproject_pixel_to_point #10811
Comments
Hi @oceanusxiv As mentioned in your other case at IntelRealSense/realsense-ros#2458 (comment), the D455 is known for its unusual coefficients compared to other RealSense 400 Series camera models, and the nature of its distortion model has been discussed previously. I will consult my Intel RealSense colleagues about your concerns.
Hi again @oceanusxiv I just wanted to update you that discussion of this subject with my Intel RealSense colleagues is still ongoing. Thanks for your patience!
Whilst Intel discussions are continuing, it may take some time before there is progress to report. I have therefore added an Enhancement label to this case as a reminder to keep it open whilst those internal discussions are ongoing.
Hello! Any news about this issue? I plan to switch from the D435 to the D455 because of the RGB sensor's larger FOV, which I need for my object color segmentation; I then reconstruct the pointcloud from undistorted, aligned depth and color images. I would like to use undistorted RGB images aligned with the depth images. Will that be possible with the D455?
Hi @levasmol I checked the outcome of the discussions that my Intel RealSense colleagues had about this subject. It was acknowledged that there may be a discrepancy between the documentation and how the distortion model is actually implemented. They thought it unlikely, though, that there is a fundamental flaw in the implementation, such as using the reversed version of the model from what is needed. If you are aligning with the SDK's align_to instruction then the SDK's align processing block will automatically make adjustments for differences between the depth and RGB streams, no matter which camera model is used.
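For reference, a minimal sketch of that align_to path, assuming the standard librealsense C++ API (rs2::pipeline with an rs2::align processing block); this is an illustration rather than code quoted from the SDK or from this issue:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    // Start streaming with the default depth + color configuration.
    rs2::pipeline pipe;
    pipe.start();

    // The align processing block maps depth pixels into the color stream,
    // compensating for the differing intrinsics/extrinsics of the two sensors.
    rs2::align align_to_color(RS2_STREAM_COLOR);

    rs2::frameset frames = pipe.wait_for_frames();
    rs2::frameset aligned = align_to_color.process(frames);

    rs2::depth_frame depth = aligned.get_depth_frame();
    rs2::video_frame color = aligned.get_color_frame();
    // 'depth' is now pixel-aligned to 'color', whichever distortion model the camera reports.
    return 0;
}
```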
Hi @levasmol Bearing in mind the response in the above comment, do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received. |
Issue Description
Some time ago there were questions in #7335 about the distortion model of the D455, and it was confirmed that it is Inverse Brown-Conrady. At the end of that issue, an implementation of `rs2_deproject_pixel_to_point` for Inverse Brown-Conrady was mentioned. In that code, both the Brown-Conrady and the Inverse Brown-Conrady branches involve iteration. This seems incorrect: according to the documentation at https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20#section-distortion-models, the Inverse Brown-Conrady model undistorts the image rather than distorting it, so its closed form should be used during deprojection, while for Brown-Conrady the closed form distorts and is used during projection, leaving deprojection as the direction that has to invert it.
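As a point of reference, here is a short sketch of the Brown-Conrady polynomial as I read it from that documentation page; the coefficient ordering [k1, k2, p1, p2, k3] is my assumption about how librealsense stores `rs2_intrinsics::coeffs`, not a quote from the SDK:

```cpp
// Illustrative sketch only (not librealsense source): the closed-form Brown-Conrady
// polynomial applied to normalized camera coordinates (x, y).
//
// - Brown-Conrady: this closed form distorts, so it belongs in projection
//   (point -> pixel); deprojection has to invert it, e.g. iteratively.
// - Inverse Brown-Conrady: the stored coefficients describe the inverse (undistorting)
//   mapping, so the same closed form belongs in deprojection (pixel -> point) and it is
//   projection that has to invert it.
struct vec2 { float x, y; };

static vec2 brown_conrady_apply(float x, float y, const float coeffs[5])
{
    const float k1 = coeffs[0], k2 = coeffs[1], p1 = coeffs[2], p2 = coeffs[3], k3 = coeffs[4];
    const float r2 = x * x + y * y;
    const float radial = 1.0f + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;
    return { x * radial + 2.0f * p1 * x * y + p2 * (r2 + 2.0f * x * x),
             y * radial + p1 * (r2 + 2.0f * y * y) + 2.0f * p2 * x * y };
}
```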
Therefore, within the same deprojection code, it cannot be true that both the forward and inverse distortion models require iteration; one of the two branches must be an incorrect implementation. I think it is the Inverse Brown-Conrady branch that is incorrect, because in the CUDA implementation of these functions in https://github.com/IntelRealSense/librealsense/blob/master/src/cuda/rscuda_utils.cuh, `rs2_deproject_pixel_to_point` for Inverse Brown-Conrady is closed form, completely different from the deprojection code in the C++ implementation, and matches the documentation.
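For illustration, a closed-form Inverse Brown-Conrady deprojection in the spirit of that CUDA code might look like the sketch below; this is my own sketch against the public `rs2_intrinsics` struct, not code copied from `rscuda_utils.cuh`:

```cpp
#include <librealsense2/rs.h>

// Closed-form deprojection under Inverse Brown-Conrady: the stored coefficients
// describe the distorted -> undistorted mapping, so they can be applied directly
// to the normalized pixel with no iteration.
void deproject_inverse_brown_conrady(float point[3], const rs2_intrinsics* intrin,
                                     const float pixel[2], float depth)
{
    // Normalize the pixel using the pinhole parameters.
    float x = (pixel[0] - intrin->ppx) / intrin->fx;
    float y = (pixel[1] - intrin->ppy) / intrin->fy;

    // Apply the undistorting Brown-Conrady polynomial in closed form.
    const float r2 = x * x + y * y;
    const float f  = 1.0f + intrin->coeffs[0] * r2 + intrin->coeffs[1] * r2 * r2
                          + intrin->coeffs[4] * r2 * r2 * r2;
    const float ux = x * f + 2.0f * intrin->coeffs[2] * x * y
                           + intrin->coeffs[3] * (r2 + 2.0f * x * x);
    const float uy = y * f + 2.0f * intrin->coeffs[3] * x * y
                           + intrin->coeffs[2] * (r2 + 2.0f * y * y);

    // Scale by depth to obtain a 3D point in the camera frame.
    point[0] = depth * ux;
    point[1] = depth * uy;
    point[2] = depth;
}
```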
This same inconsistency can be observed in the `rs2_project_point_to_pixel` code, where the projection code for Brown-Conrady and Inverse Brown-Conrady is functionally identical, which again contradicts what the documentation states. Given this, I think the projection and deprojection code for the Inverse Brown-Conrady distortion model in the C++ implementation is exactly flipped: the section for deprojection should be in projection, and vice versa.
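If that reading is right, projection under Inverse Brown-Conrady would be the direction that needs iteration, along the lines of the following sketch; again this is my own illustration under the same assumptions, not librealsense code, and the fixed iteration count of 10 is arbitrary:

```cpp
#include <librealsense2/rs.h>

// Projection under Inverse Brown-Conrady: invert the closed-form undistortion
// with a small fixed-point iteration (OpenCV-style), then apply the pinhole model.
void project_inverse_brown_conrady(float pixel[2], const rs2_intrinsics* intrin,
                                   const float point[3])
{
    // Normalized, undistorted coordinates of the 3D point (assumes point[2] != 0).
    const float xu = point[0] / point[2];
    const float yu = point[1] / point[2];

    // Search for distorted coordinates (x, y) whose closed-form undistortion
    // maps back to (xu, yu), starting from the undistorted guess.
    float x = xu, y = yu;
    for (int i = 0; i < 10; ++i)
    {
        const float r2 = x * x + y * y;
        const float f  = 1.0f + intrin->coeffs[0] * r2 + intrin->coeffs[1] * r2 * r2
                              + intrin->coeffs[4] * r2 * r2 * r2;
        // Remove the current tangential contribution, then divide out the radial factor.
        x = (xu - (2.0f * intrin->coeffs[2] * x * y + intrin->coeffs[3] * (r2 + 2.0f * x * x))) / f;
        y = (yu - (2.0f * intrin->coeffs[3] * x * y + intrin->coeffs[2] * (r2 + 2.0f * y * y))) / f;
    }

    pixel[0] = x * intrin->fx + intrin->ppx;
    pixel[1] = y * intrin->fy + intrin->ppy;
}
```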