project 3D point coordinate in the camera frame to map frame #2457
Comments
Hi @najiib49 If you are aiming to translate a 3D pointcloud in ROS into a format usable in a navigation system, an approach that some RealSense ROS users have tried is a package called pointcloud_to_laserscan, as demonstrated in the links below.
https://answers.ros.org/question/396646/problem-using-pointcloud-data-in-nvidia-jetson-xavier-nx/
https://robotics.shanghaitech.edu.cn/static/robotics2020/MoManTu.pdf
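As a rough illustration only, a minimal launch sketch for that package might look like the example below. The RealSense pointcloud topic and the parameter values are assumptions for a typical D435i setup, not something tested here.

```python
# Hypothetical launch sketch (untested): pointcloud_to_laserscan_node remapped to a
# RealSense cloud topic. Topic/frame names are assumptions for a typical D435i setup.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='pointcloud_to_laserscan',
            executable='pointcloud_to_laserscan_node',
            name='pointcloud_to_laserscan',
            remappings=[
                ('cloud_in', '/camera/depth/color/points'),  # assumed RealSense pointcloud topic
                ('scan', '/scan'),
            ],
            parameters=[{
                'target_frame': 'camera_link',   # frame the scan is expressed in
                'min_height': 0.0,               # vertical slice of the cloud to keep (metres)
                'max_height': 1.0,
                'range_min': 0.3,                # approximate usable D435i depth range
                'range_max': 4.0,
            }],
        ),
    ])
```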
Hi @MartyG-RealSense Thanks for your response. I'm actually aiming to translate a 2D pixel from the camera into a format the ROS2 Navigation system can use. I'm trying to project that pixel point onto a 2D occupancy map grid. Is this possible?
Update for more clarification: what I mean by projecting the pixel point onto a 2D occupancy map grid is that I want to tell a robot to navigate to that pixel location using ROS2 Navigation. The route I'm taking to solve this is to change the pixel point into a 3D point coordinate, change the 3D coordinate to the same axes as the map frame, and then listen for the transform from the map frame to the camera's optical frame.
As you are using ROS2 Navigation, I will seek the advice of the developer @SteveMacenski as he works on that system. Hi @SteveMacenski Would it be possible to provide @najiib49 with advice about their question in the comment above, please? Many thanks!
Hi @najiib49 Have you made any progress with your ROS2 Navigation issue, please? Thanks!
Hi @MartyG-RealSense Thanks for reaching out. I am making some progress on this task. I'm currently using an aruco marker's pose instead of a random pixel. Now I'm trying to create a transformation between the aruco marker's pose and the map frame using ROS2 tf2. Please let me know if there's a better way of going about this. I'm using this to achieve it.
There is a recently published ROS2 package called Aruco Pose Filter at the link below with a TF2 feature that can "publish a transform from camera link to (marker_link_prefix + marker_id)".
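As a rough sketch only: once that camera-to-marker transform is on TF, the marker's pose in the map frame can be obtained with a single lookup_transform call. The marker frame name marker_link_0 below is a hypothetical example.

```python
# Hypothetical sketch (untested): look up an aruco marker's pose in the map frame,
# assuming some node already publishes a camera_link -> marker_link_0 transform on TF.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from rclpy.duration import Duration
from tf2_ros import Buffer, TransformListener

class MarkerInMap(Node):
    def __init__(self):
        super().__init__('marker_in_map')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.timer = self.create_timer(1.0, self.lookup)

    def lookup(self):
        try:
            # TF chains map -> ... -> camera_link -> marker_link_0 automatically.
            t = self.tf_buffer.lookup_transform(
                'map', 'marker_link_0', Time(), timeout=Duration(seconds=0.5))
            p = t.transform.translation
            self.get_logger().info(f'marker in map: x={p.x:.2f} y={p.y:.2f} z={p.z:.2f}')
        except Exception as e:
            self.get_logger().warn(f'TF lookup failed: {e}')

def main():
    rclpy.init()
    rclpy.spin(MarkerInMap())

if __name__ == '__main__':
    main()
```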
@MartyG-RealSense Thank you so much! This was exactly what I was looking for.
That's great to hear, @najiib49 - thanks very much for the update :)
Hi @najiib49 Do you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense not anymore. Once again thank you so much for your help.
You are very welcome. I'm pleased that I could help. As you do not require further assistance, I will close this case. Thanks again!
Hi all,
I'm using a RealSense camera (D435i). I grab a pixel in the image and convert it to a 3D point coordinate using rs2_deproject_pixel_to_point. So far the 3D point is accurate, as I've tested it with rs2_project_point_to_pixel, which returns the pixel point I started with.
My question is: how do I project this 3D point into the map frame? I have a transformation across these frames:
map --> odom --> base_link --> camera_link --> camera_aligned_depth_to_color_frame --> camera_color_optical_frame
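For reference, a simplified sketch of this round trip (not my exact code). The intrinsics are built from the aligned-depth camera_info topic and the depth is read from the 16UC1 aligned depth image:

```python
# Simplified sketch of the deproject/re-project round trip.
# Intrinsics come from /camera/aligned_depth_to_color/camera_info,
# depth from the aligned depth image (16UC1, millimetres, as a numpy array).
import pyrealsense2 as rs

def make_intrinsics(camera_info):
    """Build rs.intrinsics from a sensor_msgs/CameraInfo message."""
    intr = rs.intrinsics()
    intr.width = camera_info.width
    intr.height = camera_info.height
    intr.fx = camera_info.k[0]
    intr.fy = camera_info.k[4]
    intr.ppx = camera_info.k[2]
    intr.ppy = camera_info.k[5]
    intr.model = rs.distortion.none
    intr.coeffs = [0.0] * 5
    return intr

def pixel_round_trip(camera_info, depth_image, u, v):
    intr = make_intrinsics(camera_info)
    depth_m = depth_image[v, u] * 0.001           # 16UC1 depth is in millimetres
    point = rs.rs2_deproject_pixel_to_point(intr, [float(u), float(v)], depth_m)
    pixel = rs.rs2_project_point_to_pixel(intr, point)
    return point, pixel                           # pixel should be ~[u, v]
```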
What I've tried
I use a tf2 transform listener from map to camera_color_optical_frame to get the location of the robot relative to the camera. I then add the x, y, z translation of this frame to the 3D point I get from rs2_deproject_pixel_to_point. The reason I'm adding them is that I have the transformation from map to camera_color_optical_frame, and the 3D point's origin is the left imager, as mentioned in this comment (and the documentation). To find the point in the map frame, I assumed it was just a matter of adding these values. However, I've noticed it's not always accurate.
The axes of camera_color_optical_frame and of the 3D point from rs2_deproject_pixel_to_point match, i.e. positive x is right, positive y is down and positive z is forward. The axes of the map/base_link frame are positive x forward, positive y left and positive z up. Once I add the xyz translation and the 3D point, I try to convert the result into the map frame axes (see the side note for this conversion). I've also visualized these axes in RViz.
Side note: converting xyz from the realsense/camera_color_optical_frame axes to the map/base_link frame axes.
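The kind of axis flip I mean, given the conventions above, is roughly the following sketch:

```python
# Rough sketch of the axis relabelling: optical frame (x right, y down, z forward)
# to the map/base_link convention (x forward, y left, z up).
# This is only the relabelling; it does not include the rotation that TF would apply.
def optical_to_body_axes(x_opt, y_opt, z_opt):
    x_body = z_opt        # forward  <- optical z
    y_body = -x_opt       # left     <- minus optical x
    z_body = -y_opt       # up       <- minus optical y
    return x_body, y_body, z_body
```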
An interesting observation I've seen in RViz is that the axes for camera_aligned_depth_to_color_frame and camera_color_optical_frame are different. The axes for camera_aligned_depth_to_color_frame are similar to the map/base_link frame (positive x forward, positive y left, positive z up), while camera_color_optical_frame follows the RealSense convention (positive x right, positive y down, positive z forward). However, when I echo the tf2 topic for map --> camera_color_optical_frame and map --> camera_aligned_depth_to_color_frame, both have the same translation but a different rotation quaternion (see terminal output below).
This shows me that my assumption of simply adding the xyz translation from the tf2 listener of map --> camera_color_optical_frame to the 3D point, and then flipping the x, y, z from one axis convention to the other, isn't correct. Is this possible? If not, is there a workaround to achieving it? I'd appreciate any help I can get.
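For context, a rough sketch of the alternative I'm weighing, letting tf2 apply the full transform (rotation as well as translation) via tf2_geometry_msgs instead of adding and flipping by hand, would be:

```python
# Rough sketch (untested) of transforming the deprojected point with the full TF
# transform. Assumes a node with a tf2 Buffer/TransformListener already running,
# and point_3d from rs2_deproject_pixel_to_point in metres.
from rclpy.time import Time
from rclpy.duration import Duration
from geometry_msgs.msg import PointStamped
import tf2_geometry_msgs  # registers geometry_msgs types with tf2

def point_in_map(node, tf_buffer, point_3d):
    p = PointStamped()
    p.header.frame_id = 'camera_color_optical_frame'
    p.header.stamp = node.get_clock().now().to_msg()
    p.point.x, p.point.y, p.point.z = point_3d

    # lookup_transform + do_transform_point applies rotation AND translation,
    # so no manual axis flipping is needed.
    t = tf_buffer.lookup_transform('map', 'camera_color_optical_frame',
                                   Time(), timeout=Duration(seconds=0.5))
    return tf2_geometry_msgs.do_transform_point(p, t)
```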
Additional info
I'm using the following camera topics: '/camera/color/image_raw' for the color image, while the inputs to the rs2_deproject_pixel_to_point function come from '/camera/aligned_depth_to_color/camera_info' and '/camera/aligned_depth_to_color/image_raw' respectively.