The accuracy of the distance between 2 points is high in the center but is low on the edge, how can I improve it? #11293
Hi @weicuiting As the Decimation filter reduces the resolution of the depth image, are measurements more accurate if you comment out the Decimation filter, please?
If the measurements are accurate at the center but become increasingly inaccurate towards the edge of the image, this is usually because of an inaccuracy in the depth-color alignment. You seem to have applied alignment correctly in your script, though, and placed the align process after the post-processing filter list as Intel recommend. Does accuracy improve if you align color to depth instead of depth to color by changing the align_to instruction from 'color' to 'depth'?
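A minimal sketch of that change, assuming the standard pyrealsense2 align API (the rest of the pipeline is unchanged):

```python
import pyrealsense2 as rs

# Original: align depth to the color stream's viewport
# align = rs.align(rs.stream.color)

# Suggested test: align color to the depth stream's viewport instead
align = rs.align(rs.stream.depth)
```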
As you are using pc.calculate to generate the point cloud and map_to to map color onto the depth points, it may not actually be necessary to use align_to in order to align depth to color, as map_to is performing an alignment instead. So you may be aligning twice in this script when only map_to is necessary.
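For reference, a minimal sketch of generating a pointcloud with pc.calculate and map_to alone, without an align_to stage (the output file name is hypothetical):

```python
# Hedged sketch: pc.calculate + map_to perform their own depth-color
# mapping, so no rs.align stage is used here.
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

pc = rs.pointcloud()
pc.map_to(color_frame)                        # map RGB texture onto the depth points
points = pc.calculate(depth_frame)            # build the pointcloud from depth
points.export_to_ply("out.ply", color_frame)  # optional: save for inspection
```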
Thank you for the response! I tried 'align_to = rs.stream.depth', but the trend still didn't change. More importantly, it is not suitable for my program to use 'align_to = rs.stream.depth'. This would decrease the RGB resolution and introduce black edges, which affect the segmentation accuracy of the rebars in the RGB photos (as the green masks show in the following pictures).
Are you able to comment out the align instructions and let map_to perform the pointcloud alignment, as suggested above at #11293 (comment)?
I commented out the align part, including 3 lines in def get_aligned_images(). But while calculating the 'vtx', an IndexError occurred: index 1578 is out of bounds for axis 0 with size 640, where 1578 comes from the RGB resolution and 640 from the depth resolution. Should I change the x and y pixel values in def get_3d_camera_coordinate(depth_pixel, aligned_color_frame, aligned_depth_frame, aligned_frames) according to the resolution ratio (rate x = 640/1920, rate y = 360/1080)?
Yes, I would recommend scaling the x and y pixel values according to the resolution ratio. align_to uses the RealSense SDK's 'align processing block' to automatically adjust for differences between the depth and color streams, such as different resolutions. As far as I am aware, these automatic adjustments do not take place with map_to.
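A sketch of that scaling, using the resolutions from the error above (color_x and color_y are hypothetical pixel coordinates in the RGB image):

```python
# Scale color-pixel coordinates down to depth-pixel coordinates
# when the two streams have different resolutions.
color_w, color_h = 1920, 1080   # RGB stream resolution
depth_w, depth_h = 640, 360     # depth resolution after 2x decimation

rate_x = depth_w / color_w      # 640 / 1920
rate_y = depth_h / color_h      # 360 / 1080

depth_x = int(round(color_x * rate_x))
depth_y = int(round(color_y * rate_y))
```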
Can I take this to mean that it's better to keep the same resolution for RGB and depth (such as 1280*720) if I use 'map_to', because the resolution ratio may cause bias too?
Yes, if using map_to I recommend using the same resolution for both depth and color.
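In the stream configuration from the script above, that would look like this (only the color resolution changes):

```python
# Same 1280x720 resolution for both streams when relying on map_to
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
```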
Sorry to keep you waiting; I was just calculating the results. I used the same resolution (1280*720) for RGB and depth, and commented out the 'Decimation filter' and 'align_to' to keep the depth resolution. But the result is the worst so far. Could the large bias be caused by the different field of view (FOV) of the RGB and depth sensors? Does using 'map_to' alone fail to align RGB and depth even when they have the same resolution? Currently, the original method (using 'align_to' and 'map_to') is better. Is there any other method to improve the edge inaccuracy using depth-color alignment? Can I get the pointcloud in Python as accurately as in the RealSense Viewer?
Comparing your code to the align_depth2color.py example that the script seems to be based on, I note that you use pc_filtered in the brackets, whilst align_depth2color.py uses frames instead. The frames information comes from the pipeline.wait_for_frames() instruction.
Does this mean that I can't use the filters, or that I should use the filters after align? In both cases, starting from frames = pipeline.wait_for_frames().
You can still use the filters, yes. Intel's recommendation is to place align.process after the post-processing filter list, as it helps to avoid distortions such as aliasing (jagged lines). This is a recommendation rather than a requirement though, and there are rare cases where an application has performed much better when placing align.process before the post-processing filters.
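A sketch of that recommended ordering, assuming processing blocks are applied to the whole frameset via the .as_frameset() pattern so that the filtered frameset can then be aligned (the filter objects are the ones defined in the original script):

```python
frames = pipeline.wait_for_frames()

# Post-processing filters first...
frames = decimation.process(frames).as_frameset()
frames = depth_to_disparity.process(frames).as_frameset()
frames = spatial.process(frames).as_frameset()
frames = temporal.process(frames).as_frameset()
frames = disparity_to_depth.process(frames).as_frameset()

# ...then alignment on the filtered frameset
aligned_frames = align.process(frames)
```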
I have tried using align before the filters, but unfortunately that method didn't work. Are there other methods to improve the alignment accuracy or the depth quality? (I'm wondering whether there is any problem with the depth quality.)
My understanding is that the RealSense Viewer pointcloud in its 3D mode is based on pc.calculate and map_to, and does not make use of align_to. RealSense Viewer is also a C++ application rather than a Python one. You could check whether there is a mis-calibration of your camera's depth sensing by resetting it to its factory-new default calibration in the RealSense Viewer using instructions at #10182 (comment)
OK, I'll calibrate the camera again. Do I just need to do the on-chip calibration, tare calibration and dynamic calibration?
Whilst on-chip calibration can be used to calibrate the camera, simply using the Viewer's factory-default calibration reset can work just as well. On-chip calibration improves depth image quality, whilst tare calibration improves depth measurement accuracy. Dynamic Calibration is a different method of calibration to on-chip that has the benefit of being able to calibrate the RGB sensor too.

The grid of rebar objects has the potential to confuse the depth sensing algorithm of the camera by forming a repetitive pattern (a series of similar looking objects in horizontal and vertical arrangements, like ceiling / floor tiles). Intel have a guide at the link below to reducing the negative impact of repetitive patterns.
https://dev.intelrealsense.com/docs/mitigate-repetitive-pattern-effect-stereo-depth-cameras
Thank you for the links, I'll have a try!
Sorry to bother you again: is there any method to plot points on the pointcloud.ply exported via map_to? (I want to check the point locations on the pointcloud.)
Once a .ply is exported, you can import it into other tools and pointcloud-processing libraries such as PCL and Open3D, but you cannot import a .ply directly back into the RealSense SDK and access its depth information. A .bag file is the best format for reading recorded depth data back into an SDK script.
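A minimal sketch of replaying a recorded .bag in a pyrealsense2 script (the file name is hypothetical):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Replay from a recorded file instead of a live camera
config.enable_device_from_file("recording.bag")
pipeline.start(config)

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
```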
Are there any recommended samples for Open3D?
There are some Open3D examples for RealSense at the link below.
http://www.open3d.org/docs/0.12.0/tutorial/sensor/realsense.html
The official RealSense documentation for the Open3D wrapper also has some Python example code.
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/open3d
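For inspecting the exported pointcloud, a short Open3D sketch (assumes Open3D is installed and uses a hypothetical file name):

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("out.ply")  # load the exported pointcloud
print(pcd)                                # prints the number of points
o3d.visualization.draw_geometries([pcd])  # inspect point locations interactively
```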
Thank you very much! I'll have a try!
Hi @weicuiting Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received. |
Issue Description
Hi, sorry to bother you. I have read many issues but haven't found a proper solution. I'm trying to measure the diameters of rebars in RGB photos taken by a RealSense D435i. The problem is that the accuracy of the diameters is high for rebars in the center but low for rebars at the edge, as shown in the following figure (the real values are at the top left; the calculated values, obtained with the Euclidean distance, are at the bottom right).
I'm wondering whether I have overlooked some key point. I don't know whether the problem is that the pointcloud obtained in Python is not aligned correctly, because the rebars do not seem to lie at the highest place and show a bias.
This is the code I use to transform the points from pixel coordinates to camera coordinates (using the pointcloud method):
import pyrealsense2 as rs
import numpy as np
import cv2
''' camera setting '''
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.bgr8, 30)
profile = pipeline.start(config)
pc = rs.pointcloud()
points = rs.points()
#Define filters
#Decimation:
decimation = rs.decimation_filter()
#Depth to disparity
depth_to_disparity = rs.disparity_transform(True)
disparity_to_depth = rs.disparity_transform(False)
#Spatial:
spatial = rs.spatial_filter()
spatial.set_option(rs.option.holes_fill, 0) # between 0 and 5 def = 0
spatial.set_option(rs.option.filter_magnitude, 2) # between 1 and 5 def=2
spatial.set_option(rs.option.filter_smooth_alpha, 0.5) # between 0.25 and 1 def=0.5
spatial.set_option(rs.option.filter_smooth_delta, 20) # between 1 and 50 def=20
#Temporal:
temporal = rs.temporal_filter()
temporal.set_option(rs.option.filter_smooth_alpha, 0.4)
temporal.set_option(rs.option.filter_smooth_delta, 20)
colorizer = rs.colorizer()
#Get info about depth scaling of the device
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: " , depth_scale)
#align to color
align_to = rs.stream.color
align = rs.align(align_to)
def get_aligned_images():
    ...  # function body truncated in the original post

def get_3d_camera_coordinate(depth_pixel, aligned_color_frame, aligned_depth_frame, aligned_frames):
    x = np.round(depth_pixel[1]).astype(np.int64)
    y = np.round(depth_pixel[0]).astype(np.int64)
    ...  # remainder truncated in the original post
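The bodies of the two helper functions above were truncated in the original post. A minimal sketch of what such helpers typically look like, using the SDK's rs2_deproject_pixel_to_point; the bodies are assumptions reconstructed from the surrounding discussion, not the poster's actual code:

```python
def get_aligned_images():
    # Align the frameset depth-to-color and return the pieces used below
    frames = pipeline.wait_for_frames()
    aligned_frames = align.process(frames)
    aligned_color_frame = aligned_frames.get_color_frame()
    aligned_depth_frame = aligned_frames.get_depth_frame()
    return aligned_frames, aligned_color_frame, aligned_depth_frame

def get_3d_camera_coordinate(depth_pixel, aligned_color_frame, aligned_depth_frame, aligned_frames):
    x = int(np.round(depth_pixel[1]))
    y = int(np.round(depth_pixel[0]))
    # Deproject the pixel into 3D camera coordinates using the aligned
    # depth frame's intrinsics and the measured distance at that pixel
    depth_intrin = aligned_depth_frame.profile.as_video_stream_profile().intrinsics
    dis = aligned_depth_frame.get_distance(x, y)
    camera_coordinate = rs.rs2_deproject_pixel_to_point(depth_intrin, [x, y], dis)
    return dis, camera_coordinate
```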