depth inaccuracy #2523
Comments
[RealSense Customer Engineering Team Comment]

Thanks for your answers! In addition to that, do you think I understand the align process correctly? I measure the height of a stick accurately using rs-measure, but my code measures with an error of ~7%. Does this mean something is wrong with my code?

Hello @kubrakaracan, the accuracy measurement and profiling is convoluted, so I'll try to address specific points:

In order to mitigate some of the errors produced during the alignment you may use the following methods:

I solved the case in the screenshot, but the accuracy is still an issue. Waiting for your answer! Thanks in advance!

Hi @kubrakaracan,

Hi @YangJiao1996,

The absolute accuracy can only be improved with a recalibration.
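For context on the align process discussed in the comments above, here is a minimal sketch of the usual depth-to-color alignment flow with librealsense2. The stream profiles, the warm-up loop, and the sampled pixel are assumptions for illustration, not details taken from this thread:

#include <librealsense2/rs.hpp>
#include <cstdlib>
#include <iostream>

int main() try
{
    rs2::pipeline pipe;
    rs2::config cfg;

    // Assumed stream profiles; the original post does not state which were used.
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
    pipe.start(cfg);

    // Map every depth pixel into the color camera's viewpoint so the same
    // (x, y) can be used on the color image and with get_distance().
    rs2::align align_to_color(RS2_STREAM_COLOR);

    // Skip a few frames so auto-exposure can settle (assumed warm-up count).
    for (int i = 0; i < 30; ++i) pipe.wait_for_frames();

    rs2::frameset frames  = pipe.wait_for_frames();
    rs2::frameset aligned = align_to_color.process(frames);
    rs2::depth_frame depth = aligned.get_depth_frame();

    // Distance in meters at the center pixel of the aligned depth frame.
    float d = depth.get_distance(depth.get_width() / 2, depth.get_height() / 2);
    std::cout << "Center distance: " << d << " m" << std::endl;

    return EXIT_SUCCESS;
}
catch (const rs2::error& e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}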
Hi,
I am trying to conduct an experiment using a D435. The experiment compares the theoretical and actual displacements of my robot, and the D435 is used to find the actual displacement. However, the error in the displacement along the z-axis turned out to be ~8% (for example, a stick whose height should measure 14 cm comes out as 13.1 cm). The camera is on a tripod, 80 cm away from my workspace. I also checked the z-distance in the Depth Quality Tool, and it shows an error as well (it reports the distance to the ground as 745 mm when it is actually 800 mm).

Basically, my code takes a color image as input and segments it by color using a connected-components algorithm. It then passes the start and end pixels detected in the color image to rs2_deproject_pixel_to_point(). In the deprojection function I use the aligned depth frame's intrinsics and the distance returned by get_distance(), again with the aligned depth frame.

I am conducting a delicate experiment, so I need high accuracy. What can be improved in my code/procedure, or what is wrong? I also reviewed issues similar to mine and applied their suggestions to my code, but they didn't work. Thank you in advance!
My code's working procedure is as follows:
int main(int argc, char * argv[]) try
{
rs2::pipeline pipe;
rs2::config cfg;
}
catch (const rs2::error & e)
{
std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
return EXIT_FAILURE;
}
catch (const std::exception& e)
{
std::cerr << e.what() << std::endl;
return EXIT_FAILURE;
}
void run( const Mat &img, int threshval, const rs2::depth_frame& frame, pixel u, pixel v, bool quiet )
{
Mat bw = threshval < 128 ? (img < threshval) : (img > threshval);
Mat labelImage(img.size(), CV_32S);
int nLabels = connectedComponents(bw, labelImage, 8, CV_32S);
vector<Vec3b> colors(nLabels);
colors[0] = Vec3b(0, 0, 0);
for(int label = 1; label < nLabels; ++label)
{
colors[label] = Vec3b( (rand()&255), (rand()&255), (rand()&255) );
}
Mat dst(img.size(), CV_8UC3);
for(int r = 0; r < dst.rows; ++r)
{
for(int c = 0; c < dst.cols; ++c)
{
int label = labelImage.at<int>(r, c);
Vec3b &pixel = dst.at<Vec3b>(r, c);
pixel = colors[label];
}
}
Mat seeMyLabels;
normalize(labelImage, seeMyLabels, 0, 255, NORM_MINMAX, CV_8U);
applyColorMap(seeMyLabels, seeMyLabels, COLORMAP_JET);
}
float dist_3d( const rs2::depth_frame& frame, pixel u, pixel v)
{
float upixel[2];
float upoint[3];
float vpixel[2];
float vpoint[3];
}
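The dist_3d() body above is truncated. For completeness, below is a sketch of how the deprojection step described in the post is typically written (following the pattern of the rs-measure example); the pixel alias and the use of the aligned depth frame's own intrinsics are assumptions, not confirmed details of the author's code:

#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>
#include <cmath>
#include <utility>

using pixel = std::pair<float, float>; // assumed pixel type

float dist_3d(const rs2::depth_frame& frame, pixel u, pixel v)
{
    float upixel[2] = { u.first, u.second };   // start pixel (aligned/color coordinates)
    float vpixel[2] = { v.first, v.second };   // end pixel
    float upoint[3];                           // start point in 3D camera space
    float vpoint[3];                           // end point in 3D camera space

    // Range in meters at each pixel, read from the aligned depth frame.
    float udist = frame.get_distance(static_cast<int>(upixel[0]), static_cast<int>(upixel[1]));
    float vdist = frame.get_distance(static_cast<int>(vpixel[0]), static_cast<int>(vpixel[1]));

    // Intrinsics of the aligned depth stream, as the post describes.
    rs2_intrinsics intr =
        frame.get_profile().as<rs2::video_stream_profile>().get_intrinsics();

    // Deproject both pixels into 3D points.
    rs2_deproject_pixel_to_point(upoint, &intr, upixel, udist);
    rs2_deproject_pixel_to_point(vpoint, &intr, vpixel, vdist);

    // Euclidean distance between the two 3D points (e.g. the stick's height).
    return std::sqrt(std::pow(upoint[0] - vpoint[0], 2) +
                     std::pow(upoint[1] - vpoint[1], 2) +
                     std::pow(upoint[2] - vpoint[2], 2));
}

Note that if the frame passed in is the depth frame aligned to color, its profile should already report intrinsics matching the color stream, which is what the deprojection needs for pixels picked on the color image.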