
D435 on highly reflective ground gives erroneous depth data #10229

Closed
MichaelPan0905 opened this issue Feb 10, 2022 · 26 comments

@MichaelPan0905

MichaelPan0905 commented Feb 10, 2022

librealsense 2.50.0 RELEASE
OS Linux
Name Intel RealSense D435
Serial Number 030522071656*
Firmware Version 05.13.00.50
Advanced Mode YES
Camera Locked YES
Usb Type Descriptor 3.2
Product Line D400
Asic Serial Number 030323023196
Firmware Update Id 030323023196

*This problem is not only for this camera, but for all D435/D435i/D455 cameras we have.

Hello Marty,
I've run into further problems when using RealSense cameras on some kinds of highly reflective ground, such as tiled or epoxy floors.
an example of an epoxy floor

There are two main issues I need help with.
First, the depth data on highly reflective ground becomes unstable or even vanishes, typically when the camera is mounted at a large angle to the ground.
There are several conditions that occur when running on such highly reflective ground (the ground is a plane):
unstable up
unstable down
examples of the unstable data

left and right data vanished
an example of the missing data

data vanished 1
data vanished 2
the missing depth data, as seen in realsense-viewer; the camera is about 90° to the ground, and the red arrow marks the USB port

Since we want to use the depth data to detect small obstacles and cliffs, we need the camera's data to be stable and complete. So I wonder, is there some way to deal with this kind of situation? Thanks.

The second issue is a harder version of the first. We encountered an environment like this: the floor of an underground parking lot is epoxy, many traffic signs are painted on the road in white paint, and because of skylights several parts of the road are directly exposed to sunlight, like the picture below:
a road in an underground parking lot, with sunlight
When the camera sees the white traffic signs exposed to sunlight, there are serious errors in the depth data at the white lines and arrows in the picture above.

front 1
front 2
screenshot 2021-09-06 16:16:42
screenshot 2021-09-06 16:16:52
The points in the red circle are where the white painted arrow should be.

We found that this has something to do with the camera's exposure and posture.
Normally, with auto-exposure enabled, the problem occurs whenever the camera sees the traffic signs. If we disable auto-exposure and set a short fixed exposure (like 500), the error disappears and there is no hole or curve, but the depth data also becomes unstable, with many noise points near the ground.

The shape of the wrong depth data is also related to the camera's posture. If we call the posture in the picture below "the right posture":
the right posture
then under this posture there is a concave region like the one below:
the front camera
which dips below the ground plane.

But if we turn the camera 90° to the right or left (in order to mount it on the side of the robot), we get these results:
side 1
side 2
The red arrows mark the position of the USB port on the camera.

How can I deal with this kind of situation?
Thanks!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Feb 10, 2022

Hi @MichaelPan0905 Degraded depth readings when observing a reflective floor surface are a known phenomenon. The negative impact of glare from reflections on the depth image can be significantly reduced by applying an external filter, called a linear polarization filter, over the lenses on the outside of the camera. Section 4.4 of Intel's white-paper document about optical filters provides more detail about this.

https://dev.intelrealsense.com/docs/optical-filters-for-intel-realsense-depth-cameras-d400#section-4-the-use-of-optical-filters

The image below, taken from that section, demonstrates the difference that the filter can make to glare reduction when applied.

image

If you require a scripting-based solution for sensing a reflective surface then you could try aligning depth to color, like in Intel's Python example depth_under_water:

https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/depth_under_water.ipynb

In a past case involving reflectivity, a RealSense team member also advised: "In extreme case where you have brightly illuminated surfaces and really dark surfaces, it is impossible to find one value of sensor exposure without having either completely black or completely white regions, but for this case, the camera does offer a control to rapidly switch between two values of exposure".

You can test in the RealSense Viewer whether alternating the emitter makes a positive difference by enabling the Emitter On Off option under Stereo Module > Controls.

image

In regard to your second question, it may be worth trying the alternating emitter option with the floor signs to see if it provides improved results for investigating further possible fixes.
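For users who prefer to script this instead of using the Viewer, the option can also be set through pyrealsense2. The sketch below is a minimal, hedged example: the camera-facing calls assume a connected D400-series device, and `split_by_emitter` is a hypothetical helper for partitioning frames once their emitter state is known.

```python
# Sketch: enable the alternating-emitter ("Emitter On Off") option and
# separate frames by emitter state. Camera-facing calls assume a connected
# D400-series device; the import is guarded so the pure helper works without one.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # SDK not installed; split_by_emitter is still usable

def split_by_emitter(flagged_frames):
    """Partition (emitter_on, frame) pairs into (on_frames, off_frames)."""
    on = [f for flag, f in flagged_frames if flag]
    off = [f for flag, f in flagged_frames if not flag]
    return on, off

def start_alternating_emitter():
    """Start streaming with the projector toggling on alternate frames."""
    pipeline = rs.pipeline()
    profile = pipeline.start()
    depth_sensor = profile.get_device().first_depth_sensor()
    if depth_sensor.supports(rs.option.emitter_on_off):
        depth_sensor.set_option(rs.option.emitter_on_off, 1)  # 1 = alternate
    return pipeline
```

Where the firmware supports it, each depth frame's emitter state can then be read back via `frame.get_frame_metadata(rs.frame_metadata_value.frame_laser_power_mode)` and fed to the helper.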

@MichaelPan0905
Author

Thanks for the quick response; I'll try these methods later.

@MichaelPan0905
Author

MichaelPan0905 commented Feb 14, 2022

I tried linear polarizers, but they didn't work. (We actually tested them a long time ago, to handle reflections from fluorescent lights and car headlights, but they can't help with such strong sunlight.)
I also tried "Emitter On Off" and "HDR"; neither helped. Maybe there is something else I need to adjust?
Here are some pictures that can explain what I've met better:
screenshot 2022-02-11 11:40:26
screenshot 2022-02-11 11:39:58
screenshot 2022-02-11 11:40:45
screenshot 2022-02-11 11:40:52

I'll keep trying other methods to fix this.
By the way, what is the "alternating emitter option" you mentioned? Is it the "Emitter On Off" option?
Thanks.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Feb 14, 2022

Your depth image looks okay at the start of the comment above, with its color shifting progressively from blue at the point closest to the camera to red at the furthest distance (the Viewer's expected default behaviour when colorizing depth values by distance from the camera). The image then deteriorates in the subsequent pictures.

Given that you describe the depth data as becoming unstable or vanishing in your original message, please try disabling the Viewer's two GLSL options in its settings interface, as they can cause such vanishing of the 3D point cloud on some computers when they are enabled. Instructions for disabling the GLSL options can be found at #8813 (comment)

@MichaelPan0905
Author

Thanks for responding. I'll try it later. But the two depth images above are actually not okay, because the color should have a smooth transition and it doesn't. Those four images all came from the same place at the same time.
screenshot 2022-02-11 11:40:26
In the image above, the area in the white circle should be yellow, because the ground is a plane.
screenshot 2022-02-11 11:39:58
From another viewpoint, the areas along the blue dotted line should be the same color because they are at the same distance from the camera, but there is an abrupt change at the red arrow.

@MichaelPan0905
Author

Besides, when I said the depth data becomes unstable or vanishes, I meant it doesn't just look that way; it truly affects our application. I suspect that disabling those GLSL options only changes the rendering and does nothing to the actual points we get. Is that right?

@MartyG-RealSense
Collaborator

Depth frames are constructed from the camera's left and right infrared frames. I would speculate that if the infrared frames are viewing the bright white areas as plain and textureless due to the glare from reflection then it is mis-reading the depth because there is no texture on the surface for the camera to analyze for depth information. Ordinarily in such a situation, projecting the infrared dot pattern from the camera's projector onto the unreadable surface would make it analyzable, but the light conditions on the surface may be making the dots invisible to the camera.

In regard to GLSL, a RealSense user did a comparison of the point cloud image with and without GLSL at #10005

@MichaelPan0905
Author

Hi @MartyG-RealSense, I've tried disabling the GLSL options, but I'm not sure it made any difference.
Here is a screen recording of about 50 seconds:
https://user-images.githubusercontent.com/25095900/154658766-24fd671e-3bb0-4891-a826-c025378f2586.mp4
It was looking at a white painted traffic sign on the ground, under cloudy light.

@MartyG-RealSense
Collaborator

Let's take another look at the RGB scene and analyze its detail.

image

Whilst the surface is glossy, the RGB image suggests that it is not hugely reflective, unlike the tiles of an indoor office floor, and there is texture on the ground that should be analyzable for depth information by the camera; it is not like a plain white wall with low surface detail.

The infrared image tells a different story though. As discussed earlier, on that image the floor detail seems to be completely overwhelmed by the large overhead illumination, leaving a plain white area with no analyzable texture. This could account for why the depth's color shading gradient starts off correctly and then sharply changes to the solid red that represents far-distant depth values. The size and strength of the overhead lighting may be overwhelming the camera.

image

Does the image significantly improve if you map the RGB data onto the depth points? This can be done in the RealSense Viewer by first enabling Depth and then secondly enabling RGB, and the RGB should automatically map onto the depth points to create a depth-color aligned image. Alignment can help the camera to more accurately distinguish between the foreground and background.

If you are not able to map RGB onto depth in your particular project, does the image improve if you go to Stereo Module > Post-Processing, expand open the list of post-processing filters and enable the Hole-Filling filter by left-clicking on the red icon beside it to change it to blue (On)?
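For readers applying the filter in code rather than the Viewer, here is a hedged sketch: a guarded call to the SDK's `rs.hole_filling_filter`, plus a pure-Python analogue of its fill-from-left mode so the behaviour can be seen without a camera. The analogue is illustrative only, not the SDK's implementation.

```python
# Sketch: librealsense's hole-filling post-processing filter, with a plain
# Python illustration of its fill-from-left mode (mode 0). The SDK call
# assumes a real depth frame from a connected camera.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # SDK not installed; the illustrative helper still works

def sdk_hole_fill(depth_frame):
    """Apply the SDK filter; mode 0 fills holes from the left neighbour."""
    hole_filler = rs.hole_filling_filter(0)
    return hole_filler.process(depth_frame)

def fill_holes_from_left(depth_rows):
    """Illustrative analogue: zero (invalid) pixels inherit the last valid
    value to their left in the same row; leading zeros stay unfilled."""
    out = [list(row) for row in depth_rows]
    for row in out:
        last = 0
        for i, v in enumerate(row):
            if v == 0 and last != 0:
                row[i] = last
            elif v != 0:
                last = v
    return out
```

As the analogue shows, the filter only propagates nearby valid values; it cannot invent depth for a large hole, which matches the behaviour reported later in this thread.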

@MichaelPan0905
Author

Thanks for your patient responses. I actually tried both of the things you mentioned, RGB mapping and hole-filling, a long time ago, and my impression is that they don't help. RGB mapping can't fill such a big hole, and neither can the Hole-Filling filter, which actually adds some noise at the edge of the hole.
Today it's cloudy and raining a little. I'll try your suggestions again, but I don't know whether I can reproduce the glaring-sunlight scene. Maybe I'll try again when it's sunny :)

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Feb 19, 2022

I carried out some general non-RealSense research on using cameras to capture reflective epoxy floors. Suggestions included not projecting a light source onto the observed surface at the same angle that the camera is facing, and not projecting an infrared light source. The camera's projector and its IR emitter do both of these.

So as 400 Series cameras can use the ambient light in a scene to analyze surfaces for depth detail, as an alternative to analyzing the projected dot pattern, it may be worth disabling the projector in the RealSense Viewer if you have not tried that already. This can be done either by setting Laser Power to '0' or by setting the Emitter Enabled drop-down menu to 'Off'.

image

Turning off the projector in dim lighting has the disadvantage of noticeably reducing the quality of the depth image. If there is strong lighting in the scene, though, the impact should not be as bad, as the camera can make use of any visible or near-visible light source. It may not be a suitable solution if the scene relies on sunlight from the roof rather than a consistent artificial source such as the overhead ceiling strip lights off to the side of the image. It is also possible to accompany the camera with artificial light sources such as an IR illuminator lamp, which may have less of a negative impact on the image if it casts its light from a different angle than the one the camera is pointed at.
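Both ways of turning the projector off can be scripted through pyrealsense2. A minimal sketch, assuming a connected D400-series device: note that the laser_power option only accepts values on its reported min/max/step grid, so a small pure helper snaps a requested value onto it.

```python
# Sketch: disable the IR dot projector programmatically, either via the
# emitter_enabled option or by driving laser_power to zero. Laser power must
# lie on the option's min/max/step grid, hence the snapping helper.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # SDK not installed; snap_to_option_range still works

def snap_to_option_range(value, lo, hi, step):
    """Clamp value to [lo, hi] and round it to the nearest step multiple."""
    clamped = max(lo, min(hi, value))
    return lo + round((clamped - lo) / step) * step

def disable_projector(depth_sensor):
    """Two equivalent ways to stop the projector on a depth sensor."""
    if depth_sensor.supports(rs.option.emitter_enabled):
        depth_sensor.set_option(rs.option.emitter_enabled, 0)  # 0 = off
    if depth_sensor.supports(rs.option.laser_power):
        rng = depth_sensor.get_option_range(rs.option.laser_power)
        depth_sensor.set_option(
            rs.option.laser_power,
            snap_to_option_range(0, rng.min, rng.max, rng.step))
```

The helper is generic: the same snapping applies to any integer-stepped option queried via `get_option_range`.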

@MichaelPan0905
Author

Hey Marty, thanks for responding.
Whether or not it works, turning the projector off may not fit our needs, because the robot must run both in strongly lit places and in dark, lightless places. Maybe we need an automatic function to switch between light and dark modes, if turning off the projector helps. (I'll try it later; thank you anyway.)
As for my original first question, that the camera's depth data becomes unstable or even vanishes, I have some new information that may connect it to the second question.
Here are two videos of our camera running on a tiled floor; the upper left is the camera's RGB view:

realsense-2022-02-19_17.01.47_edit.mp4
realsense-2022-02-19_16.30.40_edit.mp4

For comparison, below is the same scene as in the second video; the only difference is that the lights were turned off:

realsense-2022-02-19_16.32.33_edit2.mp4

I had thought that only the floor material affected the camera's data, but it also has something to do with the light. That is to say, the two questions I originally submitted may actually be the same one.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Feb 21, 2022

In the image of the Emitter Enabled drop-down menu that I posted above, there is an option called Laser Auto. My understanding is that it enables the camera to decide whether to turn the IR emitter on or off depending on the current lighting conditions.

Intel 'deprecated' this feature as far back as 2018, as mentioned in #1793 (comment). Usually, deprecated doesn't mean removed but rather "we don't recommend using this, as it may be removed in the future". So you could try the Laser Auto option in different lighting conditions to see whether it turns the emitter on or off depending on the current lighting level. In most circumstances that I have tested, using Laser Auto has the same effect as setting the emitter to Off, but I have not tested it in near-dark or totally dark conditions.

An alternative approach that one RealSense user took was to continuously monitor the exposure metadata values, using a method described by a RealSense team member at #1624 (comment)
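The metadata-monitoring idea can be reduced to a small decision function: when auto-exposure pushes the exposure time high, the scene is dark, so turn the emitter on; when exposure drops, ambient light suffices, so turn it off. The thresholds below are hypothetical and would need tuning per scene; the two-threshold hysteresis stops the emitter flickering near a single cut-off.

```python
# Sketch of a light/dark auto-switch driven by the depth stream's exposure
# metadata. Threshold values are illustrative (microseconds), not measured.
DARK_THRESHOLD_US = 20000   # exposure above this => scene dark => emitter on
BRIGHT_THRESHOLD_US = 8000  # exposure below this => scene bright => emitter off

def decide_emitter(exposure_us, emitter_on):
    """Return the new emitter state given current exposure and previous state."""
    if exposure_us >= DARK_THRESHOLD_US:
        return True
    if exposure_us <= BRIGHT_THRESHOLD_US:
        return False
    return emitter_on  # inside the hysteresis band: keep the previous state
```

In a real loop, the exposure would come from `frame.get_frame_metadata(rs.frame_metadata_value.actual_exposure)` on each depth frame, and the decision would be written back to the sensor's emitter_enabled option.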

There was a case with a tiled floor at the link below that had a similar point cloud to yours. In that case, using a linear polarizer filter helped a lot (you have already tried these filters), though they apparently didn't have the variable-lighting problem in the indoor location where they used the filter.

https://support.intelrealsense.com/hc/en-us/community/posts/360043612734-Issue-with-tiled-floor

@MichaelPan0905
Author

Well… I don't know why I can't upload new videos, even ones under 10 MB; uploading images also failed.
Hi @MartyG-RealSense, thanks for your advice. These days I've tried most of your suggestions, but none of them worked.
First I tried RGB mapping; the hole and the curve on its border are still there.
Then I tried hole-filling in Post-Processing; unsurprisingly it can't fill the big hole, and it even adds noise.
I also tried turning off the projector; I had already tried setting the emitter to "Off" before, and I have also tried "Laser Auto" and "Emitter On Off". I don't know whether the depth stream should be restarted after changing the setting (I forgot to try that), but setting the emitter to "Off" made no change to the capture, and "Laser Auto" and "Emitter On Off" mostly made no difference either.

@MichaelPan0905
Author

MichaelPan0905 commented Feb 25, 2022

Done. The two screen recordings below show the methods I've tried.

realsense-2022-02-23_12.10.18_part1.mp4
realsense-2022-02-23_12.10.18_part2.mp4

@MichaelPan0905
Author

MichaelPan0905 commented Feb 25, 2022

One more thing I wonder about: I've tried two other cameras from different companies, and they both do well with glaring sunlight and reflective ground, but they are not good at dark, reflective materials like black cars.
Here are their results for the same scene.
orbbec DaBai Pro:
screenshot 2022-02-25 18:33:07 (DaBai Pro)
another camera:
screenshot 2022-02-25 18:29:28 (ds462)

Neither camera has RGB, and neither can see reflective, dark materials. Their technical support said it might be because their laser power is lower than RealSense's, which makes them blind to dark cars, but I guess that might also be an important reason why they handle sunlight and reflective floors well. That matches your comments about the laser and the projector. But when I test the RealSense, turning off the emitter makes almost no difference. So have I misunderstood something, or am I testing with the wrong method?
So far the only thing I've tried that makes a difference is adjusting the exposure time: the shorter it is, the better the camera handles sunlight, but the less stable its data becomes.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Feb 25, 2022

Depth cameras in general (not only RealSense) have difficulty reading depth detail from dark grey and black surfaces because it is a physics principle that those color shades absorb light. The darker the color shade, the less depth information that is returned.

If the emitter is turned off then there needs to be a strong light source present so that the camera can use the light instead of the projected dots to analyze surfaces for depth detail, otherwise the depth image quality worsens.

A test I conducted in the past week with depth exposure found that, in the scene I was testing at least, the depth image remained relatively intact when reducing exposure as far down as '1000', but below that the depth image progressively broke up as exposure was reduced further towards zero.

Intel's camera tuning guide has a section, at the link below, about using the camera in strong sunlight. It suggests defining an auto-exposure region of interest (ROI) in the lower half of the image and/or reducing manual exposure to near 1 ms.

https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance#section-use-sunlight-but-avoid-glare
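The tuning guide's two suggestions can be scripted together. Computing a lower-half region of interest from the stream resolution is pure arithmetic; applying it uses the SDK's roi_sensor interface. A hedged sketch, assuming a connected D400-series camera streaming depth at the given resolution (ROI changes generally need the stream to be running):

```python
# Sketch: restrict auto-exposure to the lower half of the image, per the
# tuning guide's sunlight advice. Camera-facing calls assume a connected
# device with an active depth stream at the given resolution.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # SDK not installed; lower_half_roi still works

def lower_half_roi(width, height):
    """Region of interest covering the bottom half of a width x height image."""
    return {"min_x": 0, "min_y": height // 2,
            "max_x": width - 1, "max_y": height - 1}

def apply_lower_half_roi(depth_sensor, width=848, height=480):
    """Apply the ROI to the depth sensor's auto-exposure."""
    roi_sensor = depth_sensor.as_roi_sensor()
    roi = rs.region_of_interest()
    r = lower_half_roi(width, height)
    roi.min_x, roi.min_y = r["min_x"], r["min_y"]
    roi.max_x, roi.max_y = r["max_x"], r["max_y"]
    roi_sensor.set_region_of_interest(roi)
```

The near-1 ms manual-exposure alternative would instead disable `rs.option.enable_auto_exposure` and set `rs.option.exposure` to roughly 1000 microseconds.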

@MartyG-RealSense
Collaborator

Hi @MichaelPan0905 Do you have an update about this case that you can provide, please? Thanks!

@MichaelPan0905
Author

@MartyG-RealSense Sorry for not answering you in a timely way! These days I've had a lot of other work, so I haven't made time for the tests. I'll try them next week. Thanks for your attention to this.

@MartyG-RealSense
Collaborator

You are very welcome. I look forward to your next report. :)

@MartyG-RealSense
Collaborator

Hi @MichaelPan0905 Have you had the opportunity to perform further testing regarding this case, please? Thanks!

@MartyG-RealSense
Collaborator

As there have been no further comments for the past month, I will close this case for the moment. You are welcome to re-open the issue at a future date when you are ready to resume. Thanks again!

@Sowmesh01

Hello Marty,
I am facing the same issue: reflection from sunlight creates noise, and depth information is not obtained. The application of the depth camera in my case is not 3D mapping; I am using it for obstacle detection.
In the depth color frame, the reflective area on the floor shows as red, so it gives false readings.
I want to know whether Intel has worked on this issue since, or reached any conclusion about it.
Thank you for your time.

@MartyG-RealSense
Collaborator

Hi @Sowmesh01 Intel introduced D435f, D435if and D455f camera models into the RealSense product range that are equipped with light-blocking filters on the left and right IR sensors.

https://www.intelrealsense.com/stereo-depth-with-ir/

The image below from the above link demonstrates the difference made in a scene with reflected light when using a filter-equipped D435f compared to a filterless D435.

image


If purchasing a filter-equipped camera is not an option then the type of filter used - CLAREX NIR-75N - is also purchasable separately so that it can be fitted over the lenses of a camera that does not have them. More information can be found here:

https://astraproducts.com/info-acrylic-nir-filters.asp

@Sowmesh01

Thank you for the reply Marty.
The photo you uploaded shows the effect under artificial light. I have D435f and D435if models; the filters do work for reflections from artificial light, but not for reflections from sunlight. The images I have uploaded show the issue: the reflective areas provide no depth data and appear as black areas in the depth-color frame.
Screenshot 2023-08-17 113737
Screenshot 2023-08-17 113723

@MartyG-RealSense
Collaborator

It appears that the affected areas are glass panes. Adding a different kind of filter product over the lenses on the outside of the camera called a linear polarization filter can greatly negate the reflections from glass, as highlighted earlier in this discussion at #10229 (comment)

Any polarization filter can work so long as it is linear (except for the circular lenses used in 3D glasses, which will not work), so they can be purchased inexpensively from suppliers such as Amazon by searching for the term linear polarizing sheet.

Can you also test the scene in the RealSense Viewer tool, please, to see whether you get an improved image, as the Depth Quality Tool is intended for depth-quality testing rather than depth capture.
