D435 running on highly reflective ground gives erroneous depth data #10229
Hi @MichaelPan0905 Degraded depth readings when observing a reflective floor surface are a known phenomenon. The negative impact of glare from reflections on the depth image can be significantly reduced by applying an external filter, called a linear polarization filter, over the lenses on the outside of the camera. Section 4.4 of Intel's white-paper document about optical filters provides more detail about this. The image below, taken from that section, demonstrates the difference the filter can make to glare reduction.

If you require a scripting-based solution for sensing a reflective surface, you could try aligning depth to color, as in Intel's Python example depth_under_water: https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/depth_under_water.ipynb

In a past case involving reflectivity, a RealSense team member also advised: "In extreme case where you have brightly illuminated surfaces and really dark surfaces, it is impossible to find one value of sensor exposure without having either completely black or completely white regions, but for this case, the camera does offer a control to rapidly switch between two values of exposure". You can test in the RealSense Viewer whether alternating the emitter makes a positive difference by enabling the Emitter On Off option under Stereo Module > Controls.

In regard to your second question, it may be worth trying the alternating-emitter option with the floor signs to see whether it provides improved results worth investigating further.
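For anyone wanting to try the alternating-emitter control from a script rather than the Viewer, below is a minimal, untested sketch using the pyrealsense2 API. The option and metadata names (`emitter_on_off`, `frame_laser_power_mode`) come from the librealsense API; the `split_frames_by_emitter` helper is a hypothetical convenience added here for sorting frames, and the frame count of 10 is arbitrary.

```python
def split_frames_by_emitter(pairs):
    """Pure helper (hypothetical): separate (emitter_state, frame) pairs into
    on/off lists. emitter_state is the frame_laser_power_mode metadata
    value, where 1 means the emitter was on for that frame."""
    on, off = [], []
    for state, frame in pairs:
        (on if state == 1 else off).append(frame)
    return on, off

def stream_with_alternating_emitter():
    # Local import so the helper above is usable without a camera attached.
    import pyrealsense2 as rs  # pip install pyrealsense2

    pipeline = rs.pipeline()
    profile = pipeline.start()
    depth_sensor = profile.get_device().first_depth_sensor()
    depth_sensor.set_option(rs.option.emitter_enabled, 1)
    if depth_sensor.supports(rs.option.emitter_on_off):
        # Alternate the emitter on/off on consecutive frames,
        # like the Viewer's "Emitter On Off" toggle.
        depth_sensor.set_option(rs.option.emitter_on_off, 1)

    pairs = []
    for _ in range(10):
        depth = pipeline.wait_for_frames().get_depth_frame()
        state = depth.get_frame_metadata(
            rs.frame_metadata_value.frame_laser_power_mode)
        pairs.append((state, depth))
    pipeline.stop()
    return split_frames_by_emitter(pairs)
```

With the frames sorted into "emitter on" and "emitter off" sets, an application can pick whichever set reads the reflective surface better, or combine them.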
Thanks for the quick response; I'll try these methods later.
Your depth image looks okay at the start in the comment above, with its color progressively shifting from blue at the point closest to the camera to red at the furthest distance (the Viewer's expected default behaviour when colorizing depth values according to distance from the camera). The image then deteriorates in the subsequent pictures.

Given that you describe the depth data as becoming unstable or vanishing in your original message, please try disabling the Viewer's two GLSL options in its settings interface, as they can cause this kind of vanishing of the 3D point cloud on some computers when they are enabled. Instructions for disabling the GLSL options can be found at #8813 (comment)
Also, when I said the depth data becomes unstable or vanishes, I meant that it genuinely affects usage, not just the visualization. My understanding is that disabling those GLSL options only changes the visual rendering and does not help the actual points we get - is that right?
Depth frames are constructed from the camera's left and right infrared frames. I would speculate that if the infrared frames see the bright white areas as plain and textureless due to the glare from reflection, then the depth is being misread because there is no texture on the surface for the camera to analyze for depth information. Ordinarily in such a situation, projecting the infrared dot pattern from the camera's projector onto the unreadable surface would make it analyzable, but the light conditions on the surface may be making the dots invisible to the camera.

In regard to GLSL, a RealSense user did a comparison of the point cloud image with and without GLSL at #10005
Hi @MartyG-RealSense , I've tried disabling the GLSL options, but I'm not sure it made any difference.
Let's take another look at the RGB scene and analyze its detail. Whilst the surface is glossy, the RGB image suggests that it is not hugely reflective like the tiles of an indoor office floor, and there is texture on the ground that the camera should be able to analyze for depth information - it is not like a plain white wall with low surface detail.

The infrared image tells a different story, though. As discussed earlier, on that image the floor detail seems to be completely overwhelmed by the large overhead illumination, leaving a plain white area with no analyzable texture. This could account for why the depth's color-shading gradient starts off correctly and then sharply changes to the solid red representing far-distant depth values. The size and strength of the overhead lighting may be overwhelming the camera.

Does the image significantly improve if you map the RGB data onto the depth points? This can be done in the RealSense Viewer by first enabling Depth and then enabling RGB; the RGB should automatically map onto the depth points to create a depth-color aligned image. Alignment can help the camera to more accurately distinguish between the foreground and background.

If you are not able to map RGB onto depth in your particular project, does the image improve if you go to Stereo Module > Post-Processing, expand the list of post-processing filters, and enable the Hole-Filling filter by left-clicking on the red icon beside it to change it to blue (On)?
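If the hole-filling filter needs to be applied outside the Viewer, a minimal pyrealsense2 sketch follows. `rs.hole_filling_filter` is a real librealsense post-processing filter; the `count_holes` helper is a hypothetical addition here, useful for comparing the number of invalid (zero) depth pixels before and after filtering.

```python
def count_holes(depth_rows):
    """Pure helper (hypothetical): count zero-valued (invalid) depth pixels
    in a depth image represented as a 2D list of integers."""
    return sum(1 for row in depth_rows for v in row if v == 0)

def filtered_depth_frame():
    # Local import so count_holes() is usable without a camera attached.
    import pyrealsense2 as rs  # pip install pyrealsense2

    pipeline = rs.pipeline()
    pipeline.start()
    # Matches the Viewer's Stereo Module > Post-Processing > Hole-Filling
    # toggle; the filter's mode (0-2) trades accuracy against fill strength.
    hole_filling = rs.hole_filling_filter()
    depth = pipeline.wait_for_frames().get_depth_frame()
    filled = hole_filling.process(depth)
    pipeline.stop()
    return filled
```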
Thanks for your patient responses. I actually tried both of the things you mentioned, RGB mapping and hole-filling, a long time ago, and my impression is that they did not help. The RGB mapping can't fill such a big hole, and neither can hole-filling; the latter actually introduced more noise at the edges of the hole.
I carried out some general non-RealSense research into using cameras to capture reflective epoxy floors. Suggestions included not projecting a light source onto the observed surface at the same angle that the camera is facing, and not projecting an infrared light source. The camera's projector and its IR emitter do both of these.

Since 400 Series cameras can use the ambient light in a scene to analyze surfaces for depth detail, as an alternative to analyzing the projected dot pattern, it may be worth disabling the projector in the RealSense Viewer if you have not tried that already. This can be done either by setting Laser Power to '0' or by setting the Emitter Enabled drop-down menu to 'Off'.

Turning off the projector in dim lighting conditions has the disadvantage of noticeably reducing the quality of the depth image. If there is strong lighting in the scene, though, the impact should not be as bad, as the camera can make use of any visible or near-visible light source. It may not be a suitable solution if the scene relies on sunlight from the roof rather than a consistent artificial source such as the overhead ceiling strip lights that are off to the side of the image.

It is also possible to accompany the camera with artificial light sources such as an IR illuminator lamp, which may have less of a negative impact on the image if it casts its light from a different angle than the one the camera is pointed at.
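Disabling the projector can also be done programmatically. Below is a hedged pyrealsense2 sketch; the option names are from the librealsense API, while `should_disable_emitter` and its threshold of 100 are purely illustrative assumptions (there is no official brightness cutoff) for deciding when ambient light is strong enough to do without the dot pattern.

```python
def should_disable_emitter(mean_brightness, threshold=100):
    """Pure helper (hypothetical): with strong ambient light the camera can
    analyze scene texture instead of the projected dot pattern, so the
    emitter can be turned off. The threshold is illustrative only."""
    return mean_brightness >= threshold

def disable_projector():
    # Local import so the helper above is usable without a camera attached.
    import pyrealsense2 as rs  # pip install pyrealsense2

    pipeline = rs.pipeline()
    profile = pipeline.start()
    sensor = profile.get_device().first_depth_sensor()
    # Equivalent to the Viewer's Emitter Enabled = 'Off' ...
    sensor.set_option(rs.option.emitter_enabled, 0)
    # ... or, alternatively, set the laser power to zero:
    # sensor.set_option(rs.option.laser_power, 0)
    return pipeline
```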
Hey Marty, thanks for responding.

realsense-2022-02-19_17.01.47_edit.mp4
realsense-2022-02-19_16.30.40_edit.mp4

For comparison, below is the same scene as the camera in the second video; the only difference is that the lights were turned off:

realsense-2022-02-19_16.32.33_edit2.mp4

I thought it was only the material of the floor that affected the camera's data, but it turns out the lighting is also a factor. That is to say, the two questions I originally submitted may actually be the same one.
In the image of the Emitter Enabled drop-down menu that I posted above, there is an option called Laser Auto. My understanding is that this enables the camera to decide whether to turn the IR emitter on or off depending on the current lighting conditions. Intel 'deprecated' this feature as far back as 2018, as mentioned in #1793 (comment) - usually, deprecated doesn't mean removed but instead "we don't recommend using this as it may be removed in the future". So you could try the Laser Auto option in different lighting conditions to see whether it turns the emitter on or off depending on the current lighting level. Under most circumstances that I have tested in, using Laser Auto has the same effect as setting the emitter to Off, but I have not tested it in near-dark or totally dark conditions.

An alternative approach that one RealSense user took was to continuously monitor the exposure metadata values, using a method described by a RealSense team member at #1624 (comment)

There was a case with a tiled floor at the link below that had a similar point cloud to yours. In their case, using a linear polarizer filter helped a lot (you have already tried these filters), though they apparently didn't have the variable-lighting problem in the indoor location where they used the filter. https://support.intelrealsense.com/hc/en-us/community/posts/360043612734-Issue-with-tiled-floor
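The exposure-metadata monitoring approach can be sketched as follows, assuming pyrealsense2. The metadata name `actual_exposure` is from the librealsense API; the `emitter_state_from_exposure` helper and its 8000 µs threshold are illustrative assumptions, not values from the linked comments.

```python
def emitter_state_from_exposure(actual_exposure_us, dark_threshold_us=8000):
    """Pure helper (hypothetical): auto-exposure pushes exposure up in dim
    scenes, so a high actual-exposure reading suggests the scene is dark
    and the emitter should be on. Threshold is illustrative only."""
    return 1 if actual_exposure_us >= dark_threshold_us else 0

def monitor_and_toggle(num_frames=100):
    # Local import so the helper above is usable without a camera attached.
    import pyrealsense2 as rs  # pip install pyrealsense2

    pipeline = rs.pipeline()
    profile = pipeline.start()
    sensor = profile.get_device().first_depth_sensor()
    for _ in range(num_frames):
        depth = pipeline.wait_for_frames().get_depth_frame()
        meta = rs.frame_metadata_value.actual_exposure
        if depth.supports_frame_metadata(meta):
            # Toggle the emitter based on how hard auto-exposure is working.
            exposure_us = depth.get_frame_metadata(meta)
            sensor.set_option(rs.option.emitter_enabled,
                              emitter_state_from_exposure(exposure_us))
    pipeline.stop()
```

This effectively reimplements in user code what the deprecated Laser Auto option was meant to do in firmware.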
Well… I don't know why I can't upload new videos, even ones kept under 10 MB. Uploading images also failed.
Done. The two screen recordings below show the methods I've tried.

realsense-2022-02-23_12.10.18_part1.mp4
realsense-2022-02-23_12.10.18_part2.mp4
Depth cameras in general (not only RealSense) have difficulty reading depth detail from dark grey and black surfaces, because as a matter of physics those color shades absorb light. The darker the shade, the less depth information is returned. If the emitter is turned off, there needs to be a strong light source present so that the camera can use that light instead of the projected dots to analyze surfaces for depth detail; otherwise the depth image quality worsens.

A test I conducted in the past week with depth exposure found that - in the scene I was testing, at least - the depth image remained relatively intact when reducing exposure as far down as '1000', but below that the depth image progressively broke up more and more as exposure was reduced further towards zero.

Intel's camera tuning guide has a section at the link below about using the camera in strong sunlight. It suggests defining an auto-exposure region of interest (ROI) in the lower half of the image and/or reducing manual exposure to near 1 ms.
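The two tuning-guide suggestions (a lower-half auto-exposure ROI, or manual exposure near 1 ms) can be sketched with pyrealsense2 as below. The option and ROI names are from the librealsense API; `lower_half_roi`, the 848x480 resolution, and the `use_manual` switch are assumptions added for illustration. Depth exposure is specified in microseconds, so 1000 corresponds to 1 ms.

```python
def lower_half_roi(width, height):
    """Pure helper (hypothetical): ROI covering the lower half of the image,
    so bright sky or skylights in the upper half are excluded from
    auto-exposure metering. Returns (min_x, min_y, max_x, max_y)."""
    return 0, height // 2, width - 1, height - 1

def apply_roi_or_manual_exposure(use_manual=False):
    # Local import so the helper above is usable without a camera attached.
    import pyrealsense2 as rs  # pip install pyrealsense2

    pipeline = rs.pipeline()
    profile = pipeline.start()
    sensor = profile.get_device().first_depth_sensor()
    if use_manual:
        # Strategy A: fixed short exposure near 1 ms.
        sensor.set_option(rs.option.enable_auto_exposure, 0)
        sensor.set_option(rs.option.exposure, 1000)  # microseconds
    elif sensor.is_roi_sensor():
        # Strategy B: keep auto-exposure, but restrict its metering
        # region to the lower half of the frame.
        roi_sensor = sensor.as_roi_sensor()
        roi = rs.region_of_interest()
        roi.min_x, roi.min_y, roi.max_x, roi.max_y = lower_half_roi(848, 480)
        roi_sensor.set_region_of_interest(roi)
    return pipeline
```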
Hi @MichaelPan0905 Do you have an update about this case that you can provide, please? Thanks! |
@MartyG-RealSense Sorry for not answering you sooner! I've had a lot of other work to do these days, so I haven't made time for the test. I'll try it next week. Thanks for your attention to this.
You are very welcome. I look forward to your next report. :) |
Hi @MichaelPan0905 Have you had the opportunity to perform further testing regarding this case, please? Thanks! |
As there have been no further comments for the past month, I will close this case for the moment. You are welcome to re-open the issue at a future date when you are ready to resume. Thanks again! |
Hello Marty, |
Hi @Sowmesh01 Intel introduced the D435f, D435if and D455f camera models into the RealSense product range; these are equipped with light-blocking filters on the left and right IR sensors. https://www.intelrealsense.com/stereo-depth-with-ir/

The image below, from the above link, demonstrates the difference made in a scene with reflected light when using a filter-equipped D435f compared to a filterless D435.

If purchasing a filter-equipped camera is not an option, the type of filter used - CLAREX NIR-75N - can also be purchased separately and fitted over the lenses of a camera that does not have it. More information can be found here:
It appears that the affected areas are glass panes. Adding a different kind of filter product over the lenses on the outside of the camera, called a linear polarization filter, can greatly negate the reflections from glass, as highlighted earlier in this discussion at #10229 (comment)

Any polarization filter can work so long as it is linear (circular polarizers, such as those used in 3D glasses, will not work), so they can be purchased inexpensively from suppliers such as Amazon by searching for the term linear polarizing sheet.

Can you also test the scene in the RealSense Viewer tool, please, to see if you get an improved image? The Depth Quality Tool is for depth quality testing rather than depth capture.
*This problem occurs not only with this camera, but with all of the D435/D435i/D455 cameras we have.
Hello Marty,

I've run into problems when using RealSense cameras on certain kinds of highly reflective ground, such as tiled floors or epoxy floors.
An example of an epoxy floor.
There are two main issues I need help with.


First, the depth data on highly reflective ground becomes unstable or even vanishes, typically when the camera is fixed at a large angle to the ground.
There are several conditions that occur when running on such highly reflective ground (the ground is a plane):
a situation where the data is unstable
a situation where the data is missing
the situation of missing depth data as viewed in realsense-viewer; the camera is at about 90° to the ground, and the red arrow marks the position of the USB connector
If we want to use the depth data to detect small obstacles or cliffs, we need the camera's data to be stable and complete. So I wonder whether there is some way to deal with this kind of situation? Thanks.
Second is a more advanced version of the first issue. We encountered an environment like this: the road in an underground parking lot is epoxy flooring, there are many traffic signs painted in white on the road, and because of a skylight, several parts of the road are directly exposed to sunlight, like the picture below:

A road in an underground parking lot with sunlight.

When the camera encounters the white traffic signs exposed to sunlight, there are severe errors in the depth data, corresponding to the white lines and arrows in the picture above.
The points in the red circle are where the white painted arrow should be.
We found that this has something to do with the camera's exposure and posture.
Normally, when we use auto-exposure, this problem occurs whenever the camera encounters the traffic signs. But if we disable auto-exposure and set the exposure to a small value (like 500), the error disappears: there is no longer a hole or curve. However, the depth data then also becomes unstable, and there are very many noise points near the ground.
And the shape of the wrong depth data is related to the posture of the camera. If we call the posture in the picture below "the right posture":


then under this posture there will be a concave region like the one below:
(It appears below the ground plane.)
but if we turn the camera 90° to the right or left (in order to mount it on the side of the robot), then we get these results:


The red arrows mark the position of the USB connector on the camera.
How can I deal with this kind of situation?
Thanks!