Strange Frame-set Behavior when pulling multiple streams (depth,infrared) simultaneously #8067
Hi @Decerguyer. The depth stream is perfectly aligned to the left infrared sensor by default: the depth origin point of the camera coordinates is the center of the left IR sensor. The same frame number may occur more than once if a frame is dropped, causing the SDK to return the last good frame from that stream.

If you are generating depth and IR for the purposes of side-by-side analysis on a video: since left IR is perfectly synchronized with depth by default, it may be worth dropping right IR (index '2') from the script and just having depth and left IR side by side, with the assumption that they are synced (as they are generated by the same sensor component in the camera). Having one less active stream would also reduce the processing load on the Pi.

My understanding is that building with -DFORCE_LIBUVC=true is not wrong, just deprecated (a feature that may be removed at an unknown future date, but may also just stay marked as 'deprecated' forever). So if it works for you (and the reliable Acrobotic guide does recommend it too), then by all means use it instead of RSUSB.
It sounds as though you are using a Raspberry Pi 4 if you are able to set the stream to 848x480 and 30 FPS (as Pi models before the 4 only have USB 2.0 instead of USB 3). Can you confirm whether this is correct, please?

I have not seen frame-number checking performed using your method before, so it is difficult for me to debug, as I do not have a reference of a similar project to compare it with for best practices. A typical way to check frame numbers would be to record a bag file in the RealSense Viewer program using the Record button at the top of the options side panel and then check the frames after the bag file has been recorded. You could also open the debug console of the RealSense Viewer by left-clicking on the ^ arrow in the bottom corner of the Viewer window, then start the depth and IR streams and observe the 'Frame Drops Per Second' panel in real time.

If you would prefer to program your own solution, it may be worth accessing and printing the frame counts of the multiple streams using callbacks, as the C++ SDK example rs-callback does: https://github.com/IntelRealSense/librealsense/tree/master/examples/callback

You should be able to access hardware metadata with the RSUSB installation method. That said, I researched the subject and was reminded of a case from August 2020 with similar circumstances to yours (a Pi 4, Raspbian Buster, and librealsense built with RSUSB = On). The RealSense user in that case was not able to receive hardware metadata either and eventually switched from the Pi to a laptop in order to access hardware metadata.
Thank you for the response @martyg. First off, I would like to confirm that yes, I am using a Raspberry Pi 4 with USB 3.2. Secondly, I took your advice of recording and inspecting in the Viewer, and it showed some interesting results. Perhaps these results will better illuminate the problem we are having, as it is quite specific. Below are two bag inspections: two images recorded using macOS, and two recorded using the Raspberry Pi 4. Notice how in the first two images, the macOS infrared and depth inspections, the frames are aligned. The timestamps of the bag file are aligned for both the infrared and depth streams, and the timestamps of the specific frames are the same. In the two images showing the results from the Raspberry Pi, the streams seem to be asynchronous, with the depth stream lagging behind in both the bag file timestamps and the actual frame timestamps.
Again, the timestamps are aligned as the frames are arriving at the same time. Keep in mind everything was recorded using the RealSense Viewer. Below appears the data from the Raspberry Pi streams. It is clear from the last two images that there is some sort of synchronization problem when using the Pi, as the timestamps do not line up, unlike when using macOS. Both recordings were done using the Viewer, so I expected similar results; but based on the data it is clear why my script posted earlier was behaving the way that it was. That being said, I would like to find a reason, or a fix, for this problem so that I can get on with the end goal: pulling the infrared (index 1) stream and the matching depth stream simultaneously, in the form of framesets, for image analysis. Given the unpredictable nature of the streams on the Raspberry Pi as demonstrated above, this is not possible at the moment. How can I fix this? Thank you for the responses.
I do not have enough knowledge of metadata operation on Raspberry Pi and Raspbian to be able to suggest a corrective course of action. As Linux kernel and metadata handling improvements were included in the new 2.41.0 librealsense released today, if you are not dependent on Raspbian then perhaps you could install Ubuntu as the OS on the Pi and build librealsense from source code with kernel patching (i.e. the libuvc backend / RSUSB is false). https://github.com/IntelRealSense/librealsense/wiki/Release-Notes#bug-fixes-and-enhancements
Thank you MartyG.
I have since tried two versions of Ubuntu, with mixed results. First, I installed the most recent 20.10 with kernel 5.8. This seemed to fix the synchronization problem between the depth and infrared streams. It did not, however, allow timestamps to be read back from frames. I then switched to Ubuntu 20.04 (since it is an LTS version) with kernel 5.4, which is supposed to be supported by the most recent RealSense SDK. I used the LTS kernel patch before building. The synchronization problem was again resolved, but frame metadata could still not be accessed. I did not build using any special flags such as LIBUVC or RSUSB, since I was under the impression that there were kernel patches that fixed the issue. The main point here is that the depth and infrared streams work as they should, but timestamps are still unavailable (both of which are required for this project). Thanks
Apologies for the delay in responding further, as I was carefully considering your case. Would it be possible, please, to turn off global time for the depth sensor in your C++ script by setting RS2_OPTION_GLOBAL_TIME_ENABLED to '0' and see if that makes a difference? `depth_sensor.set_option(RS2_OPTION_GLOBAL_TIME_ENABLED, 0.f);`
MartyG, I apologize, I need to make a correction. Upon further review it seems the kernel patch never actually executed; I received the fatal error "fatal: You need to specify a tag name". Looking through issues on GitHub, it seems the only fix for this is to bypass the kernel completely with RSUSB_BACKEND, which would defeat the purpose of changing to Ubuntu LTS. It should be noted, however, that RSUSB_BACKEND has yielded some success using Ubuntu 20.04. 90 FPS at 848x480 seemed to run smoothly and synchronously; 300 FPS at 848x100 (which is fewer pixels per second than 90 FPS at 848x480) does not, and the depth/infrared streams are seemingly independent of each other. Ideally, I would like to be able to patch the kernel so that RSUSB_BACKEND does not need to be utilized, especially since a second camera will soon be implemented. Is there any way to fix the patch error I received?
I extensively researched your problem with needing patching for hardware metadata but could not find many options. Here are a couple of suggestions:
https://docs.microsoft.com/en-us/windows/uwp/porting/apps-on-arm
Hi @Decerguyer Do you require further assistance with this case, please? Thanks! |
MartyG, It is unfortunate that the solutions you have offered do not suit our development goals. That being said, we appreciate your assistance with this matter. |
You are very welcome @Decerguyer - I am sorry that I could not be of more help on this particular occasion. |
Case closed due to no further comment received. |
| Required Info | |
|---|---|
| Camera Model | D435 |
| Firmware Version | 05.12.09.00 |
| Operating System & Version | Raspbian Buster |
| Kernel Version (Linux Only) | 5.4.79-v7l+ |
| Platform | Raspberry Pi |
| SDK Version | { legacy / 2.. } |
| Language | C++ |
Issue Description
The main issue that we have been encountering is correctly streaming both infrared streams as well as the depth stream. What seems to be happening is that the frames are not packaged into framesets correctly. When trying to pull a frameset from the pipeline (which, as far as I understand, synchronizes the infrared and depth frames into framesets under the hood), the result ends up pulling depth and infrared images with the same timestamp as separate frames. To best describe the problem, I have attached some simple code below, and will demonstrate some outputs showing the symptoms of the issue.
The code below is a simple frame-drop tester that is meant to pull 1000 frames and then, as a post-processing action, print out the frame number and the frame timestamp.
The expected output (for this FPS) should be the numbers 1-998 (as it is taking the difference of consecutive timestamps) alongside values around 34000 microseconds, which is approximately the expected timestamp difference for this FPS.
Indeed, when only running one stream (commenting out the infrared stream declarations), this is exactly the output.
When running multiple streams, it seems that the frames were not correctly grouped into framesets, and when pulling them from the pipeline, frames with the same timestamp were pulled several times (not necessarily only 3 times). Further investigation on the GitHub support pages suggested rebuilding with -DFORCE_RSUSB_BACKEND=true instead of -DFORCE_LIBUVC=true.
Once this was done, the issue was still not resolved, and trying to pull the timestamp data yielded the result: "Realsense Error calling rs2_get_frame_metadata(frame:0x16b4848, frame_metadata:Sensor Timestamp): metadata not available"
Note that this error did not occur with LIBUVC. Regardless, the program can still function with only the frame-counter metadata and not the timestamps, so the lines related to timestamps were commented out. The resulting output is as follows:
While running 848x480 at 30 FPS
and
While running 848x100 at 300 FPS
Note the doubled and occasionally tripled frame-number repeats (there should be a new frame number each time, as the frames are coming in from the hardware).
The goal of the project is to be able to pull 3 synchronized frames from the left and right IR imagers and the depth stream, and analyze the video. Clearly, the pulling of the frames is working incorrectly in the test script supplied above.
Essentially, the two issues that I would like solved are the extremely strange and inconsistent behavior of the framesets shown above, and the inability to pull timestamps after switching to RSUSB_BACKEND.
One important note is that the script supplied above works perfectly fine on my Mac as well as my colleague's Mac. This leads me to believe that it is a problem with the Linux architecture of the Pi. To be clear about how I installed librealsense on the Pi, the installation guide I used, created by Acrobotic, is linked below:
https://github.com/acrobotic/Ai_Demos_RPi/wiki/Raspberry-Pi-4-and-Intel-RealSense-D435
(The Python parts are unimportant since I am writing in C++.)
(Upon rebuilding, instead of Acrobotic's use of LIBUVC, the RSUSB_BACKEND flag was used instead.)
Any support on this issue would be greatly appreciated.