How to align frames from two different sensors' pipelines? #10338
Hi @YoshiyasuIzumi It usually is not necessary to put the depth and color streams on separate pipelines. In Python, two separate pipelines for different sensors would typically be used if IMU streams were enabled in addition to depth and color, with IMU on one pipeline and depth + color together on the other pipeline. The RealSense Python wrapper has an example Python alignment program called align_depth2color.py that you could test, if you have not done so already, to see whether it meets your needs.
Regarding the sleep instruction in your code: please note that I changed your 'wait_for_frame()' to 'wait_for_frames()', as wait_for_frames() is the correct name for the instruction.
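For illustration, a minimal single-pipeline alignment loop in the style of align_depth2color.py might look like the sketch below; the stream resolutions and frame rate are illustrative assumptions, not required values:

```python
import pyrealsense2 as rs
import numpy as np

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Map each depth frame onto the color frame's viewport.
align = rs.align(rs.stream.color)

try:
    while True:
        frames = pipeline.wait_for_frames()   # note: wait_for_frames(), plural
        aligned = align.process(frames)
        depth_frame = aligned.get_depth_frame()
        color_frame = aligned.get_color_frame()
        if not depth_frame or not color_frame:
            continue
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        # ... process the aligned depth/color pair here ...
finally:
    pipeline.stop()
```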
Hi @MartyG-RealSense My goal is to get aligned color and depth images without skipping frames. I don't mind whether a single pipeline or multiple pipelines are used. Do you have any advice for this situation? One more thing: in my case I don't use the auto-exposure feature, and the sleep timer is there to emulate disturbance. Also, if I change "wait_for_frame" to "wait_for_frames", I get the attached error. Thanks!
Ah, I see that you are using wait_for_frame() with a frame queue rather than a pipeline. That makes sense. Thanks very much for the clarification. Setting a custom frame queue size instead of using the default size of '1' is permitted but not recommended, as you can accidentally break the streams in doing so. The SDK's official frame buffer management documentation recommends a queue size of '2' if using two streams (such as depth and color).
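For reference, a minimal sketch of a custom frame queue sized to '2' for a depth + color pipeline might look like this; the stream setup is assumed:

```python
import pyrealsense2 as rs

# Capacity '2' for a two-stream (depth + color) setup,
# per the SDK's frame buffer management documentation.
queue = rs.frame_queue(2)

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)

# Deliver frames into the queue instead of polling the pipeline.
pipeline.start(config, queue)

# Note the singular wait_for_frame() when reading from a frame queue;
# with a pipeline as the producer, each dequeued item is a frameset.
frames = queue.wait_for_frame()
```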
Thank you for the input. I want to make sure I understand your advice correctly. Method 1: a single stream with one frame queue for color and depth (please see this reference). Method 2: two streams with two frame queues, one for color and one for depth (please see this other reference). To avoid frame drops for color and depth, I thought Method 2 was recommended based on this comment. But do you suggest doubling the queue size for a single stream covering both color and depth, as in Method 1? Thank you for your help!
When you say 'single stream' or 'two streams', I think that 'pipeline' would be the right term. The stream is the type of data that is being provided (depth, color, etc). A single pipeline with two stream types (depth and color) and a custom frame queue size set to '2' would be the most common implementation for those who have more than one stream enabled and wish to set their own custom frame queue size. It is not the only way to implement a custom frame queue size though, of course. It depends on which method works best for you and your particular project. Method 1 represented a script that did not work correctly for the RealSense user who created it, whilst Method 2 was the script that did work for them. So I would suggest trying Method 2 and, if it works for you, seeing whether you can adapt it for your own project. You probably do not need to adjust the frame queue size at all, though.
Thank you for the correction! As you pointed out, 'pipeline' is the appropriate term.
I could not find a reference for putting color and depth on two separate pipelines, unfortunately. An alternative to having two separate pipelines is to use callbacks, as described at #5417 |
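For context, a hedged sketch of that callback pattern might look like the following; the handler name and stream choices are illustrative assumptions:

```python
import time
import pyrealsense2 as rs

def on_frame(frame):
    # Called from the SDK's own thread for each delivered item; keep this fast.
    if frame.is_frameset():
        frameset = frame.as_frameset()
        depth = frameset.get_depth_frame()
        color = frameset.get_color_frame()
        # ... hand off to your own queue/thread for heavy processing ...

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)
pipeline.start(config, on_frame)   # callback overload of start()

try:
    time.sleep(5)   # stream for a few seconds; callbacks fire in the background
finally:
    pipeline.stop()
```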
I see... If you come up with any ideas for handling this case, please let me know!
Thank you for sharing the useful reference. Related to the issue you picked up, I found an approach using rs.frame_source.allocate_composite_frame. I tried to make it run, but I encountered a frame timeout error, the same as in the comments below. I checked your advice but couldn't make it work...
I researched the issue further but didn't find anything that would help. If your aim is to reduce frame drops, an alternative to improving performance through scripting may be to build the librealsense SDK from source with CMake and include the build term -DBUILD_WITH_OPENMP=true. This enables librealsense to automatically take advantage of multiple CPU cores when performing depth-color alignment. That can reduce the chance of maxing out the CPU's usage by processing the alignment operation on a single core, and so reduce the risk of performance being negatively impacted by that heavy processing burden.
I see...
Research on this issue would stop once the case is closed, so it may be worth keeping the case open for a further period. First try the OpenMP build option, and if that does not resolve your problem then let me know and I can look at other possible approaches to resolving your frame drops. For example, you mentioned earlier in #10338 (comment) that you do not have auto-exposure enabled. If manual exposure is set to a value that exceeds a certain 'upper bound', it can cause the FPS rate to lag, as discussed at #1957
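As a hedged illustration of checking the manual exposure point, something along these lines could be tested; the 8500-microsecond value is only an assumed example, and the safe upper bound depends on your configured FPS:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

# Disable auto-exposure and set a manual exposure value (in microseconds).
# Per #1957, values above a certain bound can drag the FPS down, so keep
# the exposure comfortably below the frame period for your FPS setting.
depth_sensor.set_option(rs.option.enable_auto_exposure, 0)
depth_sensor.set_option(rs.option.exposure, 8500)  # example value only

pipeline.stop()
```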
Hi @YoshiyasuIzumi Was the information provided in the comment above helpful to you, please? Thanks!
Hi @MartyG-RealSense
Normally, the advice would be to use a frame queue size of '2' when using depth + color. Your sleep timer to emulate disturbance may be causing frame drop problems that would not otherwise occur though. Could you try setting the frame queue to '2' and commenting out the two lines of the sleep instruction to see what difference it makes?
@YoshiyasuIzumi Could you please share the final code where you solved the queue and alignment together? Much appreciated.
Hi @MartyG-RealSense and @Andyshen555 |
@YoshiyasuIzumi The Python case #7067 may be a useful reference. It looks at inconsistent performance from the frame queue when retrieving and aligning frames, and tries using the RealSense SDK's Keep() function to solve the problem. Keep() stores frames in the computer's memory instead of writing them to the computer's storage. When the pipeline is closed, you can then perform batch processing on all the stored frames from that pipeline session at the same time (such as alignment and post-processing) and save all the frames to file in a single action. A disadvantage of Keep(), though, is that because the stored frames consume computer memory, it is suited to short-duration processing sessions, such as 10 seconds on a low-end computing device or 30 seconds on a higher-end computer such as a PC.
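To illustrate the pattern, here is a minimal hedged sketch of Keep() followed by batch alignment after the pipeline closes; the 300-frame capture count is an arbitrary short-session assumption:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
align = rs.align(rs.stream.color)

kept = []
for _ in range(300):                 # ~10 s at 30 FPS; memory-bound, keep it short
    frames = pipeline.wait_for_frames()
    frames.keep()                    # prevent the SDK from recycling this frameset
    kept.append(frames)
pipeline.stop()

# Batch processing after the capture session has ended.
for frames in kept:
    aligned = align.process(frames)
    depth = aligned.get_depth_frame()
    color = aligned.get_color_frame()
    # ... save to file or post-process the aligned pair here ...
```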
@MartyG-RealSense Thank you for the advice. Let me confirm my understanding. According to your comments, to access frames kept with the Keep() function, the pipeline needs to be closed, right? So if I want to process frames while streaming, the Keep() function does not seem appropriate, right? If I am mistaken, please add your insight. Also, I couldn't find a frame_queue method in the code in #7067, and the keep method seems to be called after the pipe.wait_for_frames() method. I assume we would use frame_queue with keep_frames=True in order to avoid frame drops with pipe.wait_for_frames(). A frame_queue seems to work for color or depth alone, but with a frame_queue for both color and depth we observed frame drops. That is the situation from my point of view.
Correct, you need to close the pipeline before processing the set of frames that are stored in memory. #7067 does not have a reference for changing the frame queue size. The script in it that I was referring to acts as an example of using Keep() with alignment. If this script performs well then you may not need to change the frame queue size.
In regard to frame drops, RealSense cameras and the RealSense SDK are used with so many different hardware and software configurations that it would likely not be possible to develop a single 'one size fits all' solution guaranteeing that frame drops cannot occur.
@MartyG-RealSense
A RealSense team member advises at #5041 (comment) that the frame queue size controls the total number of frames in circulation across all queues. So the higher the frame queue number, the greater the number of frames that can be in the queues at the same time. As the frame queue size is increased, latency also increases, but the risk of frames being dropped reduces.
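As a hedged sketch of that trade-off, the snippet below pairs a deeper queue with the keep_frames=True option mentioned earlier in the thread; the capacity of 50 mirrors the value from your script and is an assumption, not a recommendation:

```python
import pyrealsense2 as rs

# A deeper queue tolerates a slow consumer at the cost of latency and
# memory; keep_frames=True stops the SDK from recycling queued frames.
queue = rs.frame_queue(50, keep_frames=True)

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)
pipeline.start(config, queue)

for _ in range(100):
    # Blocks until a frameset is available; a burst of slow processing
    # now fills the queue instead of immediately dropping frames.
    frames = queue.wait_for_frame()

pipeline.stop()
```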
Hi @YoshiyasuIzumi Do you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense, let me confirm my summary of this thread.
[Side note]
If you have any corrections, please let me know.
There is no official guidance for a frame queue size other than the '2' recommended by the frame buffer management documentation for a 2-stream setup. I have also seen '10' and '50' used in an official SDK example script, similar to how you are using 50 in your own script. Aside from that, it may be a matter of trial-and-error testing to find a value that provides the stability you require without breaking the streams. To be honest, I do not have enough knowledge of the specific subject of frame queue programming to confirm or correct the information in #9022. However, ev-mp is a senior Intel RealSense team member and a leading expert in RealSense programming, so any statements made by him can certainly be accepted as completely reliable.
Hi @YoshiyasuIzumi Do you require further assistance with this case, please? Thanks!
@MartyG-RealSense
I do not have any further information to offer on this subject at this time. RealSense community members reading this case are welcome to leave comments to share their own knowledge though. As suggested, I will close the case in the meantime. Thanks again!
In the meantime, is there a solution available to buffer frames via queues AND align the buffered frames afterwards?
@maxstrobel A comment at #10042 (comment) about storing frames in memory with the Keep() instruction, and the response beneath it, might provide useful insights.
@MartyG-RealSense Thanks for the quick response. I looked into #10042, #1000 and #6146. However, I don't understand whether Keep() is really what we need. I have a realtime stream of data, both RGB and depth. Most of the time my post-processing is fast enough to keep pace with the data acquisition. However, I sometimes encounter missed frames because the post-processing took too long or there was some other high load on the machine. Is there an option to buffer the data, RGB and depth, with a queue AND align the buffered frames afterwards?
@maxstrobel If you are using Python then multiprocessing could allow you to do two separate operations simultaneously, as described at #9085 and sketched below. Though if the processing bottleneck is occurring at your post-processing filters then it would be logical to improve performance by optimizing the filters. If you are using the Spatial filter then I would recommend removing it, as the Spatial filter has one of the highest processing costs for the least benefit.
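A hedged sketch of that multiprocessing idea follows, assuming alignment happens in the capture process and only plain numpy copies (which, unlike pyrealsense2 frame objects, can be pickled) are handed to the worker; all names here are illustrative:

```python
import multiprocessing as mp
import numpy as np
import pyrealsense2 as rs

def worker(job_queue):
    # Consumes (depth, color) numpy pairs; heavy post-processing goes here.
    while True:
        item = job_queue.get()
        if item is None:           # sentinel: capture has finished
            break
        depth_image, color_image = item
        # ... filters, saving to disk, etc. ...

if __name__ == "__main__":
    job_queue = mp.Queue(maxsize=8)
    proc = mp.Process(target=worker, args=(job_queue,))
    proc.start()

    pipeline = rs.pipeline()
    pipeline.start()
    align = rs.align(rs.stream.color)   # align before hand-off
    try:
        for _ in range(300):
            frames = pipeline.wait_for_frames()
            aligned = align.process(frames)
            # .copy() detaches the arrays from the SDK's frame memory.
            depth = np.asanyarray(aligned.get_depth_frame().get_data()).copy()
            color = np.asanyarray(aligned.get_color_frame().get_data()).copy()
            job_queue.put((depth, color))
    finally:
        pipeline.stop()
        job_queue.put(None)
        proc.join()
```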
@MartyG-RealSense - If I use multiprocessing or other approaches where I get separate frames from the individual sensors, e.g. depth and RGB, is there any way to align the frames afterwards?
If you store the frames in memory with the Keep() instruction then you can perform batch-processing actions on all the frames at the end, after the pipeline has been closed, such as saving them to file, post-processing them, or aligning them.
Issue Description
I encountered a frame-dropping issue when using a frame queue with depth and color streaming. This problem seems to be handled with separate frame queues in this post.
But when I tried to align color and depth with align.process, I couldn't apply the alignment to the frames.
Could you please give me advice on aligning color and depth based on two separately streamed frames?
Thanks!