Saving frames for future processing #1000
Hi @Alonmeytal

```cpp
#include <librealsense2/rs.hpp>

#include <iostream>
#include <vector>

std::vector<rs2::frame> frames;
rs2::pipeline pipe;
auto profile = pipe.start();

// Relax the per-sensor limit on how many frames the application may
// hold on to at once, so frames can be kept for later processing.
for (auto&& sensor : profile.get_device().query_sensors())
{
    sensor.set_option(RS2_OPTION_FRAMES_QUEUE_SIZE, 0);
}

// Collect 100 depth frames without releasing them back to the pool.
for (int i = 0; i < 100; i++)
{
    rs2::frameset data = pipe.wait_for_frames();
    frames.push_back(data.get_depth_frame());
}

for (auto&& frame : frames)
    std::cout << frame.get_timestamp() << std::endl;
```

(With the latest code on the development branch this should save 100 frames.)
@dorodnic first of all, thanks for the quick response! As you said, the fact that this solution doesn't yet work for framesets rather limits my performance, as I would be forced to keep converting the RGB image to a separate type (I'm using Python, so probably np.asanyarray as the examples show) or writing it directly to the filesystem instead of just pushing it to the same queue.
Hello @Alonmeytal, did you find a workaround to save framesets?
@Jiloc do you mean in-memory or to disk?
Hello @Alonmeytal, thank you for the fast reply. Our problem is also in-memory; we have heavy post-processing work to do.
@Jiloc yet again, I found that copying the frame data to some sort of multidimensional array is the easiest way to go, but, yet again, you lose the functionality that comes with the depth frame object, and of course that "casting" is time consuming (not prohibitively so, but not negligible either). A sketch of the copy-out approach is shown below.
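For reference, a minimal C++ sketch of that copy-out approach (copy_depth_data is a hypothetical helper, not part of the SDK; it assumes the default 16-bit Z16 depth format):

```cpp
#include <librealsense2/rs.hpp>

#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical helper: deep-copy the raw depth values out of a frame so the
// rs2::frame itself can be released back to the library's buffer pool.
// Assumes the depth stream uses the default 16-bit Z16 format.
std::vector<uint16_t> copy_depth_data(const rs2::depth_frame& f)
{
    std::vector<uint16_t> buffer(static_cast<size_t>(f.get_width()) * f.get_height());
    std::memcpy(buffer.data(), f.get_data(), buffer.size() * sizeof(uint16_t));
    return buffer;
}
```

The returned vector outlives the frame, but as noted above it carries none of the frame's metadata or convenience methods such as get_distance().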
We discussed internally several ways to address this issue and came up with something that should be convenient for everyone:
Of course, there is no magic solution to the heavy post-processing problem - if your post-processing takes more than
@dorodnic thanks for the response; it seems like keep() would be a step in the right direction.
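For later readers: once keep() became available, the pattern looks roughly like the sketch below (the frame count and stream choice are illustrative; kept frames are never recycled, so memory grows with the length of the recording):

```cpp
#include <librealsense2/rs.hpp>

#include <vector>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    std::vector<rs2::frame> frames;
    for (int i = 0; i < 900; i++) // e.g. 30 seconds at 30 fps
    {
        rs2::frameset fs = pipe.wait_for_frames();
        rs2::frame depth = fs.get_depth_frame();
        depth.keep();            // exempt this frame from the buffer pool cap
        frames.push_back(depth); // safe to hold for later processing
    }
    // ... heavy post-processing over `frames` goes here ...
    return 0;
}
```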
If you are using the D415, D435 or the SR300 device, the color camera is physically displaced relative to the depth camera. This means that pixel {x,y} in the color stream will not correspond to {x,y} in the depth stream. Assuming what you want is a single RGBD image, you must find the correspondence between every two pixels and "align" the two streams; see the sketch below.
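A minimal sketch of that alignment flow using rs2::align (the choice of target stream and the pixel coordinates are illustrative):

```cpp
#include <librealsense2/rs.hpp>

#include <iostream>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    // Re-project depth into the color sensor's viewpoint so that pixel
    // {x, y} in the color image matches {x, y} in the depth image.
    rs2::align align_to_color(RS2_STREAM_COLOR);

    rs2::frameset fs = pipe.wait_for_frames();
    rs2::frameset aligned = align_to_color.process(fs);

    rs2::depth_frame depth = aligned.get_depth_frame();
    // Depth (in meters) under the color pixel at the image center:
    float meters = depth.get_distance(depth.get_width() / 2,
                                      depth.get_height() / 2);
    std::cout << "Distance at image center: " << meters << " m\n";
    return 0;
}
```

After process(), get_distance(x, y) on the aligned depth frame answers "how far is the object under color pixel (x, y)".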
@dorodnic, just trying to make sure I understand by putting it in my own words:
@dorodnic in our case we need to record a 30-second stream and process every frame as soon as possible. So, even if we don't release it in less than
Hi @dorodnic, I am testing the frame queue size option. Since it doesn't work as expected, shall I just assign the biggest uint available, or is there a more "elegant" way?
Hmm... Looking at the implementation (concurrency.h), there doesn't seem to be anything that would prevent you from passing some large number.
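In that spirit, a hedged sketch of the "large number" workaround (1000 is an arbitrary illustration, not a recommended value):

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    auto profile = pipe.start();

    // Enlarge each sensor's frame queue so more frames stay alive at once.
    // Every queued frame keeps its buffer allocated, so memory use grows
    // in proportion to the value chosen here.
    for (auto&& sensor : profile.get_device().query_sensors())
    {
        if (sensor.supports(RS2_OPTION_FRAMES_QUEUE_SIZE))
            sensor.set_option(RS2_OPTION_FRAMES_QUEUE_SIZE, 1000);
    }
    return 0;
}
```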
I assume this issue is mostly resolved. |
Hello @dorodnic,
Could you guys please provide a code example for holding frames in memory for future use?
Currently, we can't hold more than 10 Frame objects in memory (due to the frame buffer pool), so we have to convert them to some other object.
My use case is sending the RGB image for ML analysis and then using the depth image to enhance the returned output. As far as I understand it, there's no documented way of getting the depth of a pixel if I've "released" the Frame object.
Would aligning the depth-RGB before I use depth.get_data() solve it?
Would the solution implemented in #956 provide a means to "get back" the Frame object?