Can't merge depth frame and color frame into one frameset #6522
As a side note, I have looked up this issue on the RealSense support community, and one member (MartyG) claimed that the reason color frames are absent from the frameset is that the device sacrifices them for the purpose of setting up auto-exposure (https://support.intelrealsense.com/hc/en-us/community/posts/360037090133). I don't buy this explanation, because if you call rs2::options::supports(rs2_option) on either of the software_sensors, you see that RS2_OPTION_ENABLE_AUTO_EXPOSURE is not supported. In fact, if you run the following code on either of the software_sensors, you get the following output:
No other options are supported!
Hi @FJShen Following the link you provided to one of my previous answers: it refers to a recommendation to drop the first several frames whilst auto-exposure is stabilising. Dropping the first several frames is not something that happens automatically; it has to be deliberately programmed into a script, and the user has the option not to do it. So it is a recommendation, not mandatory. However, the auto-exposure will still take several frames to stabilise even if those frames are not dropped. You can use manual exposure if you want full control over the exposure value from the first frame.
@FJShen I have not created a software device yet, so I cannot say anything about it. But if you want to align images at a later stage and are using OpenCV, you can skip that part altogether. Step 1: store the RGB and depth cameras' intrinsics and extrinsics. I can post some code if you want.
@FJShen, you did almost everything by the book, but:
A) The timestamps are in msec. This should be enough for you to start polling RGB and Depth frames in the loop. Additionally, for RGB-Depth alignment checks: depth_sensor.add_read_only_option(RS2_OPTION_DEPTH_UNITS, <0.001 ?>);
@ev-mp
@ev-mp I have modified my code in light of your suggestions A, B, and E; furthermore, I used the "real" intrinsics values that I obtained from get_intrinsics() (#1706 (comment)). But it did not work... I set a breakpoint right after the lines
and here is what I see in the debugger: According to the definition of rs2::frameset::get_color_frame() (attached below), color_frame will be NULL if neither the color stream nor the IR stream can provide a color frame. Since I did not add an IR stream to my software sensors, this implies that the syncer somehow just did not obtain the injected RGB image from the color stream.
@kafan1986 I'd like to try a pure-OpenCV method if there is a feasible solution, but I am not very familiar with the library. Would you please share some more details on how the alignment is done in OpenCV?
@FJShen The code below is C++ from my Android application, which I call via an NDK wrapper. The first function takes the depthFrame and colourFrame and stores the RGB and depth cameras' intrinsics and extrinsics for future use.
The next function operates on a single pixel at a time; if you want to process an entire image, you will need to loop over all pixels. Use it if you want to align the depth image to the color image. xVal and yVal are the depth pixel location, and zVal is the actual depth in metres.
Use the last function if you want to align the color image to the depth image. xVal and yVal are the colour pixel location, depthMat is the entire depth OpenCV Mat, and depthMinMeters and depthMaxMeters give the depth range in which the algorithm will search for the best match. Usually depthMinMeters = 0.1 and depthMaxMeters = 10 will do fine; if your use case involves smaller distances, change these values accordingly.
Let me know if this resolves your doubt.
@kafan1986 It seems to me that the heart of your code lies in these three lines: I ended up storing the points in a text file, then read the file in MATLAB and used
@kafan1986 Thanks so much for your generous help to the RealSense community! @FJShen I'm very glad that you achieved a solution.
@FJShen ,

#include <iostream>
//#include <opencv2/opencv.hpp>
#include <librealsense2/rs.hpp>
#include <librealsense2/hpp/rs_internal.hpp>
int W = 640;
int H = 480;
int main() {
rs2::software_device dev;
auto depth_sensor = dev.add_sensor("Depth");
auto color_sensor = dev.add_sensor("Color");
rs2_intrinsics depth_intrinsics{ W, H, W / 2, H / 2, 383.859, 383.859,
RS2_DISTORTION_BROWN_CONRADY,
{0, 0, 0, 0, 0} };
rs2_intrinsics color_intrinsics{ W, H, W / 2, H / 2, 618.212, 618.463,
RS2_DISTORTION_INVERSE_BROWN_CONRADY,
{0, 0, 0, 0, 0} };
auto depth_stream = depth_sensor.add_video_stream(
{ RS2_STREAM_DEPTH, 0, 0, W, H, 60, 2, RS2_FORMAT_Z16,
depth_intrinsics });
auto color_stream = color_sensor.add_video_stream(
{ RS2_STREAM_COLOR, 0, 1, W, H, 60, 3, RS2_FORMAT_BGR8,
color_intrinsics }, true);
dev.create_matcher(RS2_MATCHER_DEFAULT);
rs2::syncer sync;
depth_sensor.open(depth_stream);
color_sensor.open(color_stream);
depth_sensor.start(sync);
color_sensor.start(sync);
depth_stream.register_extrinsics_to(color_stream,
{ {1, 0, 0, 0, 1, 0, 0, 0, 1},
{0, 0, 0} });
rs2::frameset fs;
std::vector<uint8_t> color_image(W * H * 3) ;// cv::imread("/home/nvidia/Desktop/images/color.jpg", cv::IMREAD_COLOR);
std::vector<uint8_t> depth_image(W * H * 2);// cv::imread("/home/nvidia/Desktop/images/color.jpg", cv::IMREAD_COLOR);
//inject the images to the sensors
for (int idx = 0; idx < 120; ++idx) {
//Purpose of this for loop: somebody said you start getting the
// color frames once you have received an ample amount of frames
// (e.g., 30 frames), when the auto-exposure has converged.
//But in my case, I have run wait_for_frames 120 times,
// which, given the 60 Hz FPS, is equivalent to 2 seconds.
//Yet I am still not receiving any color frames in the frameset.
color_sensor.on_video_frame({ (void*)color_image.data(), // frame pixels from capture API
[](void*) {}, // custom deleter (if required)
3 * W, 3, // stride and bytes-per-pixel
(double)(idx) * 60, RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME, // timestamp (msec) and clock domain
idx, // frame number for potential sync services
color_stream });
depth_sensor.on_video_frame({ (void*)depth_image.data(), // frame pixels from capture API
[](void*) {}, // custom deleter (if required)
2 * W, 2, // stride and bytes-per-pixel
(double)(idx) * 60, RS2_TIMESTAMP_DOMAIN_SYSTEM_TIME, // timestamp (msec) and clock domain
idx, // frame number for potential sync services
depth_stream });
fs = sync.wait_for_frames();
rs2::frame depth_frame = fs.get_depth_frame();
rs2::frame color_frame = fs.get_color_frame();
if (depth_frame)
std::cout << "fn depth : " << depth_frame.get_frame_number() << " ts: " << depth_frame.get_timestamp();
if (color_frame)
std::cout << ", fn color : " << color_frame.get_frame_number() << " ts: " << color_frame.get_timestamp();
std::cout << std::endl;
}
return 0;
}

It produces both Depth and RGB:

fn depth : 0 ts: 0
fn depth : 1 ts: 60, fn color : 1 ts: 60
fn depth : 2 ts: 120, fn color : 2 ts: 120
fn depth : 3 ts: 180, fn color : 3 ts: 180
fn depth : 4 ts: 240, fn color : 4 ts: 240
fn depth : 5 ts: 300, fn color : 5 ts: 300
fn depth : 6 ts: 360, fn color : 6 ts: 360
fn depth : 7 ts: 420, fn color : 7 ts: 420
fn depth : 8 ts: 480, fn color : 8 ts: 480
fn depth : 9 ts: 540, fn color : 9 ts: 540
fn depth : 10 ts: 600, fn color : 10 ts: 600
fn depth : 11 ts: 660, fn color : 11 ts: 660
fn depth : 12 ts: 720, fn color : 12 ts: 720
fn depth : 13 ts: 780, fn color : 13 ts: 780
fn depth : 14 ts: 840, fn color : 14 ts: 840
fn depth : 15 ts: 900, fn color : 15 ts: 900
fn depth : 16 ts: 960, fn color : 16 ts: 960
fn depth : 17 ts: 1020, fn color : 17 ts: 1020
fn depth : 18 ts: 1080, fn color : 18 ts: 1080
fn depth : 19 ts: 1140, fn color : 19 ts: 1140
fn depth : 20 ts: 1200, fn color : 20 ts: 1200
fn depth : 21 ts: 1260, fn color : 21 ts: 1260
fn depth : 22 ts: 1320, fn color : 22 ts: 1320
@ev-mp
And if you print fs.size(), you will discover that the output differs: in debug mode, fs.size() is always 1, whereas when you simply 'run' the program, fs.size() is 2. Hmmm... Regardless of this strange behavior, I decided to assume that the frameset was all right, proceeded to do the alignment, and saved the aligned image:
What I see after loading the PNG file into MATLAB is this: I think either I am not properly configuring everything in my program, or there are some uncanny details of the SDK that everyone has overlooked. For now I am just going to use the
@FJShen - I had a similar issue to yours a while ago - see #4523.
@jb455 Thank you!!
In my case the depth unit is 0.001 meters, so we are effectively declaring that "the distance in mm between the first and the second imagers" is 0.001 meters, which is clearly not the case. Is this really not a bug in the code? @MartyG-RealSense
Here's the code that worked for me in the end, in case anybody is interested in referring to it (and thanks to everybody who contributed to this issue):
|
@FJShen Great news that you found a solution, and thanks for sharing your code with the community! |
Glad to see it worked for you, though it's a long time since I raised the issue so I had hoped it would've been fixed by now... |
I will now close this case, as a solution was achieved. Thanks everybody! |
I worked with SDK 2.49. The code in the last post of @OldAccountFJShen didn't work for me; I got only one frame, the depth.
And I got 2 frames, and the align operation gave me black depth images as a result, similar to what @OldAccountFJShen showed in his MATLAB screenshot.
@OldAccountFJShen, can you please check if it is ? I hoped that this idea of using software_device for alignment would work for me too. In addition, could you post the MATLAB code, along with the code that does the matching of depth to color that you used? Just so that there is another solution if this does not work, although I hope you just posted the wrong code or something. Thank you in advance.
@RealSenseSupport, can you please check why this isn't working anymore? @jb455, maybe you can spot the code changes that led to this code no longer working?
@MartyG-RealSense, another interesting question: can I use the RealSense software_device as in this example but with frames from other cameras (not RealSense at all), say a RealSense paired with a thermal camera, where the resolutions are different but I know the intrinsics and extrinsics from calibration? Will it work for frames of different resolutions, for example a RealSense at 1280 x 720 and another camera at 640 x 512?
My understanding is that you could use RGB color data from non-RealSense cameras in software_device if you feed the frames into OpenCV first, as mentioned in point 3 of the advice provided by a RealSense team member at #2249 (comment). There was also a discussion - unrelated to software_device - at #6801 about performing hardware sync between a RealSense camera and a Flir camera. #4202 and #10067 discuss performing alignment between data from RealSense and non-RealSense cameras with software_device and how the differences between the sources are adjusted for. In #4202, a D410 RealSense camera (which has a maximum depth resolution of 1280x720 and no RGB sensor) was being aligned with a non-RealSense camera at 4K RGB resolution.
@MartyG-RealSense, thank you very much, I'll investigate the issue. |
As mentioned here: #6522 (comment), it does not seem like this solution works as of 2.49. I'm going to try the software-device example and see if that is functioning correctly.
Issue Description
I was trying to generate a frameset consisting of a depth frame and a color frame (RGB), based on the code from the software-device example (https://github.com/IntelRealSense/librealsense/tree/master/examples/software-device).
The way I did it is as follows:
Now, the problem is that although I injected both a depth image and a color image into the device, the frameset that I received from the syncer always had a size of one. It turned out that the frameset never included a color frame!
My source code is included at the end of this post for anyone interested.
What I want to do is to align the two images. As a matter of fact, I have previously collected depth and RGB images from the camera (they were not aligned) and stored them as JPG/PNG files in the filesystem, named by their Unix-time timestamps (e.g. 1591200967.895594238.jpg). Now I want to align the depth images' viewpoint to that of the color images, so I am trying to use the software_device to re-merge these depth-color image pairs into framesets, use the align processing block to align them, and then save the aligned images.
In my code, I have set the timestamps of the frames to [idx]/60 and the frame numbers to [idx], where idx = 1..120. Given that the timestamps of these depth-color frame pairs are perfectly synced (identical, to be precise), the syncer should have recognized this and grouped them into a single frameset. But I never see a color frame in my frameset! This is perplexing; can anyone give me an explanation?