
RGB format issues on Jetson Nano #10358

Open
adonisyang999 opened this issue Mar 31, 2022 · 14 comments

Comments

@adonisyang999

adonisyang999 commented Mar 31, 2022

Hello RealSense community,
My issue description:
Using the same SR300 camera with the 2.48 library on Ubuntu 18.04 on a Jetson Nano, I found that the images obtained on Ubuntu and Windows 10 were inconsistent in color. In the Intel RealSense Viewer, the YUYV format on Ubuntu is closer to the RGB format on Windows 10, while the RGB format on Ubuntu on the Jetson Nano is more red and less blue. Also, when I use OpenCV to capture an image from the camera on Ubuntu, I get the same result as on Windows 10.
[image 1]
[image 2]

@MartyG-RealSense
Collaborator

Hi @adonisyang999 The differences between Windows and Ubuntu images may be because the default backend on the Windows version of the SDK is Windows Media Foundation, whilst the Ubuntu version uses the V4L backend by default. The backends are described in the link below.

https://dev.intelrealsense.com/docs/build-configuration#section-back-end-recommended-configuration

The Windows version of the SDK can be compiled from source code to use a backend based on a WinUSB rewrite of the UVC protocol, instead of Media Foundation. The CMake build flag for enabling this is -DFORCE_RSUSB_BACKEND=true

https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_windows.md

In the above guide for building Windows from source code, the section Enabling metadata on Windows can be skipped as hardware metadata support can be enabled much more easily by doing so in the RealSense Viewer program.
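Assuming a standard CMake workflow, the Windows RSUSB build described above might look like the sketch below. Only -DFORCE_RSUSB_BACKEND=true comes from the advice here; the generator name and paths are illustrative, not taken from the guide.

```shell
# Sketch: building librealsense on Windows with the RSUSB (WinUSB/UVC)
# backend instead of Media Foundation.
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense
mkdir build
cd build
cmake .. -G "Visual Studio 16 2019" -DFORCE_RSUSB_BACKEND=true
cmake --build . --config Release
```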


In regard to Jetson, if you have CUDA support enabled in your librealsense build then librealsense will accelerate YUY-to-RGB color conversion. Jetson packages should have CUDA enabled by default, whilst a source-code build on Jetson enables CUDA by including -DBUILD_WITH_CUDA=true in the CMake build instruction.
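On Jetson, a source build with CUDA enabled might be configured along these lines. This is a sketch: only -DBUILD_WITH_CUDA=true comes from the advice above, and the build-type flag and job count are illustrative.

```shell
# Sketch: building librealsense from source on Jetson Nano with
# CUDA-accelerated YUY-to-RGB conversion enabled.
cd librealsense
mkdir build
cd build
cmake .. -DBUILD_WITH_CUDA=true -DCMAKE_BUILD_TYPE=Release
make -j4
sudo make install
```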

It is also worth noting that Intel strongly recommend using the instructions in the link below to enable the barrel jack connector on Nano for extra power if your Nano board model has one.

https://www.jetsonhacks.com/2019/04/10/jetson-nano-use-more-power/

@adonisyang999
Author

So the answer is to change the backend that Windows uses to match the Ubuntu format. However, I found that images captured with OpenCV have the same color on Windows and Ubuntu, but the color produced on Ubuntu with library version 2.48 is different. Is there a way to change how 2.48 is built on Ubuntu so that it matches the existing Windows 10 version?

@MartyG-RealSense
Collaborator

In OpenCV, blue and red channels are often reversed because its default color order is BGR rather than RGB, as described at #9304 - that case provides code to resolve the color issue.
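For illustration, the swap can be reproduced and corrected without a camera attached. This is a minimal sketch; the `rgb_to_bgr` helper is my own name for it, and reversing the channel axis is equivalent to `cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)` without requiring OpenCV to be installed:

```python
import numpy as np

def rgb_to_bgr(frame: np.ndarray) -> np.ndarray:
    """Swap the red and blue channels of an H x W x 3 image.

    librealsense color frames are RGB, while OpenCV assumes BGR, so
    displaying an unconverted frame makes red areas look blue and
    vice versa. Reversing the last axis fixes the channel ordering.
    """
    return frame[..., ::-1].copy()

# A pure-red RGB pixel, reordered for BGR display.
pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)  # RGB red
print(rgb_to_bgr(pixel)[0, 0].tolist())  # [0, 0, 255]
```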

If the RSUSB backend is also used with librealsense on Ubuntu then, instead of the native V4L backend, librealsense will use a USB-based UVC protocol that is compliant with UVC 1.5.

@adonisyang999
Author

Through my tests today, I found that in version 2.49, if I compile with cmake .. and no extra flags, I get the same result as Windows 10. But when I use cmake .. -DBUILD_WITH_CUDA=true, CUDA compilation speeds up image acquisition but at the same time causes the chromatic aberration described above.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 1, 2022

Comparing the upper RGB image to the lower one where there is 'more red', I wonder whether the RGB white balance setting is too strong on Nano. Increasing white balance above the default value makes the colors progressively warmer and glowing.

In the images below, I tested with an SR300 in the Viewer on Windows with the white balance default of '4600' and then at '5000'

4600
[image]

5000
[image]

You could test in the Viewer's RGB options on Nano whether disabling the Auto White Balance option so that a fixed manual default of 4600 is used makes a positive difference.

[image]

@adonisyang999
Author

May I ask another question? If I want a program to access the data stream in YUYV format and convert it into an OpenCV-format image, is there a corresponding tutorial? I have not found a method for this on the internet.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 1, 2022

#7275 has Python code for accessing RAW16 RGB frames from the camera hardware and converting them into usable RGB8 frames with OpenCV's cvtColor instruction. Would that meet your needs, please?
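As a rough illustration of what such a conversion involves, here is a minimal NumPy-only sketch of decoding a packed YUYV (YUY2) buffer into RGB8 using BT.601 integer math. The `yuyv_to_rgb` helper is hypothetical, not SDK code; in practice the raw bytes would come from the camera via the SDK, and OpenCV's own `cv2.cvtColor(raw, cv2.COLOR_YUV2BGR_YUY2)` can do the same job:

```python
import numpy as np

def yuyv_to_rgb(raw: np.ndarray, width: int, height: int) -> np.ndarray:
    """Decode a packed YUYV frame (Y0 U Y1 V per pixel pair) to RGB8.

    Uses the BT.601 limited-range integer conversion. `raw` is a flat
    uint8 buffer of length height * width * 2.
    """
    yuyv = raw.reshape(height, width // 2, 4).astype(np.int32)
    y = np.empty((height, width), dtype=np.int32)
    y[:, 0::2] = yuyv[:, :, 0]               # Y0 of each pair
    y[:, 1::2] = yuyv[:, :, 2]               # Y1 of each pair
    u = np.repeat(yuyv[:, :, 1], 2, axis=1)  # U shared by the pair
    v = np.repeat(yuyv[:, :, 3], 2, axis=1)  # V shared by the pair

    c, d, e = y - 16, u - 128, v - 128
    r = (298 * c + 409 * e + 128) >> 8
    g = (298 * c - 100 * d - 208 * e + 128) >> 8
    b = (298 * c + 516 * d + 128) >> 8
    rgb = np.stack([r, g, b], axis=-1)
    return np.clip(rgb, 0, 255).astype(np.uint8)

# A uniform mid-gray YUYV frame (Y=128, U=V=128) decodes to RGB ~130.
gray = np.full(8, 128, dtype=np.uint8)  # one row of 4 pixels
print(yuyv_to_rgb(gray, 4, 1)[0, 0].tolist())  # [130, 130, 130]
```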

@adonisyang999
Author

Is there any C++ code for this?

@MartyG-RealSense
Collaborator

#6361 discusses YUY to RGB in terms of C++. The RealSense user in that case is also using Nano.

@MartyG-RealSense
Collaborator

Hi @adonisyang999 Do you require further assistance with this case, please? Thanks!

@haotiangao

I met the same problem the other day, and I tried disabling auto white balance, but it didn't work. The color on Nano didn't change at all. I may try another way.

@MartyG-RealSense
Collaborator

Hi @haotiangao Please do let me know the outcome of your next tests when you have performed them. Good luck!

@haotiangao

I tried using OpenCV to grab the picture, but it seems that pyrealsense and OpenCV can't use the camera at the same time. I'm trying another way.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 9, 2022

By the rules of the RealSense SDK, once a program has accessed a particular stream on a particular camera then that stream is 'claimed' until the program releases that claim by stopping the stream or closing the pipeline, so a second program cannot then access that stream whilst the first program is using it. This is described in the SDK documentation's Multi-Streaming Model.

https://github.com/IntelRealSense/librealsense/blob/master/doc/rs400_support.md#multi-streaming-model

Typically when OpenCV is used with RealSense, it is accessed from within a single RealSense SDK program written in C++ or Python, like the SDK example programs below:

C++
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv#list-of-samples

Python
https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_pointcloud_viewer.py
