Verify that the kernel 6.2.8 with no realsense string #12581
Comments
Hi @evallhq Have you installed from source code during the installation process? Although the distribution_linux.md instructions begin with the Configuring and building from the source code section, the intention is that this section should be skipped over and you should begin at the Installing the Packages section that you linked to.
Thanks for your reply. No, I didn't install from the source code; I just started at the Installing the Packages part and followed each command.
If you start the depth stream in realsense-viewer and overlay metadata information by clicking on the icon highlighted by a white arrow in the image below, does the Clock Domain line say 'System Time' or something else? If it says System Time, this would indicate that support for hardware metadata is not enabled. The kernel patch that is built into the DKMS packages usually provides that metadata support.
Thanks for your reply. I followed your suggestion, and the Clock Domain line says 'Global Time', just like the screenshot below.
If the clock domain is Global Time then this indicates that hardware metadata support is enabled. Your kernel is therefore likely okay and patched for RealSense despite the RealSense string not displaying.
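For reference, the same check can be done in code rather than through the realsense-viewer overlay. The sketch below is an illustration only, assuming a single connected camera and the default stream configuration; it uses the SDK's get_frame_timestamp_domain() call:

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::pipeline pipe;
    pipe.start();                                      // default stream configuration

    rs2::frameset frames = pipe.wait_for_frames();     // grab one frameset
    rs2::depth_frame depth = frames.get_depth_frame();

    // GLOBAL_TIME or HARDWARE_CLOCK indicates that hardware metadata support
    // (normally provided by the DKMS kernel patch) is working; SYSTEM_TIME
    // means timestamps come from the host clock only.
    rs2_timestamp_domain domain = depth.get_frame_timestamp_domain();
    std::cout << "Clock domain: " << rs2_timestamp_domain_to_string(domain) << std::endl;

    pipe.stop();
    return 0;
}
```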
Okay, got it. Thank you very much!
Hi @MartyG-RealSense, I am very new to this camera, and I am also trying to use it on an Ubuntu machine. The firmware version for me is 6.2.0, and the camera is a D435i. I was trying to follow the "building from source" procedures. However, I found that the pre-built packages page had been specifically updated to mention the new support for FW 6.2. So I just deleted the source files and tried to follow the setup on that pre-built page. I also couldn't see the "realsense" string, but could see "Global Time" after following the procedures. Does this mean I can now write code (I am using C++), include the header file, and test the camera?
Hi @FANFANFAN2506 The camera can be used even if the clock domain is not Global Time. The difference between Global Time and System Time (which is used when hardware metadata support is not enabled) is that System Time is based on the internal clock of the computer. More information about Global Time can be found at #3909

C to C USB cables tend to have more problems when used with RealSense cameras than USB Type C (A to C) cables. The camera can operate in USB 2.1 mode, but the data transfer speed of USB 2.1 is slower and the number of resolution / FPS modes supported on a USB 2.1 connection is limited compared to USB 3. The connection may be USB 2.1 if the camera is plugged into a USB 2.1 hub or a USB 2.1 port on the computer, or if a self-chosen USB cable being used instead of the official cable is USB 2.1 rather than USB 3 (USB 2.1 cables lack the extra wires that enable a device to be detected as USB 3).

In regard to helpful links, the ones below may be useful.
- White Paper guides
- Official RealSense YouTube channel
- Official RealSense blog articles
- Menu-driven searchable version of the C++ API
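The USB connection type mentioned above can also be queried from code instead of the viewer. A minimal sketch, assuming at least one connected RealSense device:

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::context ctx;
    for (rs2::device dev : ctx.query_devices())
    {
        std::cout << dev.get_info(RS2_CAMERA_INFO_NAME) << ": ";
        if (dev.supports(RS2_CAMERA_INFO_USB_TYPE_DESCRIPTOR))
            // "3.2" (or similar) means a full USB 3 link; "2.1" means the
            // reduced set of resolution / FPS modes described above.
            std::cout << "USB " << dev.get_info(RS2_CAMERA_INFO_USB_TYPE_DESCRIPTOR) << std::endl;
        else
            std::cout << "USB type descriptor not reported" << std::endl;
    }
    return 0;
}
```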
Hi @FANFANFAN2506 Do you require further assistance with this case, please? Thanks!
Hi Marty, thank you for your help. I don't have any questions about the installation on my current kernel version.
Hi @FANFANFAN2506 If you are using a single camera then using wait_for_frames() will usually be best because of the benefits it brings in keeping different stream types relatively synced. On the IMU streams, each IMU data packet is timestamped using the depth sensor hardware clock to allow temporal synchronization between gyro, accel and depth frames.

When the depth and color FPS are the same, sync between the two streams should also automatically kick in. A way to help ensure that the FPS of both streams is constant is to have auto-exposure enabled but disable an RGB option called Auto-Exposure Priority. This causes the librealsense SDK to attempt to enforce a constant FPS for both streams instead of permitting FPS to vary.

In regard to syncing timestamps later, you could consider syncing the depth and RGB streams using the Time of Arrival type of timestamp, as described at #2186
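A minimal sketch of the Auto-Exposure Priority suggestion above, assuming the attached camera's RGB sensor exposes that option (as the D400 series does); the 300-frame loop is only for illustration:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();   // depth + color by default

    // Find the sensor that exposes Auto-Exposure Priority (the RGB sensor on D400 cameras).
    for (rs2::sensor sensor : profile.get_device().query_sensors())
    {
        if (sensor.supports(RS2_OPTION_AUTO_EXPOSURE_PRIORITY))
        {
            // Keep auto-exposure on, but stop it from lowering the frame rate,
            // so depth and color stay at a constant, matched FPS.
            sensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 1.f);
            sensor.set_option(RS2_OPTION_AUTO_EXPOSURE_PRIORITY, 0.f);
        }
    }

    for (int i = 0; i < 300; ++i)
    {
        rs2::frameset frames = pipe.wait_for_frames();  // depth + color, kept relatively in sync
        // ... process frames ...
    }

    pipe.stop();
    return 0;
}
```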
Hi @MartyG-RealSense, sounds good; using wait_for_frames() is very straightforward.
Instead of constantly writing frames to disk, you can use the Keep() function of librealsense to store the frames in memory and then perform batch-processing on all the frames simultaneously when the pipeline is closed. For example, applying post-processing and alignment to the frames and then saving them to disk. The main limitation of Keep() is that storing the frames in memory progressively consumes the available memory capacity of the computer over time. So unless frames are released to free up memory space, you may only be able to store 10 seconds worth of frames on a low-end computing device, or 30 seconds on a PC with plenty of memory.

An alternative to Keep() for improving performance could be to increase the frame queue capacity so that librealsense can hold a greater number of frames in the pipeline simultaneously (by default, up to 16 frames of each stream type can be held in the pipeline at once, and the oldest frames drop out of the queue like the end of a conveyor belt). This can also cause a greater amount of available memory to be consumed, though likely not as fast as Keep() would.
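A minimal sketch of the Keep() pattern described above; the 300-frame capture window (roughly 10 seconds at 30 FPS) is an arbitrary illustration, not a recommendation:

```cpp
#include <librealsense2/rs.hpp>
#include <vector>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    std::vector<rs2::frame> stored;
    for (int i = 0; i < 300; ++i)                 // roughly 10 s of depth at 30 FPS
    {
        rs2::frameset fs = pipe.wait_for_frames();
        rs2::depth_frame depth = fs.get_depth_frame();
        depth.keep();                             // stop the SDK recycling this frame's memory
        stored.push_back(depth);                  // note: RAM use grows for the whole capture
    }
    pipe.stop();

    // Batch stage: post-processing, alignment and saving to disk happen here,
    // after the capture, rather than inside the streaming loop.
    for (rs2::frame& f : stored)
    {
        // ... e.g. apply filters and write f to disk ...
    }
    return 0;
}
```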
Thanks @MartyG-RealSense for your continuous and fast help! I will try to look at those and test afterwards. However, the length of time that Keep() can hold frames for a typical memory size isn't ideal for my use. I am also quite curious how people usually use the camera; I assume recording frames over a period of time is normal, and perhaps saving to disk is just much slower than operations involving memory. No further questions for now. Thanks a lot!
You are very welcome! As long as the camera's internal temperature remains within the recommended maximum range (officially 35 degrees C but more like 42 degrees in practice), it is capable of running indefinitely so long as the computer or USB equipment does not experience a problem. When recording to disk though, the recording duration will be limited by the amount of available drive storage space. The access speed of the computer's storage drive can act as a bottleneck during recording if the drive is not fast enough to keep up with the rate at which the computer is attempting to write data to it.
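For long recording sessions, the internal temperature mentioned above can be monitored in code. A hedged sketch, assuming the depth sensor exposes the ASIC temperature option (true for D400-series cameras):

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();

    rs2::depth_sensor depth_sensor = profile.get_device().first<rs2::depth_sensor>();
    if (depth_sensor.supports(RS2_OPTION_ASIC_TEMPERATURE))
    {
        // Poll this periodically during a long recording session.
        float temp_c = depth_sensor.get_option(RS2_OPTION_ASIC_TEMPERATURE);
        std::cout << "ASIC temperature: " << temp_c << " C" << std::endl;
    }

    pipe.stop();
    return 0;
}
```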
Hi @evallhq and @FANFANFAN2506 Do either of you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense, Thanks for your follow-up. I have figured out a way to store the frames as soon as I receive them. Additionally, I am also thinking of recording the IMU data (gyro and acceleration) while recording the RGB and depth frames. However, I believe they stream at different frequencies, so I assume that putting them on the same pipeline and using wait_for_frames will cause the IMU to be recorded at the same frequency as RGB and depth, which is much slower than what it can do. I wonder if there are official functions that could help with this. My alternative solution would be to use C++ multi-threading. Please provide any suggestions or examples. Thank you so much.
Hi @FANFANFAN2506 Each IMU data packet is timestamped using the depth sensor hardware clock to allow temporal synchronization between gyro, accel and depth frames. So the frequency of the IMU compared to depth / RGB is usually not something that needs to be a concern.

Streaming depth, RGB and IMU simultaneously can cause problems, though, that do not occur when only using depth + RGB or IMU on its own. The solution for this in the Python language is to create two separate pipelines, with depth + RGB on one pipeline and IMU on the other. The best example of such a script is at #5628 (comment)

However, I note that you are using C++. In that language a different approach of using callbacks is required. An example of a script for doing so can be found at #6426
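A rough sketch of the callback approach, modelled on the SDK's rs-callback example rather than the exact script in #6426; the 10-second streaming window is only for illustration. Depth and RGB arrive bundled as framesets while gyro and accel arrive as individual motion frames, so the IMU is not throttled to the image frame rate:

```cpp
#include <librealsense2/rs.hpp>
#include <chrono>
#include <mutex>
#include <thread>

int main()
{
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH);
    cfg.enable_stream(RS2_STREAM_COLOR);
    cfg.enable_stream(RS2_STREAM_GYRO);
    cfg.enable_stream(RS2_STREAM_ACCEL);

    std::mutex mtx;   // the callback runs on SDK threads, so guard any shared state

    auto callback = [&](const rs2::frame& frame)
    {
        std::lock_guard<std::mutex> lock(mtx);
        if (rs2::frameset fs = frame.as<rs2::frameset>())
        {
            // Depth + color arrive bundled together at the image frame rate.
            // ... store fs.get_depth_frame() and fs.get_color_frame() ...
        }
        else if (rs2::motion_frame m = frame.as<rs2::motion_frame>())
        {
            // Gyro / accel samples arrive individually at their own, higher rate.
            rs2_vector v = m.get_motion_data();
            // ... store v.x, v.y, v.z together with m.get_timestamp() ...
        }
    };

    rs2::pipeline pipe;
    pipe.start(cfg, callback);

    std::this_thread::sleep_for(std::chrono::seconds(10));   // stream for a while
    pipe.stop();
    return 0;
}
```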
Hi @MartyG-RealSense, Thank you so much for your support. The links you provided are indeed very helpful, and I am also thrilled to know that the RealSense support team has already solved the previous D435i problem mentioned in the post.
@FANFANFAN2506 I do not have information about your integer / decimal question, unfortunately. It may be worth studying the RealSense SDK's rs-data-collect C++ example program, which accesses depth and color and also additionally the IMU if the camera is equipped with one. https://github.com/IntelRealSense/librealsense/tree/master/tools/data-collect
Hi @FANFANFAN2506 Do you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense, Thanks for your help. I want to ask two questions about the captured frames:
Hi @MartyG-RealSense, thanks for your continuous help. I realized that I need to disable auto-exposure, not only because of the abnormal frames but also because of how the images are used. However, the code examples I found require starting the pipeline first and then getting the sensor and setting the auto-exposure. I am currently using the callback function to stream 4 different types of frames, so I assume the camera will start to work as long as the …
Instructions to disable auto-exposure are usually placed on a line after the pipeline start line. If that is not possible in your project then an alternative approach could be to define a json camera configuration file that contains the line …
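A minimal sketch of the first suggestion (disabling RGB auto-exposure on the line after the pipeline start, even when a frame callback is used). The fixed exposure value of 156 is a hypothetical example, not a recommended setting, and the rs2::color_sensor wrapper assumes a reasonably recent SDK version:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH);
    cfg.enable_stream(RS2_STREAM_COLOR);

    rs2::pipeline pipe;
    auto callback = [](const rs2::frame& frame) { /* ... handle the incoming frames ... */ };
    rs2::pipeline_profile profile = pipe.start(cfg, callback);

    // Placed on the line after pipe.start(), as suggested above.
    rs2::color_sensor color = profile.get_device().first<rs2::color_sensor>();
    color.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 0.f);   // turn RGB auto-exposure off
    color.set_option(RS2_OPTION_EXPOSURE, 156.f);             // hypothetical fixed exposure value

    // ... keep streaming, then stop when finished ...
    pipe.stop();
    return 0;
}
```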
In that case, I am assuming …
Hi @FANFANFAN2506 Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received. |
Issue Description
Hello,
I installed the RealSense SDK 2.0 as described in the Installing the Packages part of the following document:
https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages.
The whole installation appears successful -- I can run realsense-viewer fine with my RealSense D455.
However, the command
modinfo uvcvideo | grep "version:"
just gives output with no realsense string. Can I ignore this situation, or do I need to re-install from the source package instead of using the packages? (I notice that the RealSense DKMS kernel drivers package (librealsense2-dkms) supports Ubuntu LTS kernels 6.2.)
Thank you very much.