Issue with Saving Depth Information in PNG Frames and Reading Unexpected Values #12564
Comments
Hi @ronbitonn When saving depth data to PNG files, most of the depth information is lost. There is no way to avoid this when saving to PNG, unfortunately. If possible it is better to save to the .raw image format which preserves depth information. The best RealSense reference about reading depth data from a .raw file (which follows the UVC specification for the depth stream) is at #2231 (comment) |
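As an illustration of the .raw round trip described above, the sketch below writes a synthetic uint16 depth array as headerless binary and reads it back with NumPy. The file name and the 1280x720 resolution are placeholder assumptions; with a live camera the array would come from `np.asanyarray(depth_frame.get_data())`.

```python
import numpy as np

# Synthetic stand-in for a real frame: np.asanyarray(depth_frame.get_data())
h, w = 720, 1280
rng = np.random.default_rng(0)
depth = rng.integers(0, 65536, size=(h, w), dtype=np.uint16)

# ".raw" here means headerless little-endian uint16 samples, row-major
depth.tofile("frame_0001.raw")

# Reading back requires knowing the resolution, since the file has no header
restored = np.fromfile("frame_0001.raw", dtype=np.uint16).reshape(h, w)
print(np.array_equal(depth, restored))  # True: no depth information was lost
```

Because the file is headerless, the width and height must be recorded separately (for example in the file name).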
Hi @MartyG-RealSense , I hope this message finds you well. I've been facing an issue with capturing RGB and depth information using the Intel D435 camera and saving them as PNG files. When attempting to read the depth information from these PNG files, I encounter unexpected values that represent RGB values instead of the actual depth values. After reviewing your guidance on the limitations of PNG for preserving depth information, I'm seeking recommendations on how to efficiently save both RGB and depth data for subsequent use in object detection algorithms. My goal is to save PNG files for the RGB frames and, for each frame, a corresponding matrix of depth information the size of the RGB image. Ultimately, I aim to create a folder with X*60 frames, along with an equal number of text files containing depth information. This structure will let me correlate depth information with RGB data, facilitating the development of object detection algorithms. Could you kindly recommend the best approach for achieving this goal while ensuring that depth information is accurately preserved for subsequent analysis? Thank you in advance for your assistance. |
#4934 (comment) has a Python script for saving RGB as a PNG image and depth as an array of scaled matrices saved as an .npy file. |
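As a minimal sketch of the depth half of that approach (the file name is a placeholder, and a constant array stands in for a real depth frame), NumPy's .npy format stores the matrix losslessly, including its dtype and shape:

```python
import numpy as np

# Synthetic stand-in for np.asanyarray(depth_frame.get_data())
depth = np.full((720, 1280), 1234, dtype=np.uint16)

np.save("frame_0001_depth.npy", depth)   # lossless; preserves dtype and shape
restored = np.load("frame_0001_depth.npy")
print(restored.dtype, restored[0, 0])    # uint16 1234
```

One .npy file per RGB PNG, sharing a timestamped file name stem, gives the frame-to-depth correlation described in the question.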
Hi @ronbitonn Do you require further assistance with this case, please? Thanks! |
Case closed due to no further comments received. |
Subject: Performance Issue in Bag File Conversion to JPEG and Depth Arrays with Jetson Orin and RealSense D435 Camera
Issue: While using the provided scripts with a Jetson Orin and a RealSense D435 camera, there is a noticeable performance concern during the conversion of rosbag files to JPEG images and depth arrays. The current implementation takes approximately 30 seconds to convert one minute of recorded data, which is suboptimal for real-time or large-scale applications.
Platform Details: Jetson Orin
The observed bottleneck lies in the code responsible for processing and converting the frames from the rosbag files to JPEG images and depth arrays. This performance concern might impact the scalability and real-time usability of the solution on the Jetson Orin platform.
First code (python3 record_multiple_bagfiles_d435.py, excerpt):
import os
import rosbag
NUM_RECORDINGS = 20
ctx = rs.context()
if len(devices) >= 1:
Second code (excerpt):
#!/usr/bin/env python3
import os
bag_directory = "/data/records/records_ivri_farm/new_31_01_2024"
for bag_file in os.listdir(bag_directory):
|
Hi @ronbitonn If you increase the frame queue size then the pipeline will be able to contain more frames simultaneously (the default maximum pipeline capacity is 16 frames for each stream type), though this increases the amount of computer memory that is consumed. Python code for setting the frame queue size can be found at #6448 (comment) |
I appreciate your suggestion on increasing the frame queue size. Currently, due to storage limitations, I record as a rosbag file first and then convert to JPEGs and depth arrays separately. Any recommendations for a more efficient approach? Additionally, if there's an option to reduce the number of frames and depth arrays saved per second in the rosbag file, it would be beneficial for storage optimization on my Jetson Orin. Thanks for your advice. |
Rosbags are the most efficient method of writing multiple frames of RealSense data. A way to set a custom frame rate is to only use every 'nth' frame (e.g every 5th frame out of 30 when using 30 FPS to simulate 6 FPS). A Python script that demonstrates this technique is at #3169 |
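The "nth frame" decimation mentioned above can be sketched with a plain counter; the integer frame numbers below stand in for frames arriving from the pipeline:

```python
SKIP = 5                            # keep every 5th frame: 30 FPS -> 6 FPS effective
kept = []
for frame_number in range(30):      # one second of frames at 30 FPS
    if frame_number % SKIP == 0:
        kept.append(frame_number)   # in practice: save the PNG/.npy here
print(kept)  # [0, 5, 10, 15, 20, 25]
```

This trades temporal resolution for a 5x reduction in storage, which directly addresses the rosbag size constraint on the Jetson Orin.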
ok @MartyG-RealSense thank you very much I will check it out and update soon. Appreciate your help a lot. |
Problem Description:
I am currently facing an issue with my code for capturing RGB and depth information using an Intel D435 camera. While my code successfully saves frames as PNG files, I encounter unexpected values when attempting to read the depth information. The depth array appears to represent RGB values instead of actual depth values.
Initial Code for Capturing RGB and Depth Frames:
import pyrealsense2 as rs
import cv2
import numpy as np
import time
import datetime
import os
# Function to visualize the depth image as a colored image based on distance values
def visualize_depth_as_color(depth_image):
    depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
    return depth_colormap

# Constants for recording
NUM_RECORDINGS = 480
BREAK_TIME = 2
OUTPUT_DIRECTORY = "/data/records/-----"
width_image = 1280
height_image = 720
video_length = 60

# Initialize context for RealSense and query for devices
ctx = rs.context()
devices = ctx.query_devices()

# Check if there's at least one device connected
if len(devices) >= 1:
    # Use the first detected device
    first_device = devices[0]
    # Get the serial number of the device for identification
    first_device_serial_number = first_device.get_info(rs.camera_info.serial_number)

# Close all OpenCV windows after recordings are done
cv2.destroyAllWindows()
Original Code for Reading Depth Information:
For context, here is the code for reading depth information from saved frames:
import cv2
import os
import numpy as np
DEPTH_DIRECTORY = list_false_visit[1]  # path list defined elsewhere in my script
depth_files = [file for file in sorted(os.listdir(DEPTH_DIRECTORY))]
for depth_file in depth_files[:3]:
    depth_image_path = os.path.join(DEPTH_DIRECTORY, depth_file)
    depth_image = cv2.imread(depth_image_path, cv2.IMREAD_UNCHANGED)
Output:
Depth information for the first 10 pixels in frame_depth_0725_20231003_143459_016758.png:
Pixel (0, 0): Depth = [128 0 0] mm
Pixel (1, 0): Depth = [128 0 0] mm
Pixel (2, 0): Depth = [128 0 0] mm
...
Desired Outcome:
I expect to retrieve the actual depth values for each pixel instead of an array of RGB values.
Recommendations Sought:
I am considering two alternatives:
I would appreciate guidance or recommendations on the best approach for achieving my goal while managing file size efficiently.
Please review the issue and assist in resolving the problem with reading unexpected values from the depth frames.