
AttributeError: 'pyrealsense2.device' object has no attribute 'first_color_sensor' #10002

Closed
sivashankar28 opened this issue Nov 28, 2021 · 37 comments

@sivashankar28

Hi, I am trying to install librealsense on my NVIDIA AGX, but after running through an install and executing my program I keep getting this error:
AttributeError: 'pyrealsense2.device' object has no attribute 'first_color_sensor'

Using: Python 3.6 in an Archiconda environment

Does anyone know how to resolve this?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 28, 2021

Hi @sivashankar28 Have you installed the pyrealsense2 Python compatibility 'wrapper', please? The pyrealsense2 wrapper is not included in the librealsense SDK by default and has to be deliberately specified for inclusion. This can be done by including the build flag -DBUILD_PYTHON_BINDINGS:bool=true when building librealsense from source code with CMake.

An example of instructions building librealsense for Python 3.6 with pyrealsense2 included in the build can be found at #6964 (comment)

The Python path (PYTHONPATH) on Conda may be different from the ones normally used with Python, though. You should be able to find the correct Python path for Conda with the command which python3
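If it helps, here is a small diagnostic sketch (not from the SDK; just standard-library Python) for checking which copy of a module a given interpreter would actually import. Running it from inside the conda environment shows whether pyrealsense2 resolves to the /usr/local/lib build or is not found at all.

```python
import importlib.util

def locate_module(name: str) -> str:
    """Return the file a module would be imported from, or a note if absent."""
    spec = importlib.util.find_spec(name)
    if spec is None or spec.origin is None:
        return "not importable"
    return spec.origin

# With the interpreter that `which python3` reports, this reveals which
# pyrealsense2 build (if any) is on the module search path.
print(locate_module("pyrealsense2"))
```

If this prints "not importable", the PYTHONPATH seen by the conda interpreter does not include the wrapper's install location.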

@sivashankar28
Author

Okay, when I type import pyrealsense2 in Python in my conda environment, it seems to work with no errors.
However, I think you are correct about the Python paths.
My .bashrc file has:
export PATH=$PATH:~/.local/bin
export PYTHONPATH=$PYTHONPATH:/usr/local/lib
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python3.6/pyrealsense2

when I type in which python3
the output is /home/XYZ/archiconda3/envs/XYZ/bin/python3

However, I am not sure how to change the conda Python path to /usr/local/lib, if that is what I should do.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 29, 2021

As the error you experienced was AttributeError: 'pyrealsense2.device' object has no attribute 'first_color_sensor', this makes me think the pyrealsense2 installation may be okay and the problem is instead in the Python script's definition of the first_color_sensor instruction. If there were a path problem, I would expect an error such as No module named 'pyrealsense2'. So let's first eliminate the possibility that there is an error in the code.

Have you defined a profile instruction in your pipeline start statement and then called on profile in your first_color_sensor line, as in the example Python code below?

import pyrealsense2 as rs
pipeline = rs.pipeline()
config = rs.config()
profile = pipeline.start(config)
color_sensor = profile.get_device().first_color_sensor()

Could you also tell me which RealSense camera model you are using? If it were a caseless Depth Module Kit board model without an RGB sensor, such as the D430, then a script that called on the color sensor could fail to detect one.

@sivashankar28
Author

I know the code itself works, as I have used it on my Linux-based laptop, and I have the pipeline setup that you described.
I am using a D415.

@MartyG-RealSense
Copy link
Collaborator

Do you also get an error if you change first_color_sensor to first_depth_sensor to look for the depth sensor instead? This would help determine whether the problem is with detecting the color sensor specifically, or with both depth and color.

@sivashankar28
Author

Okay, I tried both:
depth_sensor = profile.get_device().first_depth_sensor()
color_sensor = profile.get_device().first_color_sensor()
When I comment out the color_sensor line, I receive an error: no module called cupy.
When I comment out the depth_sensor line, I receive the same attribute error as before.

@MartyG-RealSense
Collaborator

I am not familiar with CuPy, but my research indicates that it is related to CUDA acceleration on computers and computing devices with an Nvidia GPU, such as Jetson boards.

What method did you use to install librealsense on your Jetson AGX? librealsense can support CUDA, but that support needs to be enabled. If librealsense was installed from packages using the instructions on Intel's installation_jetson.md Jetson installation page, then CUDA support should be included in the packages. If librealsense is built from source code with CMake, though, then the build flag -DBUILD_WITH_CUDA=true should be included in the CMake build instruction to enable CUDA support.

@sivashankar28
Author

It was built from source, and I think the CUDA 10.2 setup when using CMake was correct. I think I need CuPy 8.2 in my conda environment.
When I comment out the color sensor:
depth_sensor = profile.get_device().first_depth_sensor()
# color_sensor = profile.get_device().first_color_sensor()

this is what prints out, but it gets stuck and nothing else happens:
INFO:root:Attempting to enter advanced mode and upload JSON settings file
INFO:root:Found device that supports advanced mode: 'Intel RealSense D415'
INFO:root:Advanced mode is 'enabled'
INFO:root:Pipeline Created

When I uncomment it all, I still receive the same error

@MartyG-RealSense
Collaborator

Would it be possible to post your Python script in a comment on this discussion, please? I note, though, that you mentioned in #10002 (comment) that the script worked correctly on a Linux-based laptop, suggesting that the code is fine and this is a Jetson-specific issue.

When the script is run on the Linux laptop, does it produce an output without becoming 'stuck'?

@MartyG-RealSense
Collaborator

Hi @sivashankar28 Do you require further assistance with this case, please? Thanks!

@sivashankar28
Author

Hi @MartyG-RealSense I am still having the same issue. CuPy has been installed, but I am not sure if it's a cuDNN issue. The code works perfectly well, with an output, on the laptop.

@MartyG-RealSense
Collaborator

Is the laptop also using Python 3.6 and Archiconda?

@sivashankar28
Author

Yes it is

@MartyG-RealSense
Collaborator

I looked through the case again from the beginning. As you mention at the start, it does seem to be centered around first_color_sensor. Are you able to confirm whether the color stream can be accessed on your Jetson AGX, either by running an application such as the RealSense Viewer or by testing another pyrealsense2 script that makes use of the color stream?

@sivashankar28
Author

Actually, the colour stream works in realsense-viewer, but it states "Incomplete frame received: Incomplete video frame detected!". However, when I switched the RealSense camera to a PC, no such error came up.

@sivashankar28
Author

I have solved the incomplete video frame issue by reducing the resolution in realsense-viewer, but I still have the same color issue as before.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 6, 2021

It's good to hear that you resolved the incomplete frame problem. As background information to better understand why that particular problem may occur, #927 (comment) is a useful reference.

@sivashankar28
Author

Thank you, please let me know if you have any thoughts about the color sensor issue.

@MartyG-RealSense
Collaborator

Instead of using first_depth_sensor and first_color_sensor, an alternative way to access a particular sensor is with the pipeline.get_active_profile().get_device().query_sensors()[index] instruction. The depth sensor can be accessed with an index of 0, whilst the RGB sensor is accessed with an index of 1.

#4449 (comment) has an example of Python code for accessing the RGB sensor with the index number [1].


So you could take the following approach to defining the sensors.

depth_sensor = pipeline.get_active_profile().get_device().query_sensors()[0]
color_sensor = pipeline.get_active_profile().get_device().query_sensors()[1]

@sivashankar28
Author

Okay, I think it could work, as I am now getting:
"AttributeError: 'pyrealsense2.sensor' object has no attribute 'get_depth_scale'"
Do you know the equivalent of the following?
depth_scale = depth_sensor.get_depth_scale()

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 7, 2021

If your project is only going to be used with 400 Series cameras and not the L515 then you could hard-code the 400 Series cameras' default depth scale value of 0.001 for the depth scale instead of retrieving it from the camera in real-time with depth_sensor.get_depth_scale(). For example:

depth_scale = 0.001

@sivashankar28
Author

I will be using the D400 and L515 cameras. For now, I changed the depth scale to the value above, and it gets stuck after this:

INFO:root:Attempting to enter advanced mode and upload JSON settings file
INFO:root:Found device that supports advanced mode: 'Intel RealSense D415'
INFO:root:Advanced mode is 'enabled'
INFO:root:Pipeline Created

I realized the API on my laptop is different from the one on the Jetson.
I am using librealsense version 2.45 on my laptop (Linux x86).
On the Jetson (AGX) the version is 2.50.
Do you know what I should do to update the Python APIs?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 7, 2021

The device management script 'realsense_device_manager.py' in the SDK's box_dimensioner_multicam Python example project demonstrates in the code section highlighted in the link below how a different value can be set depending on whether an L515 or a 400 Series camera is attached.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/box_dimensioner_multicam/realsense_device_manager.py#L160-L169
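As a rough illustration of that pattern, the selection logic boils down to picking a fallback depth scale from the device's product name. This is a sketch, not the SDK's code; the 0.001 and 0.00025 values are the commonly cited defaults for the 400 Series and L515 respectively, so treat them as assumptions and verify against your own devices with get_depth_scale() where possible.

```python
def default_depth_scale(device_name: str) -> float:
    """Pick a fallback depth scale (metres per depth unit) by product name.

    Assumed defaults: 0.001 for 400 Series cameras, 0.00025 for the L515.
    """
    if "L515" in device_name:
        return 0.00025
    return 0.001

# The device name comes from dev.get_info(rs.camera_info.name) at runtime.
print(default_depth_scale("Intel RealSense D415"))   # 0.001
print(default_depth_scale("Intel RealSense L515"))   # 0.00025
```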

If you installed librealsense from source code on Jetson - as indicated by #10002 (comment) - then if you wanted to build 2.45.0 from source code on the AGX with the 2.45.0 Python wrapper, I would think that you could download the 2.45.0 source code folder and use it with CMake to build librealsense and the Python wrapper at the same time.

https://github.com/IntelRealSense/librealsense/releases/tag/v2.45.0


My understanding is that if you have already built 2.50.0 from source code on your AGX and want to uninstall it before installing 2.45.0, you would go to the build folder of the 2.50.0 SDK and use the command below to uninstall the SDK and clean the CMake cache.

sudo make uninstall && make clean

@sivashankar28
Author

I thought it might be best to upgrade the code instead of downgrading. I will want to switch between using the D400 and L515 cameras, but not at the same time.

This is my current create-pipeline code, but I'm not sure what to change to make it compatible with the new API:

def create_pipeline(config: dict):
    """Sets up the pipeline to extract depth and rgb frames

    Arguments:
        config {dict} -- A dictionary mapping for configuration. see default.yaml

    Returns:
        tuple -- pipeline, process modules, filters, t265 device (optional)
    """
    # Create pipeline and config for D4XX,L5XX
    pipeline = rs.pipeline()
    rs_config = rs.config()

    # If T265 is enabled, need to handle separately
    t265_dev = None
    t265_sensor = None
    t265_pipeline = rs.pipeline()
    t265_config = rs.config()

    if config['playback']['enabled']:
        # Load recorded bag file
        rs.config.enable_device_from_file(
            rs_config, config['playback']['file'], config['playback'].get('repeat', False))

        # This code is only activated if the user points to a T265 recorded bag file
        if config['tracking']['enabled']:
            rs.config.enable_device_from_file(
                t265_config, config['tracking']['playback']['file'], config['playback'].get('repeat', False))

            t265_config.enable_stream(rs.stream.pose)
            t265_pipeline.start(t265_config)
            profile_temp = t265_pipeline.get_active_profile()
            t265_dev = profile_temp.get_device()
            t265_playback = t265_dev.as_playback()
            t265_playback.set_real_time(False)

    else:
        # Ensure device is connected
        ctx = rs.context()
        devices = ctx.query_devices()
        if len(devices) == 0:
            logging.error("No connected Intel Realsense Device!")
            sys.exit(1)

        if config['advanced']:
            logging.info("Attempting to enter advanced mode and upload JSON settings file")
            load_setting_file(ctx, devices, config['advanced'])

        if config['tracking']['enabled']:
            # Cycle through connected devices and print them
            for dev in devices:
                dev_name = dev.get_info(rs.camera_info.name)
                print("Found {}".format(dev_name))
                if "Intel RealSense D4" in dev_name:
                    pass
                elif "Intel RealSense T265" in dev_name:
                    t265_dev = dev
                elif "Intel RealSense L515" in dev_name:
                    pass

            if config['tracking']['enabled']:
                if len(devices) != 2:
                    logging.error("Need 2 connected Intel Realsense Devices!")
                    sys.exit(1)
                if t265_dev is None:
                    logging.error("Need Intel Realsense T265 Device!")
                    sys.exit(1)

                if t265_dev:
                    # Unable to open as a pipeline, must use sensors
                    t265_sensor = t265_dev.query_sensors()[0]
                    profiles = t265_sensor.get_stream_profiles()
                    pose_profile = [profile for profile in profiles if profile.stream_name() == 'Pose'][0]
                    t265_sensor.open(pose_profile)
                    t265_sensor.start(callback_pose)
                    logging.info("Started streaming Pose")

    rs_config.enable_stream(
        rs.stream.depth, config['depth']['width'],
        config['depth']['height'],
        rs.format.z16, config['depth']['framerate'])
    # other_stream, other_format = rs.stream.infrared, rs.format.y8
    rs_config.enable_stream(
        rs.stream.color, config['color']['width'],
        config['color']['height'],
        rs.format.rgb8, config['color']['framerate'])

    # Start streaming
    pipeline.start(rs_config)
    profile = pipeline.get_active_profile()

    # depth_sensor = profile.get_device().first_depth_sensor()
    # color_sensor = profile.get_device().first_color_sensor()
    depth_sensor = pipeline.get_active_profile().get_device().query_sensors()[0]
    color_sensor = pipeline.get_active_profile().get_device().query_sensors()[1]

    depth_scale = depth_sensor.get_depth_scale()
    # depth_sensor.set_option(rs.option.global_time_enabled, 1.0)
    # color_sensor.set_option(rs.option.global_time_enabled, 1.0)

    if config['playback']['enabled']:
        dev = profile.get_device()
        playback = dev.as_playback()
        playback.set_real_time(False)

    # Processing blocks
    filters = []
    decimate = None
    align = rs.align(rs.stream.color)
    depth_to_disparity = rs.disparity_transform(True)
    disparity_to_depth = rs.disparity_transform(False)
    # Decimation
    if config.get("filters").get("decimation"):
        filt = config.get("filters").get("decimation")
        if filt.get('active', True):
            filt.pop('active', None)  # Remove active key before passing params
            decimate = rs.decimation_filter(**filt)

    # Spatial
    if config.get("filters").get("spatial"):
        filt = config.get("filters").get("spatial")
        if filt.get('active', True):
            filt.pop('active', None)  # Remove active key before passing params
            my_filter = rs.spatial_filter(**filt)
            filters.append(my_filter)

    # Temporal
    if config.get("filters").get("temporal"):
        filt = config.get("filters").get("temporal")
        if filt.get('active', True):
            filt.pop('active', None)  # Remove active key before passing params
            my_filter = rs.temporal_filter(**filt)
            filters.append(my_filter)

    process_modules = (align, depth_to_disparity, disparity_to_depth, decimate)

    intrinsics = get_intrinsics(pipeline, rs.stream.color)
    proj_mat = create_projection_matrix(intrinsics)

    sensor_meta = dict(depth_scale=depth_scale)
    config['sensor_meta'] = sensor_meta

    # Note that the sensor must be saved so that it is not garbage collected
    t265_device = dict(pipeline=t265_pipeline, sensor=t265_sensor)

    return pipeline, process_modules, filters, proj_mat, t265_device

@MartyG-RealSense
Collaborator

Python code that works on SDK 2.45.0 should also work on 2.50.0 without having to change it.

I am not involved in support for the T265 model so cannot offer advice on it. Looking at your script though, you seem to have appropriately sectioned the T265 code off with the 'tracking' condition so that the script only accesses it if 'tracking' is true and a T265 device name is detected.

@sivashankar28
Author

Okay, so it looks like we are back to square one.
When using
depth_sensor = pipeline.get_active_profile().get_device().query_sensors()[0]
color_sensor = pipeline.get_active_profile().get_device().query_sensors()[1]
Output:
AttributeError: 'pyrealsense2.sensor' object has no attribute 'get_depth_scale'

When using:
depth_sensor = profile.get_device().first_depth_sensor()
color_sensor = profile.get_device().first_color_sensor()
Output:
AttributeError: 'pyrealsense2.device' object has no attribute 'first_color_sensor'

@MartyG-RealSense
Collaborator

I will take another look at the case tomorrow to see whether I can find any fresh insights. Thanks very much for your continued patience!

@MartyG-RealSense
Collaborator

When using pipeline.get_active_profile().get_device().query_sensors() in the above script, did you try setting depth_scale to a numeric value of 0.001 as suggested in #10002 (comment), instead of retrieving the value from the camera in real-time with depth_sensor.get_depth_scale()?
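One defensive pattern (a sketch of my own, not from the SDK) is to fall back to the hard-coded value only when the sensor object does not expose get_depth_scale — which is the case for the generic pyrealsense2.sensor returned by query_sensors(). The fake sensor classes below are stand-ins for illustration only.

```python
def safe_depth_scale(sensor, default=0.001):
    """Use the sensor's real-time depth scale if available, else a default.

    default=0.001 is the assumed 400 Series depth scale.
    """
    getter = getattr(sensor, "get_depth_scale", None)
    if callable(getter):
        return getter()
    return default

# Works both with a depth sensor that reports its scale...
class FakeDepthSensor:
    def get_depth_scale(self):
        return 0.001

# ...and with a generic sensor object that does not.
class FakeGenericSensor:
    pass

print(safe_depth_scale(FakeDepthSensor()))    # 0.001 (queried)
print(safe_depth_scale(FakeGenericSensor()))  # 0.001 (fallback)
```

If your pyrealsense2 build supports it, casting the generic sensor with sensor.as_depth_sensor() before calling get_depth_scale() may also work, but I have not verified that on this setup.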

@sivashankar28
Author

Yes, I tried that, but it gets stuck after this output:
INFO:root:Attempting to enter advanced mode and upload JSON settings file
INFO:root:Found device that supports advanced mode: 'Intel RealSense D415'
INFO:root:Advanced mode is 'enabled'
INFO:root:Pipeline Created

@MartyG-RealSense
Collaborator

I cannot see code in your script that first checks whether Advanced Mode is supported before attempting to access an Advanced Mode function. If such a check is not included before using an Advanced Mode function, such as loading a JSON file, the risk of an error occurring increases.

The RealSense SDK provides a Python example program for Advanced Mode called python-rs400-advanced-mode-example.py that checks for Advanced Mode support.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/python-rs400-advanced-mode-example.py
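The check in that example boils down to comparing the device's product ID against a list of D400-family ("DS5") IDs. A simplified sketch of just that membership logic is below; the ID set is an illustrative subset I recall from the example, and "0AD3" as the D415's product ID is an assumption, so confirm both against the current file before relying on them.

```python
# Illustrative subset of D400-family ("DS5") product IDs from the SDK example.
DS5_PRODUCT_IDS = {"0AD1", "0AD2", "0AD3", "0AD4", "0AD5", "0AF6",
                   "0AFE", "0AFF", "0B00", "0B01", "0B03", "0B07", "0B3A"}

def product_supports_advanced_mode(product_id: str) -> bool:
    """Return True if the product ID belongs to the D400 ('DS5') family."""
    return product_id.upper() in DS5_PRODUCT_IDS

# At runtime the ID would come from dev.get_info(rs.camera_info.product_id).
print(product_supports_advanced_mode("0AD3"))  # True  (assumed D415 ID)
print(product_supports_advanced_mode("0000"))  # False
```

Guarding the JSON upload with a check like this, before calling the Advanced Mode functions, reduces the risk of the script hanging or erroring on unsupported devices.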

@sivashankar28
Author

sivashankar28 commented Dec 8, 2021

I believe it is in the script above:

else:
    # Ensure device is connected
    ctx = rs.context()
    devices = ctx.query_devices()
    if len(devices) == 0:
        logging.error("No connected Intel Realsense Device!")
        sys.exit(1)

    if config['advanced']:
        logging.info("Attempting to enter advanced mode and upload JSON settings file")
        load_setting_file(ctx, devices, config['advanced'])

Otherwise this code wouldn't work on my machine.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 8, 2021

The above code seems to check whether a RealSense device is attached and therefore included in the ctx device list. If the list length is 0 because no device is attached, it logs 'No connected Intel Realsense Device!'.

It is not clear to me how the code in the if config['advanced']: section highlighted above checks whether Advanced Mode is supported or not. A camera's pipeline can still start if Advanced Mode is disabled. So I interpret the above code as: if 'advanced' is true, then load the JSON file.

@sivashankar28
Author

sivashankar28 commented Dec 10, 2021

This is the full code:
https://github.com/sivashankar28/polylidar-realsense-wheelchair/blob/jeremy/surfacedetector/curblinedetection.py

I am not sure if the color sensor is being retrieved correctly.

AttributeError: 'pyrealsense2.sensor' object has no attribute 'get_depth_scale'

@sivashankar28
Author

I think I figured out the problem, but I am not sure of the correct solution.
I think it has to do with the .bashrc file paths. I am using an Archiconda environment to run everything, and currently .bashrc has:

export PATH=$PATH:~/.local/bin
export PYTHONPATH=$PYTHONPATH:/usr/local/lib
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python3.6/pyrealsense2
Therefore, do these file paths need to be redirected to the associated files in the Archiconda folder? What does each of these export lines correspond to? That way I can reroute the paths to the correct location.

@MartyG-RealSense
Collaborator

As you mentioned earlier in this case, using the command which python3 provides you with this path:

/home/XYZ/archiconda3/envs/XYZ/bin/python3

This path can be used in the CMake build instruction with the build flag -DPYTHON_EXECUTABLE to point the build of pyrealsense2 to the Python 3 version that the computer is using. For example:

-DPYTHON_EXECUTABLE=/home/XYZ/archiconda3/envs/XYZ/bin/python3

Alternatively, which python3 can be incorporated into the -DPYTHON_EXECUTABLE flag to retrieve the path automatically:

-DPYTHON_EXECUTABLE=$(which python3)

For the PYTHONPATH in the bashrc file, the Python wrapper documentation suggests using:

export PYTHONPATH=$PYTHONPATH:/usr/local/lib

You could therefore try removing this line from the bashrc file:

export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python3.6/pyrealsense2

And then finally 'sourcing' the bashrc file with the command source ~/.bashrc

#2496 (comment) is an example where this configuration of PATH and PYTHONPATH worked successfully. That discussion also suggests using the command echo $PYTHONPATH to check which path to use for PYTHONPATH on your particular computer.


In regard to the meanings of the paths: my understanding is that ~/.local/bin is a hidden folder (as indicated by the '.' in front of the folder name) that is equivalent to the visible /usr/local/bin folder, where executable binaries are placed when programs are compiled into builds.

PYTHONPATH, meanwhile, indicates the path to look on the computer for Python modules.
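A small sanity-check sketch (standard-library Python only) that prints where the current interpreter lives and what it will search for modules. This is useful for confirming whether the conda environment and the bashrc PYTHONPATH entries agree after sourcing.

```python
import os
import sys

def interpreter_summary() -> dict:
    """Collect the interpreter path, PYTHONPATH, and module search path."""
    return {
        "executable": sys.executable,                    # should match `which python3`
        "pythonpath": os.environ.get("PYTHONPATH", ""),  # the bashrc exports, if sourced
        "search_path": list(sys.path),                   # where imports are resolved from
    }

info = interpreter_summary()
print("interpreter:", info["executable"])
print("PYTHONPATH :", info["pythonpath"] or "(unset)")
```

If /usr/local/lib does not appear in the search path when this is run from the conda environment, the exported PYTHONPATH is not reaching that interpreter.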

@MartyG-RealSense
Collaborator

Hi @sivashankar28 Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
