AttributeError: 'pyrealsense2.device' object has no attribute 'first_color_sensor' #10002
Hi @sivashankar28 Have you installed the pyrealsense2 Python compatibility 'wrapper', please? The pyrealsense2 wrapper is not included in the librealsense SDK by default and has to be deliberately specified for inclusion. This can be done by including the build term -DBUILD_PYTHON_BINDINGS:bool=true when building librealsense from source code with CMake. An example of instructions for building librealsense for Python 3.6 with pyrealsense2 included in the build can be found at #6964 (comment) The Python path (PYTHONPATH) on Conda may be different from the one that is normally used with Python, though. You should be able to find the correct Python path for Conda with the command which python3
Okay, when I type import pyrealsense2 in Python in my conda environment it seems to be working with no errors. When I type which python3, however, I am not sure how to change the conda Python path to "/usr/local/lib", if that is what I should do
As the error that you experienced was AttributeError: 'pyrealsense2.device' object has no attribute 'first_color_sensor', this makes me think that the pyrealsense2 installation may be okay and that the problem is instead in the Python script, in how the 'first_color_sensor' instruction is called. If there were a path problem then I would expect to see an error such as No module named 'pyrealsense2'. So let's first eliminate the possibility that there is an error in the code. Have you defined a profile instruction in your pipe start statement and then called on profile in your first_color_sensor line, like in the example of Python code below?
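(The Python example originally attached to this comment is not preserved in this thread. Below is a minimal sketch of the pattern being described: capturing the profile returned by pipe.start() and calling first_color_sensor() on its device. The stream parameters, function name, and the guarded import are illustrative assumptions, not part of the original comment.)

```python
# Minimal sketch (assumes pyrealsense2 is installed and a camera with an
# RGB sensor is attached; the import guard lets the code be read without it).
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None

def start_and_get_color_sensor():
    """Start a pipeline, then fetch the color sensor via the returned profile."""
    pipe = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    profile = pipe.start(cfg)          # the profile must be captured here
    device = profile.get_device()
    color_sensor = device.first_color_sensor()
    return pipe, color_sensor

if __name__ == "__main__" and rs is not None:
    pipe, sensor = start_and_get_color_sensor()
    print(sensor.get_info(rs.camera_info.name))
    pipe.stop()
```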
Could you also tell me which RealSense camera model you are using? If it were a caseless Depth Module Kit board model without an RGB sensor, such as D430, then it could cause a failure to detect a color sensor if a script called on it.
I know the code itself works, as I have used it on my Linux-based laptop, and I have the information that you described for the pipe instruction.
Do you also get an error if you change first_color_sensor to first_depth_sensor to look for the depth sensor instead? This would help to determine whether there is a problem with detecting the color sensor specifically or both depth and color.
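(A hypothetical helper, not from the original thread, showing that the suggested swap is a one-line change between the two first_* calls; the function name is invented for illustration.)

```python
def get_first_sensor(device, kind="color"):
    """Return the depth or color sensor from a device object that exposes
    first_depth_sensor() / first_color_sensor(), as pyrealsense2 devices do."""
    if kind == "depth":
        return device.first_depth_sensor()
    return device.first_color_sensor()
```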
Okay, I tried both:
I am not familiar with CuPy, but my research of it indicates that it is related to CUDA graphics acceleration on computers / computing devices with an Nvidia GPU, such as Jetson boards. What method did you use to install librealsense on your Jetson AGX? librealsense can support CUDA, but that support needs to be enabled. If librealsense was installed from packages using the instructions on Intel's installation_jetson.md Jetson installation page then CUDA support should be included in the packages. If librealsense is built from source code with CMake though, then the build term -DBUILD_WITH_CUDA=true should be included in the CMake build instruction to enable CUDA support.
It was built from source, and the CUDA 10.2 setup when using cmake was correct, I think. I think I need cupy 8.2 in my conda environment. This is what prints out, but it gets stuck and nothing else happens. When I uncomment it all, I still receive the same error
Would it be possible to post your Python script in a comment on this discussion, please? I note though that you mentioned in #10002 (comment) that the script worked correctly on a Linux-based laptop, suggesting that the code is fine and it is a Jetson-specific issue. To confirm: when the script is run on the Linux laptop, it does not become 'stuck' and produces an output?
Hi @sivashankar28 Do you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense I am still having the same issue; cupy has been installed, but I am not sure if it's a CuDNN issue. The code works perfectly well with an output on the laptop.
Is the laptop also using Python 3.6 and Archiconda?
Yes it is
I looked through the case again from the beginning. As you mention at the start, it does seem to be centered around first_color_sensor. Are you able to confirm whether the color stream can be accessed on your Jetson AGX, either by running an application such as the RealSense Viewer or by testing another pyrealsense2 script that makes use of the color stream?
Actually, the colour stream works in realsense-viewer but it states "Incomplete frame received: Incomplete video frame detected!". However, I switched the RealSense camera to a PC and no such error comes up.
I have solved the incomplete video frame problem by reducing the resolution in realsense-viewer, but I still have the same color issue as before
It's good to hear that you resolved the incomplete frame problem. As background information to better understand why that particular problem may occur, #927 (comment) is a useful reference.
Thank you; please let me know if you have any thoughts about the color sensor issue.
Instead of using first_depth_sensor and first_color_sensor, an alternative way to access a particular sensor is with the pipeline.get_active_profile().get_device().query_sensors()[index number] instruction. The depth sensor can be accessed with an index number of '0' whilst the RGB sensor is accessed with an index of '1'. #4449 (comment) has an example of Python code for accessing the RGB sensor with the index number [1]. So you could take the following approach to defining the sensors.
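(A sketch of the index-based access described above; the index constants follow the depth-at-0, RGB-at-1 convention mentioned in the comment, and the helper name is invented for illustration.)

```python
DEPTH_SENSOR_INDEX = 0  # depth sensor is typically at index 0
COLOR_SENSOR_INDEX = 1  # RGB sensor is typically at index 1

def get_sensors_by_index(pipeline):
    """Access sensors by list index instead of calling
    first_depth_sensor() / first_color_sensor() on the device."""
    sensors = pipeline.get_active_profile().get_device().query_sensors()
    return sensors[DEPTH_SENSOR_INDEX], sensors[COLOR_SENSOR_INDEX]
```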
Okay, I think it could work, as I am now getting:
If your project is only going to be used with 400 Series cameras and not the L515 then you could hard-code the 400 Series cameras' default depth scale value of 0.001 for the depth scale instead of retrieving it from the camera in real-time with depth_sensor.get_depth_scale(). For example:
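(The example code for this comment is missing from the thread; below is a minimal sketch of hard-coding the 400 Series default depth scale, with a hypothetical conversion helper added for illustration.)

```python
# 400 Series default depth scale: one depth unit = 0.001 metres (1 mm).
D400_DEPTH_SCALE = 0.001

def depth_units_to_meters(raw_depth_value, depth_scale=D400_DEPTH_SCALE):
    """Convert a raw 16-bit depth value to metres using the hard-coded scale,
    instead of querying the camera with depth_sensor.get_depth_scale()."""
    return raw_depth_value * depth_scale
```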
I will be using the D400 and L515 cameras. For now I changed the depth scale to the above, and it gets stuck after this: INFO:root:Attempting to enter advanced mode and upload JSON settings file. I realized the API on my laptop is different from the one on the Jetson.
The device management script 'realsense_device_manager.py' in the SDK's box_dimensioner_multicam Python example project demonstrates, in the code section highlighted in the link below, how a different value can be set depending on whether an L515 or a 400 Series camera is attached. If you installed librealsense from source code on Jetson - as indicated by #10002 (comment) - then if you wanted to build 2.45.0 from source code on the AGX with the 2.45.0 Python wrapper, I would think that you could download the 2.45.0 source code folder and use it with CMake to build librealsense and the Python wrapper at the same time. https://github.com/IntelRealSense/librealsense/releases/tag/v2.45.0 My understanding is that if you had already built 2.50.0 from source code on your AGX and want to uninstall it before installing 2.45.0, then you would go to the build folder of the 2.50.0 SDK and uninstall the SDK and clean the CMake cache (the command usually suggested for this is sudo make uninstall && make clean).
I thought it might be best to upgrade the code instead of downgrading. I will want to switch between using the D400 and L515 cameras, but not at the same time. This is my current create pipeline code, but I am not sure what to change to make sure it is compatible with the new API: def create_pipeline(config: dict):
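(The body of the user's create_pipeline function was not preserved in the thread. The sketch below is purely hypothetical and is not the original script; the config keys, defaults, and stream parameters are invented to show one common shape for such a function.)

```python
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # pyrealsense2 not installed; sketch is illustrative only

def stream_params(config: dict):
    """Pull width/height/fps from a config dict, with assumed defaults."""
    return (config.get("width", 640), config.get("height", 480), config.get("fps", 30))

def create_pipeline(config: dict):
    """Hypothetical sketch: build and start a depth+color pipeline from a config dict."""
    w, h, fps = stream_params(config)
    pipeline = rs.pipeline()
    rs_config = rs.config()
    rs_config.enable_stream(rs.stream.depth, w, h, rs.format.z16, fps)
    rs_config.enable_stream(rs.stream.color, w, h, rs.format.bgr8, fps)
    profile = pipeline.start(rs_config)
    return pipeline, profile
```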
Python code that works on SDK 2.45.0 should also work on 2.50.0 without having to change it. I am not involved in support for the T265 model so cannot offer advice on it. Looking at your script though, you seem to have appropriately sectioned the T265 code off with the 'tracking' condition so that the script only accesses it if 'tracking' is true and a T265 device name is detected.
Okay, so it looks like we are back to square one. When using:
I will take another look at the case tomorrow to see whether I can find any fresh insights. Thanks very much for your continued patience!
When using pipeline.get_active_profile().get_device().query_sensors() in the above script, did you try setting depth_scale to a numeric value of 0.001, as suggested in #10002 (comment), instead of retrieving the value from the camera in real-time with depth_sensor.get_depth_scale()?
Yes, I tried that, but it gets stuck after this output:
I cannot see code in your script that checks first whether Advanced Mode is supported before attempting to access an Advanced Mode function. If such a check is not included before using an Advanced Mode function such as loading a json, then the risk of an error occurring increases. The RealSense SDK provides a Python example program for Advanced Mode called python-rs400-advanced-mode-example.py that checks for Advanced Mode support.
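(A sketch of the kind of check being described, loosely based on the approach in python-rs400-advanced-mode-example.py; the function name and the simplified reconnection handling are assumptions, not the SDK example's exact code.)

```python
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None  # pyrealsense2 not installed; sketch is illustrative only

def load_json_with_advanced_mode_check(dev, json_text):
    """Check and, if necessary, enable Advanced Mode before loading a JSON preset."""
    advnc_mode = rs.rs400_advanced_mode(dev)
    if not advnc_mode.is_enabled():
        advnc_mode.toggle_advanced_mode(True)
        # The device disconnects and reconnects after toggling; a full script
        # should wait and re-query the device here, as the SDK example does.
    advnc_mode.load_json(json_text)
```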
I believe it is in the script above:
Otherwise this code wouldn't work on my machine
The above code seems to be checking whether there is a RealSense device attached and therefore included in the ctx device list. If the list length is 0 because there is no device attached, then it logs 'No connected Intel Realsense Device'. It is not clear to me how the code in the if config['advanced']: section highlighted above checks whether Advanced Mode is enabled or not. A camera's pipeline can still start if Advanced Mode is disabled. So I interpret the above code as: if 'advanced' = true, then load the json file.
This is the full code: I am not sure if the color sensor is being retrieved correctly. AttributeError: 'pyrealsense2.sensor' object has no attribute 'get_depth_scale'
I think I figured out the problem but am not sure of the correct solution. export PATH=$PATH:~/.local/bin
As you mentioned earlier in this case, using the command which python3 provides you with this path: /home/XYZ/archiconda3/envs/XYZ/bin/python3

This path can be used in the CMake build instruction with the build flag -DPYTHON_EXECUTABLE to point the build of pyrealsense2 to the Python 3 version that the computer is using. For example: -DPYTHON_EXECUTABLE=/home/XYZ/archiconda3/envs/XYZ/bin/python3

Alternatively, 'which python3' can be incorporated into the -DPYTHON_EXECUTABLE flag to retrieve the path automatically: -DPYTHON_EXECUTABLE=$(which python3)

For the PYTHONPATH in the bashrc file, the Python wrapper documentation suggests using: export PYTHONPATH=$PYTHONPATH:/usr/local/lib

You could therefore try removing this line from the bashrc file: export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python3.6/pyrealsense2

And then finally 'sourcing' the bashrc file with the command source ~/.bashrc

#2496 (comment) is an example where this configuration of PATH and PYTHONPATH worked successfully. That discussion also suggests using the command echo $PYTHONPATH to check which path to use for PYTHONPATH on your particular computer.

In regard to the meanings of the paths: my understanding is that ~/.local/bin is a hidden folder (as indicated by the '.' in front of the folder name) that is equivalent to the visible /usr/local/bin folder where executable binaries are placed when programs are compiled into builds. PYTHONPATH, meanwhile, indicates where on the computer to look for Python modules.
Hi @sivashankar28 Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received. |
Hi, I am trying to install librealsense on my NVIDIA AGX, but after running through an install and executing my program
I keep getting this error
AttributeError: 'pyrealsense2.device' object has no attribute 'first_color_sensor'
Using: Python 3.6 in an Archiconda environment
Does anyone know how to resolve this?