Adding place_recognition to pcl #188
Conversation
According to your suggestion, I have separated them from the GUI and revised them to follow the other algorithms. After the pull request, the build took a long time and unfortunately ended with a timeout error from the g++ compiler. How should I solve it?
I'm still not convinced this really belongs in PCL; let's have a look at it one by one.
If you really want parts of this in PCL, please point out which algorithms are relevant.
bearing_angle.h:
Q 1: In which way is this different from pcl/range_image/range_image.h? [1] B. Steder, R. Rusu, K. Konolige, and W. Burgard, "Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries," in Proc. of the IEEE International Conference on Robotics and Automation, pp. 2601-2608, Shanghai, China, May 2011.
Q 2: Could we incorporate this into the range_image.h classes?
scene_cognition.cpp:
Could you model bearing_angle.h similarly to range_image.h?
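(For illustration only: a minimal sketch of what a bearing-angle image class modeled on pcl::RangeImage might look like. The class name, method signatures, and angle formula below are assumptions made for this example, not the code in this PR.)

```cpp
// Hypothetical sketch: a bearing-angle image stored as an organized point cloud,
// mirroring how pcl::RangeImage derives from pcl::PointCloud<PointWithRange>.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <cmath>

class BearingAngleImage : public pcl::PointCloud<pcl::PointXYZRGBA>
{
  public:
    // Compute the bearing angle between two neighboring laser points.
    // The gray value of each image pixel would be derived from this angle.
    static double
    getAngle (const pcl::PointXYZ& p1, const pcl::PointXYZ& p2)
    {
      double a = p1.getVector3fMap ().norm ();                            // range of first point
      double b = p2.getVector3fMap ().norm ();                            // range of second point
      double c = (p1.getVector3fMap () - p2.getVector3fMap ()).norm ();   // distance between them
      if (a == 0.0 || c == 0.0)
        return 0.0;
      // Law of cosines: angle at p1 between the laser beam direction and the
      // segment joining the two points.
      return std::acos ((a * a + c * c - b * b) / (2.0 * a * c));
    }

    // Fill this organized cloud from an organized input scan, encoding the
    // bearing angle of each point as a gray value (declaration only in this sketch).
    void
    generateBAImage (const pcl::PointCloud<pcl::PointXYZ>& scan);
};
```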
I would vote for keeping this separate from PCL. Cheers, Jochen
Ok. I will do it according to your suggestion.
Ok. I agree with your comment. Cheers
Superseded by #192.
Active environment perception and autonomous place recognition play a key role in enabling mobile robots to operate in cluttered indoor environments with dynamic changes.
place_recognition provides a novel 3D-laser-based indoor place recognition method to deal with the random disturbances caused by unexpected movements of people and other objects.
The proposed approach extracts and matches Speeded-Up Robust Features (SURF) from bearing-angle images generated by a 3D laser scanner. It can cope with the irregular disturbance of moving objects and with changes in the laser scanner's observation location. Global metric information and local SURF features are extracted from the 3D laser point clouds and the 2D bearing-angle images, respectively.
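(As an illustration of that pipeline, the sketch below matches SURF features between two bearing-angle images using OpenCV. The helper name matchBearingAngleImages, the SURF hessian threshold, and the ratio-test value are assumptions for the example, not the implementation in this PR.)

```cpp
// Illustrative sketch: match SURF features between two bearing-angle images
// (8-bit gray images rendered from 3D laser scans) and count good matches,
// which can serve as a similarity score between two observed places.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>

int
matchBearingAngleImages (const cv::Mat& ba_image1, const cv::Mat& ba_image2)
{
  // Detect keypoints and compute SURF descriptors on both images.
  cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create (400.0);
  std::vector<cv::KeyPoint> keypoints1, keypoints2;
  cv::Mat descriptors1, descriptors2;
  surf->detectAndCompute (ba_image1, cv::noArray (), keypoints1, descriptors1);
  surf->detectAndCompute (ba_image2, cv::noArray (), keypoints2, descriptors2);

  if (descriptors1.empty () || descriptors2.empty ())
    return 0;

  // k-NN matching followed by Lowe's ratio test to reject ambiguous matches,
  // e.g. those caused by moving people and other dynamic objects.
  cv::FlannBasedMatcher matcher;
  std::vector<std::vector<cv::DMatch> > knn_matches;
  matcher.knnMatch (descriptors1, descriptors2, knn_matches, 2);

  int good_matches = 0;
  for (std::size_t i = 0; i < knn_matches.size (); ++i)
    if (knn_matches[i].size () == 2 &&
        knn_matches[i][0].distance < 0.7f * knn_matches[i][1].distance)
      ++good_matches;

  return good_matches;
}
```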
A large-scale indoor environment with over 1600 m² of floor space and 30 offices is selected as the testing site, and a mobile robot (SmartROB2) is deployed to conduct the experiments. Experimental results show that the proposed 3D-laser-based scene measurement technique and place recognition approach are effective and provide robust place recognition in a dynamic indoor environment.