Low Refine Correct Rate #20
Thanks for your question, and apologies for my late response.
We speculate that this is because the coarse-level features are more representative than the fine-level features. For the coarse-level features, the self-attention module encodes the spatial and geometric information for each superpixel and superpoint, while the cross-attention module injects geometric structure and texture information across the image and point cloud, respectively. However, this rich information is not effectively propagated to the fine-level features. We plan to concatenate or add the powerful coarse-level features to the fine-level features to boost the performance. Please be patient.
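The proposed fix (injecting coarse-level features into the fine level) could be sketched as a simple upsample-and-concatenate fusion. This is only a minimal illustration, not the repository's actual code: the function name `fuse_coarse_to_fine`, the tensor layouts, and the nearest-neighbor upsampling are all assumptions.

```python
import numpy as np

def fuse_coarse_to_fine(feat_c, feat_f):
    """Fuse coarse-level features into fine-level ones (illustrative sketch).

    feat_c: (B, C_c, H_c, W_c) coarse feature map
    feat_f: (B, C_f, H_f, W_f) fine feature map, with H_f / W_f
            integer multiples of H_c / W_c (assumed layout).
    Returns a (B, C_c + C_f, H_f, W_f) concatenated feature map.
    """
    _, _, h_c, w_c = feat_c.shape
    _, _, h_f, w_f = feat_f.shape
    # Nearest-neighbor upsample the coarse map to the fine resolution.
    feat_c_up = feat_c.repeat(h_f // h_c, axis=2).repeat(w_f // w_c, axis=3)
    # Concatenate along the channel dimension.
    return np.concatenate([feat_c_up, feat_f], axis=1)
```

In a real network one would likely use learned upsampling (e.g. bilinear interpolation plus a projection layer) rather than nearest-neighbor repetition, but the channel-concatenation idea is the same.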
The coarse-to-fine pipeline proves effective for I2P registration, as shown in Table 3. Multi-scale supervision guides the model in constructing robust and accurate correspondences at different resolutions.
In Table 3, coarse-level-only matching achieves 1.35° + 0.34 m, so what is the difference between the Table 3 experiment and this statement: "We have used the coarse-level correspondences for registration and found that the registration accuracy is higher compared to using the fine-level ones."? Thanks for your reply.
During training, the
I added the following lines to evaluate the refine correct rate of the fine module:
The mean refine_correct rate over 5583 samples on the KITTI test set is 0.3325, which seems quite low to me.
Could you provide some hints about this?
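For reference, a refine correct rate of this kind is typically the fraction of refined 2D correspondences that land within a pixel threshold of their ground-truth projections, averaged over test samples. A minimal sketch of such a metric follows; the function name `refine_correct_rate`, the array layout, and the 1-pixel threshold are assumptions for illustration, not the evaluation code actually used in this issue.

```python
import numpy as np

def refine_correct_rate(pred_uv, gt_uv, thresh=1.0):
    """Fraction of refined correspondences within `thresh` pixels of ground truth.

    pred_uv, gt_uv: (N, 2) arrays of predicted and ground-truth pixel coordinates.
    """
    err = np.linalg.norm(pred_uv - gt_uv, axis=1)  # per-correspondence pixel error
    return float((err < thresh).mean())

def mean_refine_correct_rate(samples, thresh=1.0):
    """Average the per-sample rate over a list of (pred_uv, gt_uv) pairs."""
    rates = [refine_correct_rate(p, g, thresh) for p, g in samples]
    return float(np.mean(rates))
```

Under this definition a mean rate of 0.3325 would indeed mean that only about a third of the refined matches fall within the threshold, so whether it is "low" depends heavily on the threshold and coordinate convention used.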