Low Refine Correct Rate #20

Open
qsisi opened this issue Nov 11, 2024 · 4 comments

qsisi commented Nov 11, 2024

I added the following lines to evaluate the refine correct rate of the fine module:

import numpy as np

def lidar2uvs(xyz, lidar2cam, K):
    # Transform the LiDAR points into the camera frame, normalize by depth,
    # then apply the intrinsics to obtain pixel coordinates.
    cam_pts = (lidar2cam[:3, :3] @ xyz.T + lidar2cam[:3, 3:]).T
    cam_pts = cam_pts / cam_pts[:, 2:]
    uvs = (K @ cam_pts.T).T
    return uvs[:, :2]

# Ground-truth projections of the coarse points vs. the coarse centers and the refined predictions.
gt_uvs = lidar2uvs(coarse_pc_points.cpu().numpy(), P, K.cpu().numpy())
coarse_uvs = fine_center_xy.T.cpu().numpy()
refine_uvs = fine_xy.T.cpu().numpy()
# A refinement counts as correct if it lands closer to the ground truth than the coarse center does.
refine_correct = np.linalg.norm(gt_uvs - refine_uvs, axis=-1) < np.linalg.norm(gt_uvs - coarse_uvs, axis=-1)

The mean refine_correct rate, averaged over 5583 samples from the KITTI test set, is 0.3325, which seems quite low to me.
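For reference, a minimal sketch of how I average this over the test split (the loader and helper names below are placeholders for my own evaluation loop, not identifiers from this repository):

import numpy as np

per_sample_rates = []
for sample in kitti_test_loader:                      # placeholder: iterates the 5583 KITTI test samples
    refine_correct = compute_refine_correct(sample)   # placeholder: runs the snippet above for one sample
    per_sample_rates.append(refine_correct.mean())    # fraction of correspondences improved by refinement

mean_refine_correct_rate = float(np.mean(per_sample_rates))   # the 0.3325 reported above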

Could you provide some hints about it?

@martin-liao (Collaborator)

Thanks for your question, and apologies for my late response.
As mentioned in the issue, the coarse correspondences are more accurate than the fine ones. We have used the coarse-level correspondences for registration and found that the registration accuracy is higher than when using the fine-level ones:

RRE = 1.09°, RTE = 0.27 m

We speculate that this is because the coarse-level features are more representative than the fine-level ones. At the coarse level, the self-attention module encodes spatial and geometric information for each superpixel and superpoint, while the cross-attention module injects geometric structure into the image features and texture information into the point-cloud features. However, this rich information is not effectively propagated to the fine-level features.

We plan to concatenate or add the coarse-level features to the fine-level features to boost performance. Please be patient.
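For illustration only, a minimal sketch of one way such a fusion could look, assuming per-superpoint (or per-superpixel) coarse features and a known fine-to-coarse assignment; the module and variable names are illustrative and not part of this repository:

import torch
import torch.nn as nn

class CoarseFineFusion(nn.Module):
    """Sketch: broadcast coarse features to the fine level and concatenate them."""

    def __init__(self, coarse_dim, fine_dim):
        super().__init__()
        # project the concatenated features back to the fine feature dimension
        self.proj = nn.Linear(coarse_dim + fine_dim, fine_dim)

    def forward(self, fine_feats, coarse_feats, fine_to_coarse):
        # fine_feats:     (N, fine_dim)   per-point / per-pixel fine features
        # coarse_feats:   (M, coarse_dim) per-superpoint / per-superpixel coarse features
        # fine_to_coarse: (N,)            index of the coarse element each fine element belongs to
        gathered = coarse_feats[fine_to_coarse]            # broadcast coarse context to the fine level
        fused = torch.cat([fine_feats, gathered], dim=-1)  # concatenation variant ("add" would also work)
        return self.proj(fused)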

@martin-liao (Collaborator)

The coarse-to-fine pipeline proves effective for I2P registration, as shown in Table 3. Multi-scale supervision guides the model in constructing robust and accurate correspondences at different resolutions.

qsisi (Author) commented Dec 3, 2024

In Table 3, coarse-level-only matching achieves 1.35° / 0.34 m, so what is the difference between the Table 3 experiment and this:

"We have used the coarse-level correspondences for registration and found that the registration accuracy is higher than when using the fine-level ones. RRE = 1.09°, RTE = 0.27 m"?

Thanks for your reply.

@martin-liao (Collaborator)

During training, the coarse-only variant in Tab. 3 is supervised only by the coarse-level descriptor/detector losses, without any fine-level loss.
In contrast, the result quoted here uses the full loss for supervision throughout training.
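For clarity, a rough sketch of the two supervision settings being compared (the term names are assumptions, not the repository's exact loss implementation):

def total_loss(terms, use_fine_supervision=True, w_fine=1.0):
    # terms: dict of per-level losses, e.g.
    #   {"coarse_desc": ..., "coarse_det": ..., "fine_desc": ..., "fine_det": ...}
    loss = terms["coarse_desc"] + terms["coarse_det"]      # coarse-level descriptor/detector supervision
    if use_fine_supervision:                               # full loss, as used for the result in this thread
        loss = loss + w_fine * (terms["fine_desc"] + terms["fine_det"])
    return loss                                            # Tab. 3 coarse-only setting: use_fine_supervision=False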
