Reproducing Results on the https://www.visuallocalization.net/details/17831/ Benchmark #20
$ python -m pixloc.run_Cambridge
Std output (KingsCollege scene only):
[11/26/2021 18:46:06 pixloc INFO] Parsed configuration:
experiment: pixloc_megadepth
features: {}
optimizer:
num_iters: 100
pad: 2
refinement:
num_dbs: 5
multiscale:
- 4
- 1
point_selection: all
normalize_descriptors: true
average_observations: true
filter_covisibility: false
do_pose_approximation: false
[11/26/2021 18:46:06 pixloc INFO] Working on scene KingsCollege.
[11/26/2021 18:46:06 pixloc.localization.model3d INFO] Reading COLMAP model /home/ajay/pixloc/outputs/hloc/Cambridge/KingsCollege/sfm_superpoint+superglue.
[11/26/2021 18:46:08 pixloc.utils.io INFO] Imported 343 images from query_list_with_intrinsics.txt
[11/26/2021 18:46:08 pixloc.pixlib.utils.experiments INFO] Loading checkpoint checkpoint_best.tar
[11/26/2021 18:46:11 pixloc.localization.localizer INFO] Starting the localization process...
47%|██████████████████████████████████████████▉ | 162/343 [12:05<11:33, 3.83s/it][11/26/2021 18:58:18 pixloc.localization.base_refiner INFO] Optimization failed for query seq2/frame00033.png
100%|███████████████████████████████████████████████████████████████████████████████████████████| 343/343 [23:46<00:00, 4.16s/it]
[11/26/2021 19:09:57 pixloc.utils.io INFO] Writing the localization results to /home/ajay/pixloc/outputs/results/pixloc_Cambridge_KingsCollege.txt.
[11/26/2021 19:09:57 pixloc INFO] Evaluate scene KingsCollege: /home/ajay/pixloc/outputs/results/pixloc_Cambridge_KingsCollege.txt
[11/26/2021 19:09:57 pixloc.utils.eval INFO]
Median errors: 0.128m, 0.228deg
Percentage of test images localized within:
1cm, 1deg : 0.29%
2cm, 2deg : 1.46%
3cm, 3deg : 4.37%
5cm, 5deg : 14.29%
25cm, 2deg : 73.47%
50cm, 5deg : 87.76%
500cm, 10deg : 96.21%
Here the median errors are 12.8cm/0.228deg, which differ from the values claimed in the paper (14cm/0.24deg, or 13cm/0.24deg with the oracle prior). Please review this and help us understand the reason for this deviation. Similarly, the 7Scenes results also differ from the claimed values.
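For reference, the median errors and recall percentages reported in the log above can be recomputed from per-query pose errors. A minimal sketch (the error values below are made up for illustration, not the actual KingsCollege results):

```python
import numpy as np

# Hypothetical per-query errors: translation in metres, rotation in degrees.
# Illustrative values only, not the actual KingsCollege results.
t_err = np.array([0.008, 0.05, 0.12, 0.30, 2.0])
r_err = np.array([0.5, 1.5, 0.2, 4.0, 12.0])

print(f"Median errors: {np.median(t_err):.3f}m, {np.median(r_err):.3f}deg")
print("Percentage of test images localized within:")
for t_thr, r_thr in [(0.25, 2.0), (0.50, 5.0), (5.0, 10.0)]:
    # A query counts as localized only if BOTH thresholds are met.
    recall = 100 * np.mean((t_err <= t_thr) & (r_err <= r_thr))
    print(f"{100 * t_thr:.0f}cm, {r_thr:.0f}deg : {recall:.2f}%")
```

Note that the recall is computed jointly: a query within 25cm but outside 2deg does not count, which is why the 25cm/2deg bucket can sit below the 5cm/5deg one is impossible but tight rotation thresholds can dominate.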
@fulkast: This is indeed abnormal, let me run this again and dig into it. Did you also run the evaluation without
Which is slightly higher than what is reported in the original paper. @patelajaychh Please don't hijack this issue; open a new one to discuss the Cambridge/7Scenes results instead. The README explicitly mentions that the results might differ slightly, and they generally improve rather than degrade. Given the rounding, the results you obtain are fairly consistent. Does it consistently get worse on other scenes? What difference do you observe with 7Scenes?
Hi, @skydes. Yes, I've also run the evaluation without the flag
Does this slice list look correct for the evaluation on the Extended CMU dataset:
On my end running
Thanks for checking this! I will re-run the experiment and report back.
Hi @skydes, Following your suggestion, I've re-run the evaluation on the urban category and got the results:
For more context, I'm running on commit
What is the
Hi! Sorry, I should have elaborated on the last post. To double-check, here's the sha1sum of the
That said, I will share the loss curves from my training experiment (the one called
Please let me know in case anything looks suspicious.
[Edit]: If possible, may I ask what metrics you get when you run
Best,
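Comparing checkpoints across machines with `sha1sum`, as done above, is a quick way to rule out mismatched weights. A small sketch; the checkpoint path at the end is hypothetical, so the self-contained part hashes a fixed string whose digest is known:

```shell
# Self-contained demo: hash a fixed string with a known SHA-1 digest.
expected="aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d"  # sha1("hello")
actual=$(printf '%s' hello | sha1sum | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
    echo "checksums match"
else
    echo "checksums differ"
fi

# For a real checkpoint, compare the digest on both machines
# (path below is hypothetical, substitute your own):
#   sha1sum checkpoint_best.tar
```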
Hello @fulkast! I ran:
caochengyang@caochengyang-Lenovo-Legion-R9000P2021H:~/pixloc$ python3 -m pixloc.run_Aachen --from_poses 0
Have you encountered this? Can you help me solve it? Thanks!
@caochengyang0828 I will respond to the new issue you started. |
@fulkast This all looks good, and my setup seems to match on every point, so I really have no idea what's happening here. Here is what I get for slice 2 only:
Thanks, @skydes. Yeah, I get
Again, this is obtained by running the following on a fresh clone of the repo:
Hi @skydes,
Looking forward to the final solution to this problem, which has been bothering me for a long time. T.T Thank you very much @fulkast
Hello @fulkast, @skydes. I installed the dependencies with pip3 install -r requirements.txt. Could the discrepancy be caused by the library versions it resolves? Thank you so much for answering this question, it has bothered me for a long time!
Happy holidays all! I am away from work now and won't be making hands-on progress on this issue. In the meantime, however, I did come across this PyTorch issue that looks like it might be related: pytorch/pytorch#70162. I will report more on this in the new year.
Hi all! A happy new year to you all, and I've got some potentially good news :) While implementing some local tests, I noticed that the results of my tensor matmul operations can differ significantly depending on whether they run on the GPU or the CPU. Long story short, this led me to this note here: after setting
at the top of
Hi @skydes I'm happy to share that I've been able to reproduce your results on slice 2, and I'm hence confident that I should be able to reproduce the results for the rest of the dataset as well. It came down to the PyTorch issue mentioned in this link. Essentially, I was already getting positional Euclidean errors on the order of 10cm when transforming 3-D points into the camera frame, because by default I
Thank you for your feedback on my questions; your data was helpful in letting me home in on the source of the problem!
Best,
Hi @skydes
Thank you for sharing your implementation and the tools surrounding this localization framework!
I have been trying to reproduce the results of hloc + pixloc on the visual localization benchmark for the Extended CMU dataset. However, I haven't been able to get results close to the values seen on the linked benchmark. The values I'm currently getting are:
![Untitled presentation (2)](https://user-images.githubusercontent.com/9142922/143388531-04416a65-1c4e-49bc-a1cc-1adb2a1de321.png)
Locally, I've downloaded the pixloc_cmu pre-trained weights hosted here, and I'm running the following command:
python -m pixloc.run_CMU --from_poses
which, after hours of running, terminates with the following message (truncated):
[11/25/2021 00:32:21 pixloc INFO] Finished evaluating all slices, you can now submit the file /home/frank/github/pixloc/outputs/results/pixloc_CMU_slice2-3-4-5-6-13-14-15-16-17-18-19-20-21.txt to https://www.visuallocalization.net/submission/
I'm assuming that
--from_poses
runs the evaluation starting from the hloc poses; is this correct? Also, do you have any pointers on what I might be doing wrong?