Different results on Evaluation with the same pretrained model #25
Some details are as follows:

7Scenes (with the author's model, `pixloc_7scenes_chess.txt`):

Cambridge:

There is another possibility. Just a recommendation: for one way to check, may you type ... Sorry to bother you. Thank you!
7Scenes: It turns out that the default path pointed to the raw SuperPoint+SuperGlue SfM model, while the results reported in the paper are based on a point cloud cleaned up using dense depth maps. This has now been fixed by 0072dc7. Here are the results that you should get:
Cambridge: Let me run these numbers again.

CMU: Let's track this issue in #20.

Aachen: Let's keep this in #23. I feel that this is a similar setup issue as with the CMU dataset.
These are the results that I obtain for Cambridge Landmarks:
Again there seems to be a discrepancy between your setup and mine. All experiments were conducted with an RTX 2080 Ti and torch==1.10.0+cu102. Output of
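When two setups produce different numbers, it helps to post the exact software versions side by side. A minimal, stdlib-first sketch for dumping the relevant versions (the helper name `env_report` is hypothetical; the `torch`/`numpy` imports are optional and guarded so the script runs even without them):

```python
import platform


def env_report():
    """Collect the versions most likely to explain evaluation differences."""
    info = {
        "python": platform.python_version(),
        "platform": platform.platform(),
    }
    try:
        import torch  # optional: only reported if PyTorch is installed
        info["torch"] = torch.__version__
        info["cuda"] = torch.version.cuda
        info["gpu"] = (
            torch.cuda.get_device_name(0) if torch.cuda.is_available() else None
        )
    except ImportError:
        info["torch"] = None
    try:
        import numpy  # optional: only reported if NumPy is installed
        info["numpy"] = numpy.__version__
    except ImportError:
        info["numpy"] = None
    return info


if __name__ == "__main__":
    print(env_report())
```

Pasting this output in the issue makes it easy to spot mismatched PyTorch/CUDA builds, which are a common source of small numerical differences.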
Hi @skydes, thanks for your solution. I changed
As for the Cambridge dataset, I notice that your last block says "Pixloc_release". It means the
The evaluation is run with the
Dear Author,
Sorry to bother you.
I have downloaded the dataset and checkpoints with the command `python -m pixloc.download --select [dataset name] checkpoints`. Then I tried to evaluate the datasets with `python -m pixloc.run_[7Scenes|Cambridge|Aachen|CMU|RobotCar]` (without `--from pose`) and got the results. However, they differ from those in the paper. The results are as follows:

7Scenes

Pixloc_author means using the original `checkpoint_best.tar`, and Pixloc_reproduce uses the model I reproduced (trained for 18 epochs by myself). It is interesting to note that the results of Pixloc_author and Pixloc_reproduce both differ from the paper, yet are similar to each other.
Cambridge
These are also different from the original results.
Aachen
Note that the results are similar to issue #24.
CMU
Original:
Note that the results are also similar to issue #20.
My Torch version is 1.7.1 and NumPy is 1.21.2. It's worth noting that the model I trained myself and the author's model give similar results (see 7Scenes), but both differ from the results listed in the paper. I'm eager to know what went wrong; if you could help me, I'd appreciate it.
By the way, I will use the reproduced model to test Cambridge, Aachen, and CMU, to see whether it still matches the best model you provide while differing from the paper's results.
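The numbers being compared in this thread are pose-recall percentages: the fraction of query images localized within paired translation/rotation thresholds. When comparing a reproduction against the paper, it can help to recompute this metric from the raw per-image errors rather than eyeballing tables. A small sketch of that computation (the function name `pose_recall` and the toy error values are hypothetical, not from pixloc's codebase):

```python
def pose_recall(t_errs, r_errs, thresholds):
    """Percentage of queries whose translation error (meters) and rotation
    error (degrees) both fall within each (t_thr, r_thr) threshold pair."""
    n = len(t_errs)
    recalls = []
    for t_thr, r_thr in thresholds:
        # A query counts only if BOTH errors are within their thresholds.
        ok = sum(1 for t, r in zip(t_errs, r_errs) if t <= t_thr and r <= r_thr)
        recalls.append(100.0 * ok / n)
    return recalls


# Toy example with made-up per-image errors and Aachen-style thresholds.
t = [0.01, 0.30, 0.04, 1.2]   # translation errors in meters
r = [0.5, 1.0, 6.0, 12.0]     # rotation errors in degrees
print(pose_recall(t, r, [(0.25, 2.0), (0.5, 5.0), (5.0, 10.0)]))
# → [25.0, 50.0, 75.0]
```

Because the metric thresholds both error components jointly, a small shift in either distribution (e.g. from a different SfM model or PyTorch build) can move several images across a threshold and visibly change the reported percentages.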