Thanks for your nice paper! Using a semantic loss and a hypergraph for metric learning is a neat idea.
When I tried to reproduce the performance on the three datasets, I ran into some issues. For Cars196 the performance is fine, but for CUB-200-2011 and SOP there is a gap between the reproduced and reported results.
Below are the results I reproduced.
For Cars196 it works fine; the reproduced performance is consistent with the paper:
| Cars196 (ResNet50) | R@1 | R@2 | R@4 |
| --- | --- | --- | --- |
| HIST (paper) | 89.6 $\pm$ 0.2 | 93.9 $\pm$ 0.1 | 96.4 $\pm$ 0.1 |
| HIST (reproduced) | 89.8 | 94.2 | 96.5 |
But for CUB-200-2011, the gaps between my reproduced results and the reported means on R@1, R@2, and R@4 are -1.4, -1.4, and -0.7 points:
| CUB-200-2011 (ResNet50) | R@1 | R@2 | R@4 |
| --- | --- | --- | --- |
| HIST (paper) | 71.4 $\pm$ 0.2 | 81.1 $\pm$ 0.3 | 88.1 $\pm$ 0.2 |
| HIST (reproduced) | 70.0 | 79.7 | 87.4 |
And for SOP, the gaps between my reproduced results and the reported means on R@1, R@10, and R@100 are -0.7, -0.2, and -0.3 points:
| SOP (ResNet50) | R@1 | R@10 | R@100 |
| --- | --- | --- | --- |
| HIST (paper) | 81.4 $\pm$ 0.2 | 92.0 $\pm$ 0.2 | 96.7 $\pm$ 0.1 |
| HIST (reproduced) | 80.7 | 91.8 | 96.4 |
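In case the gap comes from a difference in evaluation rather than training, here is the Recall@K definition I used when checking my numbers. This is a plain NumPy sketch of the standard metric (fraction of queries whose K nearest neighbors contain at least one same-class item), not your repo's evaluation code, so please correct me if yours differs:

```python
import numpy as np

def recall_at_k(embeddings, labels, ks=(1, 2, 4)):
    """Recall@K: fraction of queries whose K nearest neighbors
    (excluding the query itself) contain at least one same-class item."""
    # Pairwise Euclidean distances between all embeddings.
    dists = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude each query from its own neighbors
    nn_idx = np.argsort(dists, axis=1)  # neighbors sorted by increasing distance
    recalls = {}
    for k in ks:
        # A "hit" means any of the top-k neighbors shares the query's label.
        hits = (labels[nn_idx[:, :k]] == labels[:, None]).any(axis=1)
        recalls[k] = hits.mean()
    return recalls
```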
Hence my question: could it be that the hyper-parameters or the fixed random seed work for Cars196 but are not suitable for the other two datasets? Could you give some guidance on how to reproduce the results as reported in the paper?
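For reference, this is how I pin the seeds in my runs. It is a generic sketch (the function name and default seed are mine, not from your repo); in the full PyTorch pipeline I also call `torch.manual_seed(seed)`, `torch.cuda.manual_seed_all(seed)`, and set `torch.backends.cudnn.deterministic = True`, which may matter if your reported numbers depend on a particular seed:

```python
import os
import random
import numpy as np

def set_seed(seed: int = 0) -> None:
    """Pin the RNGs that drive data shuffling and augmentation,
    so repeated runs draw the same random streams."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)

set_seed(42)
a = np.random.rand(3)
set_seed(42)
b = np.random.rand(3)
# Re-pinning the seed makes the two draws identical.
assert (a == b).all()
```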
Thanks a lot in advance!