With the default config in the code, I ran FB15k_TransE.py to train a best model, and I found that the evaluation result of that model was quite different from what the paper reported. The default number of epochs in the code is 500, while the paper says the model was trained for at most 1000 epochs. Yet the current result is even noticeably better than the result in the paper. Did the paper use the code in the current repo? Below is my result.
MICRO:
-- left >> mean: 229.41149, median: 23.0, hits@10: 37.377%
-- right >> mean: 160.86706, median: 14.0, hits@10: 45.088%
-- global >> mean: 195.13927, median: 18.0, hits@10: 41.233%
MACRO:
-- left >> mean: 106.30351, median: 83.18991, hits@10: 55.557%
-- right >> mean: 84.51045, median: 63.63632, hits@10: 63.104%
-- global >> mean: 95.40698, median: 33.58689, hits@10: 59.331%
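For anyone comparing numbers: below is a minimal sketch of how rank-based metrics like the ones above (mean rank, median rank, hits@10) are typically computed for TransE. It is not the repo's code; the `ent_emb`/`rel_emb` arrays and the `triples` input are hypothetical names for the learned embeddings and the test set. MICRO averages over all test triples, while MACRO first averages per relation and then over relations.

```python
import numpy as np

def transe_ranks(ent_emb, rel_emb, triples):
    """For each test triple (h, r, t), rank the true tail against all
    entities by L2 distance ||h + r - e|| ("right" prediction), and
    likewise rank the true head ("left" prediction). Raw, unfiltered ranks.

    ent_emb: (n_entities, dim) array, rel_emb: (n_relations, dim) array,
    triples: iterable of (head_id, rel_id, tail_id) index triples.
    """
    left_ranks, right_ranks = [], []
    for h, r, t in triples:
        # Rank the true tail entity among all candidate tails.
        scores = np.linalg.norm(ent_emb[h] + rel_emb[r] - ent_emb, axis=1)
        right_ranks.append(int((scores < scores[t]).sum()) + 1)
        # Rank the true head entity among all candidate heads.
        scores = np.linalg.norm(ent_emb + rel_emb[r] - ent_emb[t], axis=1)
        left_ranks.append(int((scores < scores[h]).sum()) + 1)
    return np.array(left_ranks), np.array(right_ranks)

def report(ranks):
    # Mirrors one row of the output above: mean rank, median rank, hits@10 (%).
    return dict(mean=ranks.mean(),
                median=float(np.median(ranks)),
                hits_at_10=(ranks <= 10).mean() * 100)
```

Calling `report(left_ranks)` / `report(right_ranks)` over the whole test set would correspond to the MICRO "left"/"right" rows; grouping the ranks by relation, computing `report` per relation, and averaging those would correspond to the MACRO rows.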
I got the same result. Two questions: the parameters in the code differ from those in the paper; which set is optimal? And even the result in the *.out file on the official GitHub page differs from the paper's result; isn't that strange?