Reproducing the results in the Paper #6
Excuse me, I am a beginner and I have a question: the authors seem to have omitted the code for the functions train_step and evaluation_step. Without that part of the code, does training not work? Did you complete it yourself? Looking forward to your reply. Thank you @Xenovortex
It was a year ago, so I don't remember the details. But to my knowledge, those functions are implemented and it should work out of the box; maybe I didn't understand your question/problem correctly. I also did my own implementation of the FeatureRotNet (based on the knowledge provided in the paper), so if you need a reference you can have a look at it, but I am pretty sure the authors' original code works as well.
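As a rough sketch (hypothetical helper names, not the authors' actual code): the core of a rotation-prediction train_step is just building a batch of the four rotated copies of each image together with the rotation labels, which are then fed to the network as a 4-way classification problem. Something like:

```python
import numpy as np

def make_rotation_batch(images):
    """Given a batch of images (N, H, W, C), return the 4 rotated copies
    stacked together with their rotation labels 0..3 (0, 90, 180, 270
    degrees). Hypothetical helper for illustration only."""
    rotated = []
    labels = []
    for k in range(4):  # rotate by k * 90 degrees
        # np.rot90 with axes=(1, 2) rotates every image in the batch
        rotated.append(np.rot90(images, k=k, axes=(1, 2)))
        labels.append(np.full(images.shape[0], k))
    return np.concatenate(rotated), np.concatenate(labels)

batch = np.random.rand(8, 32, 32, 3)  # e.g. 8 CIFAR-10 images
x, y = make_rotation_batch(batch)
print(x.shape, y.shape)  # (32, 32, 32, 3) (32,)
```

The train_step then simply runs a forward/backward pass of the classifier on `(x, y)` with a standard cross-entropy loss; evaluation_step does the forward pass and measures rotation accuracy.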
Sorry, I didn't look at the code carefully before. I have studied it carefully these past two days and found that the code works successfully. Thanks for your reply. @Xenovortex
no problem
Hi, @Xenovortex.
Yes, I did reproduce the supervised NIN. With my code (https://github.com/Xenovortex/Implementation-FeatureLearningRotNet), I achieved slightly lower accuracy than in the paper, but pretty close:

Total Accuracy: 91.39 %
Accuracy by classes:

However, running the authors' code, I got 92.81 %.
Thank you for your reply. |
I mean, if you think it is an unfair comparison, then there is really nothing stopping you from changing num_stages to 3 or 4 respectively and running the authors' code to see if it makes a difference.
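For reference, the relevant option in the network config would look roughly like this (the key names below are my recollection and may not match the actual config file exactly, so check the repo's config directory):

```python
# Hypothetical sketch of the network options in a RotNet config file;
# the real key names and structure in the authors' repo may differ.
net_opt = {
    'num_classes': 4,   # 4 rotation classes: 0, 90, 180, 270 degrees
    'num_stages': 3,    # change from 4 to 3 to match a 3-block NIN
}
print(net_opt['num_stages'])
```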
Dear Spyros Gidaris, Praveer Singh and Nikos Komodakis,
I have read your paper "Unsupervised Representation Learning by Predicting Image Rotations" and was impressed by your work and the astonishing results achieved by pretraining a "RotNet" on the rotation task and later training classifiers on top of the feature maps.
I have downloaded your code from GitHub and tried to reproduce the values in Table 1 for a RotNet with 4 conv. blocks. However, running "run_cifar10_based_unsupervised_experiments.sh" and altering line 33 (and for 'conv1' also line 31) in the config file "CIFAR10_MultLayerClassifier_on_RotNet_NIN4blocks_Conv2_feats.py", I obtained slightly lower values than in the paper, especially for the fourth block:
Rotation Task: 93.65 (Running your Code) / --- (Paper)
ConvBlock1: 84.65 (Running your Code) / 85.07 (Paper)
ConvBlock2: 88.89 (Running your Code) / 89.06 (Paper)
ConvBlock3: 85.88 (Running your Code) / 86.21 (Paper)
ConvBlock4: 54.04 (Running your Code) / 61.73 (Paper)
Are there further things I need to consider before running the code to achieve the results in the paper? I have used a GeForce GTX 1070 to run the experiment.