
Reproducing the results in the Paper #6

Open
thegialeo opened this issue Jul 28, 2018 · 8 comments

Comments


thegialeo commented Jul 28, 2018

Dear Spyros Gidaris, Praveer Singh and Nikos Komodakis,

I have read your paper "Unsupervised Representation Learning by Predicting Image Rotations" and was impressed by your work and the remarkable results achieved by pretraining a "RotNet" on the rotation task and later training classifiers on top of its feature maps.

I have downloaded your code from GitHub and tried to reproduce the values in Table 1 for a RotNet with 4 conv. blocks. However, running "run_cifar10_based_unsupervised_experiments.sh" and altering line 33 (and, for 'conv1', also line 31) in the config file "CIFAR10_MultLayerClassifier_on_RotNet_NIN4blocks_Conv2_feats.py", I obtain slightly lower values than in the paper, especially for the fourth block:

Rotation Task: 93.65 (Running your Code) / --- (Paper)
ConvBlock1: 84.65 (Running your Code) / 85.07 (Paper)
ConvBlock2: 88.89 (Running your Code) / 89.06 (Paper)
ConvBlock3: 85.88 (Running your Code) / 86.21 (Paper)
ConvBlock4: 54.04 (Running your Code) / 61.73 (Paper)

Are there further things I need to consider before running the code to achieve the results in the paper? I ran the experiment on a GeForce GTX 1070.
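For readers unfamiliar with the pretext task being reproduced here: each training image is rotated by 0°, 90°, 180° and 270°, and the network is trained to predict which of the four rotations was applied. A minimal, framework-free sketch (the function names below are my own, not from the repository):

```python
def rot90_cw(grid):
    """Rotate a 2D grid (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def make_rotation_batch(image):
    """Build the 4-way rotation pretext batch for one image:
    the k-th copy is rotated by k*90 degrees and labelled k (0..3)."""
    batch, labels, current = [], [], image
    for label in range(4):
        batch.append(current)
        labels.append(label)
        current = rot90_cw(current)
    return batch, labels

# toy 2x2 "image"
img = [[1, 2],
       [3, 4]]
batch, labels = make_rotation_batch(img)
# labels == [0, 1, 2, 3]; batch[k] is img rotated by k*90 degrees
```

In the actual repository the rotations are applied to CIFAR-10 tensors and the labels feed a 4-way softmax, but the labelling scheme is the same.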

@Pengyujiao

Excuse me, I am a beginner and I have a question: the author seems to have omitted the detailed code for the train_step and evaluation_step functions. Without that part of the code, can training run at all? Did you have to complete it yourself? Looking forward to your reply. Thank you @Xenovortex

@thegialeo
Author

thegialeo commented Jul 29, 2019

It was a year ago, so I don't remember the details. But to my knowledge, those functions are implemented and the code should work out of the box. Maybe I didn't fully understand your question/problem.

Also, I did my own implementation of the FeatureRotNet (based on the knowledge provided in the paper). If you need a reference, you can have a look at it, but I am pretty sure their original code works as well.

@Pengyujiao

Sorry, I didn't look at the code carefully before. I have studied it carefully these past two days and found that the code works successfully. Thanks for your reply. @Xenovortex

@thegialeo
Author

No problem.

@Chen-Song

Hi, @Xenovortex .
Have you reproduced the supervised NIN? I reproduced the supervised NIN with the settings from the paper and got 88.23 in my experiments, but the paper reports 92.80.
Thank you.

@thegialeo
Author

Yes, I did reproduce the supervised NIN. With my code (https://github.com/Xenovortex/Implementation-FeatureLearningRotNet), I achieved slightly lower accuracy than in the paper, but pretty close:

Total Accuracy: 91.39 %

Accuracy by classes:
plane: 91.60%
car: 95.90%
bird: 87.00%
cat: 83.20%
deer: 92.10%
dog: 85.50%
frog: 94.80%
horse: 93.90%
ship: 94.70%
truck: 95.20%

However, running the code of the authors, I got 92.81%.

@Chen-Song

Thank you for your reply.
The num_stages is 3 in the authors' code for the supervised NIN (i.e. nine convolutional layers), but num_stages is 4 in the unsupervised NIN architecture (i.e. twelve convolutional layers), so I am a bit confused. Comparing such unsupervised and supervised results may be a bit unfair.
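For readers following the layer counts above: in a NIN-style architecture each stage stacks three convolutional layers (one spatial conv followed by two 1x1 mlpconv layers), which is where the 9-vs-12 figures come from. A trivial sketch of that arithmetic (the function name below is my own):

```python
def nin_conv_layers(num_stages, convs_per_stage=3):
    """Total conv layers in a NIN built from num_stages stages,
    where each stage has one spatial conv plus two 1x1 mlpconvs."""
    return num_stages * convs_per_stage

supervised = nin_conv_layers(3)    # 9 conv layers (num_stages=3)
unsupervised = nin_conv_layers(4)  # 12 conv layers (num_stages=4)
```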

@thegialeo
Author

I mean, if you think it is an unfair comparison, there is really nothing stopping you from changing num_stages to 3 or 4 respectively and running the authors' code to see if it makes a difference.
