
questions about training process of batch normalize layer #5

Open
appleleaves opened this issue Dec 6, 2017 · 1 comment

@appleleaves

There are a couple of points I would like to confirm:

1. While the weight and bias are fixed at 1 and 0 respectively, the mean and variance should still be learned during the training process. Right?

2. The authors state in the paper that the BN layer before the DIF is not learned. Does that mean the mean and variance are calculated on the fly for each different input?

@yuruntian (Owner)

@appleleaves For all the BN layers, the affine parameters are fixed at 1 and 0, but the mean and variance are learned.
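
For illustration, here is a minimal PyTorch-style sketch (an assumption for clarity; the repo's actual framework and layer setup may differ) of a BN layer whose affine parameters are effectively fixed at 1 and 0 while the running mean and variance are still accumulated from the data during training:

```python
import torch
import torch.nn as nn

# Sketch only (PyTorch assumed, not necessarily this repo's framework).
# affine=False removes the learnable weight (gamma) and bias (beta),
# which is equivalent to fixing them at 1 and 0.
# track_running_stats=True (the default) still accumulates the running
# mean and variance during training, so those statistics are "learned"
# from the data even though no affine parameters are trained.
bn = nn.BatchNorm2d(num_features=32, affine=False, track_running_stats=True)

bn.train()
x = torch.randn(8, 32, 16, 16)
y = bn(x)  # training mode: normalizes with the current batch statistics
           # and updates running_mean / running_var via a moving average

bn.eval()
y_eval = bn(x)  # eval mode: normalizes with the accumulated running
                # mean and variance instead of the batch statistics

print(bn.running_mean.shape, bn.running_var.shape)  # torch.Size([32]) each
```

Setting `track_running_stats=False` instead would make the layer normalize every input with statistics computed from that batch alone, which corresponds to the behavior asked about in question 2.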
