Why is the `training` argument not passed to `batch_normalization`? #36
Comments
bn1 = tf.layers.batch_normalization(inputs=inp, name='bn1', training=train_or_test)
I know that. My question is that the author's source code doesn't contain the code you wrote. Normally, during training,
@WangLianChen After adding these lines, does the code still run correctly? After I added them, I got a Segmentation fault.
It's probably a mistake. Without `is_training`, the layer uses the test set's true feature statistics, so the results look a bit better than the real performance.
Indeed. After I added training=True, the results actually dropped. Very confusing.
@lihairui1990 If the training argument is not set, its default value is False. In that case the BN layer in this repo always normalizes with mean=0 and variance=1, and these moving statistics are never updated. The same problem exists in the original DIN repo: https://github.com/zhougr1993/DeepInterestNetwork/blob/master/din/model.py#L49.
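To make the point above concrete, here is a minimal NumPy sketch (not the repo's code, and only an approximation of TF's behavior) of what the `training` flag controls: with `training=True` the layer normalizes with the batch statistics and updates the moving averages; with `training=False` it normalizes with the moving averages. If the moving statistics are never updated from their defaults (mean=0, variance=1), inference-mode BN is nearly an identity map, which is the bug described in this thread.

```python
import numpy as np

def batch_norm(x, moving_mean, moving_var, training, momentum=0.99, eps=1e-3):
    """Minimal batch-norm sketch (scale gamma=1 and shift beta=0 omitted)."""
    if training:
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        # Update the moving statistics in place. In TF 1.x this is what the
        # ops collected in tf.GraphKeys.UPDATE_OPS do, and why those update
        # ops must be run alongside the train op.
        moving_mean *= momentum
        moving_mean += (1 - momentum) * mean
        moving_var *= momentum
        moving_var += (1 - momentum) * var
    else:
        # Inference: rely on the accumulated moving statistics.
        mean, var = moving_mean, moving_var
    return (x - mean) / np.sqrt(var + eps)

# Never-updated defaults: mean=0, variance=1.
mm = np.zeros(2)
mv = np.ones(2)
x = np.random.RandomState(0).normal(5.0, 3.0, size=(64, 2))

# training=False with default stats barely changes the input at all.
y_infer = batch_norm(x, mm.copy(), mv.copy(), training=False)

# training=True actually standardizes the batch and updates mm/mv.
y_train = batch_norm(x, mm, mv, training=True)
```

Note also that in TF 1.x, passing `training=True` is not enough by itself: the moving-average update ops must be executed explicitly, e.g. via `tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS))` around the train op; otherwise the moving statistics stay at their defaults and inference breaks the same way.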
@Waydrow Have you reproduced the results in the DIEN paper or in the README.md of https://github.com/zhougr1993/DeepInterestNetwork? Using the original DIN code from that repo, I only got 0.8677 GAUC for DIN on the Amazon Electronics dataset, a bit below the reported 0.8698 (see zhougr1993/DeepInterestNetwork#92).
@liyangliu I failed too...lol |
tf.layers.batch_normalization(inputs=in_, name='bn1' + stag, reuse=tf.AUTO_REUSE)
Why is the `training` argument not passed here? Everything I have read about batch_normalization says it needs the `training` argument: True during training, False during testing.