
In Fluid, why is another inference program cloned at prediction time? #11022

Closed
yeyupiaoling opened this issue May 30, 2018 · 6 comments

@yeyupiaoling
Contributor

In the following example, I see that at prediction time another inference program is cloned. Why is this done?

```python
inference_transpiler_program = inference_program.clone()
t = fluid.InferenceTranspiler()
t.transpile(inference_transpiler_program, place)
```

Before this point there is already an inference program, loaded from the parameter files, as follows:

```python
[inference_program, feed_target_names,
 fetch_targets] = fluid.io.load_inference_model(save_dirname, exe)
```

I also don't know what this line is doing here; it looks like a comparison:

```python
np.testing.assert_almost_equal(
    results[0][i], transpiler_results[0][i], decimal=5)
```

And when running inference, why does it save the model again?

```python
fluid.io.save_inference_model(save_dirname, feed_target_names,
                              fetch_targets, exe,
                              inference_transpiler_program)
```

@kuke kuke added the User (label for user questions) label May 30, 2018
@kuke kuke self-assigned this May 30, 2018
@kuke
Contributor

kuke commented May 30, 2018

```python
# Use inference_transpiler to speedup
```

Note the comment on this line: the main purpose of inference_transpiler_program is to speed up computation. The comparison afterwards verifies the correctness of inference_transpiler_program, and it is also inference_transpiler_program that gets saved at the end.
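As background on that correctness check: `np.testing.assert_almost_equal(..., decimal=5)` passes when the element-wise difference stays below roughly `1.5e-5`. A minimal, self-contained illustration (plain NumPy, not tied to Fluid):

```python
import numpy as np

a = np.array([1.000001, 2.0])
b = np.array([1.000002, 2.0])

# A difference of 1e-6 is within the decimal=5 tolerance: this passes.
np.testing.assert_almost_equal(a, b, decimal=5)

# A difference of ~1e-3 exceeds the tolerance: this raises AssertionError.
failed = False
try:
    np.testing.assert_almost_equal(a, np.array([1.001, 2.0]), decimal=5)
except AssertionError:
    failed = True
print("out-of-tolerance difference detected:", failed)
```

So the test in the example asserts that the transpiled program's outputs agree with the original program's outputs to five decimal places.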

@yeyupiaoling
Contributor Author

@kuke Both programs are loaded from the same parameter files; could the accuracy really differ? And how much of an improvement does the finally saved parameter file have over the previous one?

@luotao1
Contributor

luotao1 commented May 31, 2018

@yeyupiaoling Because both programs live in the same scope. Without the clone, some variables of the original program in that scope would be modified directly, which would corrupt its results. See the comments in #9792 for more details.
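The hazard is the usual shared-mutable-state one. As a toy stand-in (plain Python, not the real Program/Scope classes): a transpiler-style pass rewrites the program in place, so without a clone the "original" is silently changed too.

```python
import copy

# Toy stand-in for a Program: a dict describing its ops.
program = {"ops": ["conv2d", "batch_norm", "relu"]}

def transpile(prog):
    # Mimic a fusion pass that mutates the program in place,
    # removing the batch_norm op.
    prog["ops"] = [op for op in prog["ops"] if op != "batch_norm"]

# Without clone: "alias" and "program" are the same object,
# so transpiling the alias also changes the original.
alias = program
transpile(alias)
print(program["ops"])  # the original lost its batch_norm op too

# With clone: the original stays intact and only the copy is rewritten.
program = {"ops": ["conv2d", "batch_norm", "relu"]}
clone = copy.deepcopy(program)
transpile(clone)
print(program["ops"])  # unchanged
print(clone["ops"])    # transpiled
```

In Fluid the variables additionally live in a shared scope, which is why the comparison against the un-cloned program can go wrong without `clone()`.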

> How much of an improvement does the finally saved parameter file have over the previous one?

Do you mean in terms of speed? The elapsed time on ResNet (test_inference_image_classification) went from 11.2s to 9.3s, about a 10% speedup on inference.

@yeyupiaoling
Contributor Author

@luotao1

I'm not sure this clone is still necessary; I deliberately commented it out, and running the example on its own showed no problems. That said, an extra clone here does no harm either.

I haven't used the cloned Program, but I haven't seen errors when using the original program either.

> Do you mean in terms of speed? The elapsed time on ResNet (test_inference_image_classification) went from 11.2s to 9.3s, about a 10% speedup on inference.

Why does this lead to faster inference? Thanks.

@luotao1
Contributor

luotao1 commented May 31, 2018

Because batch norm's parameters (weight and bias) are fused into the conv's parameters (weight and bias), the batch_norm op is removed at inference time (if the preceding conv had no bias, some bias computation is added), so inference is naturally faster.
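The fusion works because, at inference time, batch norm is just a fixed per-channel affine transform, so it can be folded into the preceding conv's weights: `W' = W * gamma/sqrt(var+eps)` and `b' = (b - mean) * gamma/sqrt(var+eps) + beta`. A small NumPy sketch using a 1x1-conv-as-matmul stand-in (names are illustrative, not Fluid internals):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1x1 "conv" over C_in channels producing C_out channels,
# applied to a batch of N feature vectors.
N, C_in, C_out = 4, 3, 2
x = rng.standard_normal((N, C_in))
W = rng.standard_normal((C_out, C_in))
b = rng.standard_normal(C_out)

# Frozen batch-norm parameters (per output channel) for inference.
gamma = rng.standard_normal(C_out)
beta = rng.standard_normal(C_out)
mean = rng.standard_normal(C_out)
var = rng.random(C_out) + 0.1
eps = 1e-5

# Reference: conv followed by batch norm.
y_ref = gamma * ((x @ W.T + b) - mean) / np.sqrt(var + eps) + beta

# Fused: fold the BN affine transform into the conv's weight and bias.
scale = gamma / np.sqrt(var + eps)   # per-output-channel scale
W_fused = W * scale[:, None]
b_fused = (b - mean) * scale + beta

# One matmul + bias now replaces conv + batch_norm.
y_fused = x @ W_fused.T + b_fused

np.testing.assert_almost_equal(y_ref, y_fused, decimal=5)
```

The fused network computes identical outputs while skipping the batch_norm op entirely, which is where the inference speedup comes from.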

@yeyupiaoling
Contributor Author

OK, understood now, thanks @luotao1
