
static graph autogen code support for matmul op #54338

Merged: 9 commits into PaddlePaddle:develop, Jun 20, 2023
Conversation

@GreatV (Contributor) commented Jun 5, 2023

PR types

Others

PR changes

Others

Description

static graph autogen code support for matmul op

@paddle-bot (bot) commented Jun 5, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added contributor External developers status: proposed labels Jun 5, 2023
@GreatV GreatV marked this pull request as draft June 6, 2023 16:46
@luotao1 luotao1 added the HappyOpenSource 快乐开源活动issue与PR label Jun 7, 2023
Review threads:
• test/cpp/jit/CMakeLists.txt (resolved)
• paddle/phi/api/yaml/op_compat.yaml (outdated, resolved)
• paddle/phi/api/yaml/static_backward.yaml (resolved)
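For context on why op_compat.yaml appears in the review threads: it tells the static-graph code generator how new-style operator names map onto the legacy static-graph names. The exact entry is in the PR diff; purely as a hedged illustration (the fields below follow the general op_compat.yaml style and are not copied from this PR), a matmul mapping entry looks roughly like:

```yaml
# Hypothetical sketch of an op_compat.yaml entry, NOT this PR's actual diff.
# It maps new-style names (x, transpose_x, ...) onto the legacy static-graph
# names (X, trans_x, ...) so autogenerated makers address the right variables.
- op : matmul
  backward : matmul_grad, matmul_double_grad (matmul_grad_grad)
  inputs : {x : X, y : Y}
  attrs : {transpose_x : trans_x, transpose_y : trans_y}
  outputs : {out : Out}
```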
@heavyrain-lzy heavyrain-lzy marked this pull request as ready for review June 8, 2023 05:00
@GreatV (Contributor, author) commented Jun 9, 2023

If I replace the autogenerated composite-operator section with the original hand-written version, the unit test passes.

  • Autogenerated:
class MatmulV2GradCompositeGradOpMaker : public prim::CompositeGradOpMakerBase {
 public:
  using prim::CompositeGradOpMakerBase::CompositeGradOpMakerBase;
  void Apply() override {
    //get inputs
    auto x = this->GetSingleForwardInput("X");
    auto y = this->GetSingleForwardInput("Y");
    auto grad_out = this->GetSingleForwardInput("grad_out");
    auto grad_x_grad = this->GetOptionalSingleOutputGrad("grad_x");
    auto grad_y_grad = this->GetOptionalSingleOutputGrad("grad_y");


    //get attr
    const bool transpose_x = this->Attr<bool>("trans_x");
    const bool transpose_y = this->Attr<bool>("trans_y");

    //get output
    auto x_grad_t = this->GetSingleInputGrad("X");
    auto y_grad_t = this->GetSingleInputGrad("Y");
    auto grad_out_grad_t = this->GetSingleInputGrad("grad_out");

    //get output ptr
    auto x_grad = this->GetOutputPtr(&x_grad_t);
    auto y_grad = this->GetOutputPtr(&y_grad_t);
    auto grad_out_grad = this->GetOutputPtr(&grad_out_grad_t);

    //get output original name
    auto x_grad_name = this->GetOutputName(x_grad_t);
    auto y_grad_name = this->GetOutputName(y_grad_t);
    auto grad_out_grad_name = this->GetOutputName(grad_out_grad_t);

    //call composite backward func
    VLOG(6) << "Running matmul_double_grad composite func";
    prim::matmul_double_grad<prim::DescTensor>(x, y, grad_out, grad_x_grad, grad_y_grad, transpose_x, transpose_y, x_grad, y_grad, grad_out_grad);
    //recover output name
    this->RecoverOutputName(x_grad_t, x_grad_name);
    this->RecoverOutputName(y_grad_t, y_grad_name);
    this->RecoverOutputName(grad_out_grad_t, grad_out_grad_name);

  }
};
  • Original:
class MatmulV2GradCompositeGradOpMaker : public prim::CompositeGradOpMakerBase {
 public:
  using prim::CompositeGradOpMakerBase::CompositeGradOpMakerBase;
  void Apply() override {
    // get inputs
    paddle::Tensor x = this->GetSingleForwardInput("X");
    paddle::Tensor y = this->GetSingleForwardInput("Y");
    paddle::Tensor dout =
        this->GetSingleForwardInput(framework::GradVarName("Out"));
    paddle::optional<paddle::Tensor> ddx =
        this->GetOptionalSingleOutputGrad(framework::GradVarName("X"));
    paddle::optional<paddle::Tensor> ddy =
        this->GetOptionalSingleOutputGrad(framework::GradVarName("Y"));

    // get attr
    bool trans_x = this->Attr<bool>("trans_x");
    bool trans_y = this->Attr<bool>("trans_y");

    // get output
    paddle::Tensor x_grad_t = this->GetSingleInputGrad("X");
    paddle::Tensor y_grad_t = this->GetSingleInputGrad("Y");
    paddle::Tensor grad_out_grad_t =
        this->GetSingleInputGrad(framework::GradVarName("Out"));

    // get output ptr
    paddle::Tensor* x_grad = this->GetOutputPtr(&x_grad_t);
    paddle::Tensor* y_grad = this->GetOutputPtr(&y_grad_t);
    paddle::Tensor* grad_out_grad = this->GetOutputPtr(&grad_out_grad_t);
    // get output original name
    std::string x_grad_name = this->GetOutputName(x_grad_t);
    std::string y_grad_name = this->GetOutputName(y_grad_t);
    std::string grad_out_grad_name = this->GetOutputName(grad_out_grad_t);
    VLOG(3) << "Running matmul_double_grad composite func";
    // call composite backward func
    prim::matmul_double_grad<prim::DescTensor>(
        x, y, dout, ddx, ddy, trans_x, trans_y, x_grad, y_grad, grad_out_grad);
    // recover output name
    this->RecoverOutputName(x_grad_t, x_grad_name);
    this->RecoverOutputName(y_grad_t, y_grad_name);
    this->RecoverOutputName(grad_out_grad_t, grad_out_grad_name);
  }
};

@GreatV (Contributor, author) commented Jun 9, 2023

I'm not sure where the problem is.

@heavyrain-lzy (Contributor) commented:

> I'm not sure where the problem is.

The higher-order backward issue with composite operators has been reported to the people responsible; it is expected to be resolved next week.

@heavyrain-lzy (Contributor) commented:

@GreatV You can refer to PR #54666 to fix the composite-operator issue, then debug again.

@GreatV (Contributor, author) commented Jun 15, 2023

OK.

@GreatV (Contributor, author) commented Jun 16, 2023

@heavyrain-lzy Please review again.

@GreatV GreatV requested a review from heavyrain-lzy June 16, 2023 05:08
@heavyrain-lzy (Contributor) commented:

It looks fine so far; please rerun the coverage CI.

@GreatV (Contributor, author) commented Jun 18, 2023

@heavyrain-lzy I can't rerun coverage; I don't have permission.

@GreatV (Contributor, author) commented Jun 18, 2023

I couldn't find a unit test for fused_matmul_op.

@GreatV GreatV requested a review from heavyrain-lzy June 19, 2023 06:50
@GreatV (Contributor, author) commented Jun 20, 2023

@heavyrain-lzy Please review this one again as well.

@heavyrain-lzy (Contributor) commented:

test_trt_convert_gelu has now been disabled, so coverage needs to be rerun.

@GreatV (Contributor, author) commented Jun 20, 2023

Coverage has finished; the main problem is insufficient coverage of fused_matmul_op.cc.

@heavyrain-lzy (Contributor) left a review comment:

LGTM

@luotao1 luotao1 merged commit ad80fbf into PaddlePaddle:develop Jun 20, 2023
@GreatV GreatV deleted the autogen_code_support_for_matmul branch July 2, 2023 15:06
Labels: contributor (External developers), HappyOpenSource (快乐开源活动issue与PR)
Projects: none yet
Participants: 3