
[CINN] Add new group scheduler #56444

Merged: 54 commits into PaddlePaddle:develop on Nov 2, 2023

Conversation
@BiynXu (Contributor) commented Aug 18, 2023

PR types: New features

PR changes: Others

Description

card-74457

Add a GroupScheduler to schedule fusion groups. Its responsibilities are loop alignment, automatic inlining, automatic loop fusion, and optimizing the storage locations of intermediate variables.
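Since this description is the only technical summary in the thread, a minimal C++ sketch may help make the pipeline concrete. All names below (GroupScheduler, ScheduleBlockNode, the four pass methods) are hypothetical illustrations of the listed responsibilities, not the interface actually merged into paddle/cinn.

```cpp
// Minimal sketch of the scheduling pipeline described above. Every name and
// signature here is a hypothetical illustration, not the merged interface.
#include <memory>
#include <vector>

namespace sketch {

// Hypothetical: one node per schedule block (tensor computation) in the group.
struct ScheduleBlockNode {};

class GroupScheduler {
 public:
  // Runs the passes over a fusion group, in the order the PR description
  // lists the scheduler's responsibilities.
  void Schedule() {
    DoLoopAlignment();        // align loop nests to a common iteration space
    DoComputeInline();        // automatically inline cheap producers
    DoLoopFusion();           // fuse the aligned loops across blocks
    OptimizeBufferStorage();  // choose storage for intermediate variables
  }

 private:
  // Align every block's loop nest so that later fusion is legal.
  void DoLoopAlignment() { /* ... */ }
  // Inline producer blocks into their consumers where profitable and legal.
  void DoComputeInline() { /* ... */ }
  // Merge the now-compatible loops of the remaining blocks.
  void DoLoopFusion() { /* ... */ }
  // Relocate/shrink buffers of intermediates (e.g. to local or shared memory).
  void OptimizeBufferStorage() { /* ... */ }

  // Schedule blocks of the fusion group being scheduled.
  std::vector<std::unique_ptr<ScheduleBlockNode>> nodes_;
};

}  // namespace sketch
```

The pass order in Schedule() simply follows the order in which the description lists the responsibilities.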

@paddle-bot commented Aug 18, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-ci-bot commented Aug 26, 2023

Sorry to inform you that the CIs for commit 1532568 passed more than 7 days ago. To prevent PR conflicts, please re-run all CIs manually.

@paddle-ci-bot commented Sep 8, 2023

Sorry to inform you that the CIs for commit 0763775 passed more than 7 days ago. To prevent PR conflicts, please re-run all CIs manually.

@paddle-ci-bot commented Oct 18, 2023

Sorry to inform you that the CIs for commit 07f5674 passed more than 7 days ago. To prevent PR conflicts, please re-run all CIs manually.

@zhhsplendid (Member) approved these changes Nov 1, 2023:

LGTM

@XiaoguangHu01 (Contributor) left a comment:

LGTM

@ZzSean ZzSean merged commit ca14e38 into PaddlePaddle:develop Nov 2, 2023
zeroRains pushed a commit to zeroRains/Paddle that referenced this pull request Nov 8, 2023
* [CINN] Add new group scheduler

* [Fix] Fix priority of bind loops and output node

* [Fix] Set is_reduce_axis for Argmin and Argmax

* [Fix] Add producer consumer relation to Argmax

* [Fix] Add NodePriority and skip ExternCall block

* [Fix] Add prohibit schedule block

* [Fix] schedule block graph test

* [Fix] Skip external calls while auto inline

* [Fix] Fix relationship of block with Call nodes

* [CINN] Use common reduce while loop fusion

* [Fix] ScheduleBlockGraph unittest

* [Fix] reduction unittests

* [Fix] Skip group schedule of NonFusible nodes

* [Fix] Incomplete AutoSimplify

* [Fix] Adapt to new GraphCompilers

* [Fix] schedule block graph unittest

* [Fix] loop reorder to match master

* [Fix] elementwise loop alignment

* [Fix] cuda axis coeff and range

* [Fix] Add conditions to schedules related to cuda

* [Fix] fix conflict

* fix conflict

* [Fix] fix conflict

* Integrate ReductionFactoring

* [CINN] Upgrade ReductionReduction rule

* resolve conflict

* fix tensor in wb-block

* add reduce type in FactorizeReduction

* [CINN] Add cross thread reduction replacer

* Integrate cross thread reduction

* add anonymous namespace

* fix reduction factoring unittest

* Prohibit group schedule on single op

* Revert "Prohibit group schedule on single op"

This reverts commit 13ddff9.

* fix reduction factoring unittest

* fix reduction factoring unittest

* open group scheduler flag

* fix node priority

* fix cross thread reduction on cpu

* fix reduction_factoring with pre Fuse

* Revert "open group scheduler flag"

This reverts commit 192ccc1.

* Revert "fix reduction_factoring with pre Fuse"

This reverts commit 31889eb.

* simplify log of range

* add a TODO

* fix x86 reduction bug
danleifeng pushed a commit to danleifeng/Paddle that referenced this pull request Nov 14, 2023, with the same commit list as above.