
Support min/max carry over for eager mode from_float method #2046

Closed
wants to merge 1 commit

Conversation

kwanghoon-meta
Contributor

Summary:
After QAT has completed, or when a pre-tuned weight observer is provided via a tunable PTQ algorithm, the observer's min/max statistics should not be overwritten by re-observing the given weight; for static QAT this must never happen.

Dynamic QAT likewise does not require re-running the weight observer, by design.

This PR fixes that.

Differential Revision: D57747749
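
To illustrate the intent, here is a minimal sketch (not the actual patch; the helper `quantize_weight_from_observer` is hypothetical) of carrying over pre-calibrated min/max during eager-mode `from_float` conversion instead of re-running the weight observer:

```
import torch
from torch.ao.quantization.observer import MinMaxObserver

def quantize_weight_from_observer(weight, weight_observer):
    # Hypothetical helper: only run the observer when it has no usable
    # statistics yet. A fresh MinMaxObserver starts with min_val=+inf and
    # max_val=-inf, so finite values mean QAT (or a tunable PTQ algorithm)
    # already calibrated it, and its min/max should be carried over as-is.
    has_stats = (
        weight_observer.min_val.numel() > 0
        and torch.isfinite(weight_observer.min_val).all()
        and torch.isfinite(weight_observer.max_val).all()
    )
    if not has_stats:
        weight_observer(weight)  # observe only when nothing was recorded
    scale, zero_point = weight_observer.calculate_qparams()
    return torch.quantize_per_tensor(
        weight, float(scale), int(zero_point), weight_observer.dtype
    )

# Usage: an observer tuned elsewhere keeps its range during conversion.
obs = MinMaxObserver(dtype=torch.qint8)
obs(torch.randn(8, 4))  # pretend this calibration happened during QAT
qweight = quantize_weight_from_observer(torch.randn(8, 4), obs)
```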

@facebook-github-bot added the CLA Signed label on May 28, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D57747749

kwanghoon-meta added a commit to kwanghoon-meta/torchrec that referenced this pull request May 28, 2024
…2046)

Summary:
X-link: pytorch/pytorch#127309

After QAT has completed, or when a pre-tuned weight observer is provided via a tunable PTQ algorithm, the observer's min/max statistics should not be overwritten by re-observing the given weight; for static QAT this must never happen.

Dynamic QAT likewise does not require re-running the weight observer, by design.

This is the fix.

Differential Revision: D57747749

kwanghoon-meta added a commit to kwanghoon-meta/pytorch that referenced this pull request May 29, 2024
…127309)

Summary:
X-link: pytorch/torchrec#2046

After QAT has completed, or when a pre-tuned weight observer is provided via a tunable PTQ algorithm, the observer's min/max statistics should not be overwritten by re-observing the given weight; for static QAT this must never happen.

Dynamic QAT likewise does not require re-running the weight observer, by design.

This is the fix.

Test Plan:
Signals
```
buck2 run caffe2/test:test_mobile_optimizer
```
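(Outside Meta's internal buck2 setup, the same suite can presumably be run from an OSS PyTorch checkout with `python test/test_mobile_optimizer.py`.)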

Reviewed By: jerryzh168

Differential Revision: D57747749

PaulZhang12 pushed a commit that referenced this pull request Jun 5, 2024
Summary:
X-link: pytorch/pytorch#127309

Pull Request resolved: #2046

After QAT has completed, or when a pre-tuned weight observer is provided via a tunable PTQ algorithm, the observer's min/max statistics should not be overwritten by re-observing the given weight; for static QAT this must never happen.

Dynamic QAT likewise does not require re-running the weight observer, by design.

This is the fix.

Reviewed By: jerryzh168

Differential Revision: D57747749

fbshipit-source-id: 231937f64c6dc53cc79b35bb94534fdaa84e7da1