
Support passing in compute kernel in table_wise sharding helper #2087

Closed · sarckk wants to merge 1 commit from the export-D58254737 branch

Conversation

@sarckk (Member) commented on Jun 7, 2024

Summary: The table_wise sharding helper currently assumes the compute kernel is QUANT whenever a device is passed in, which isn't very flexible. This change makes the helper take the compute kernel explicitly.

Reviewed By: gnahzg

Differential Revision: D58254737
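For illustration, a minimal sketch of how the helper might be called once the kernel is explicit. The exact keyword names (`device`, `compute_kernel`) and the use of `EmbeddingComputeKernel.QUANT.value` are assumptions based on the summary above, not taken from this PR's diff.

```python
# Sketch only: assumes torchrec's table_wise helper accepts an explicit
# compute_kernel argument after this change (exact signature not confirmed here).
from torchrec.distributed.embedding_types import EmbeddingComputeKernel
from torchrec.distributed.sharding_plan import table_wise

# Before this change: passing device="cuda:0" implicitly selected the QUANT kernel.
# After: the caller states the kernel explicitly alongside the device.
sharding_generator = table_wise(
    rank=0,
    device="cuda:0",
    compute_kernel=EmbeddingComputeKernel.QUANT.value,  # explicit, no longer inferred
)

# The resulting ParameterShardingGenerator is typically passed to
# construct_module_sharding_plan under the table's name when building a plan.
```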

@facebook-github-bot added the CLA Signed label on Jun 7, 2024
@facebook-github-bot commented: This pull request was exported from Phabricator. Differential Revision: D58254737


@sarckk force-pushed the export-D58254737 branch from c132754 to 1383618 on June 8, 2024 at 00:39
sarckk added a commit to sarckk/torchrec that referenced this pull request on Jun 8, 2024: Support passing in compute kernel in table_wise sharding helper (pytorch#2087)
@facebook-github-bot commented: This pull request was exported from Phabricator. Differential Revision: D58254737

sarckk added a commit to sarckk/torchrec that referenced this pull request on Jun 11, 2024: Support passing in compute kernel in table_wise sharding helper (pytorch#2087)
@sarckk force-pushed the export-D58254737 branch from 1383618 to 30dcbb2 on June 11, 2024 at 02:05
@facebook-github-bot commented: This pull request was exported from Phabricator. Differential Revision: D58254737

Labels: CLA Signed, fb-exported
2 participants