Integrate distributed inference into torchchat cli #1327
Conversation
Co-authored-by: vmpuri <[email protected]>
Pin the numpy version to the range required by gguf: https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/pyproject.toml
Co-authored-by: vmpuri <[email protected]>
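For illustration, a minimal sketch of checking that the installed numpy falls inside a gguf-style version range; the bounds below are an assumption, not the verified constraint, so the linked pyproject.toml remains the authoritative source:

```python
# Sketch: verify the installed numpy satisfies a gguf-style version range.
# The bounds in GGUF_NUMPY_SPEC are assumed for illustration; see the
# linked gguf-py pyproject.toml for the real constraint.
from importlib.metadata import version

from packaging.specifiers import SpecifierSet

GGUF_NUMPY_SPEC = SpecifierSet(">=1.17,<2.0")  # assumed range

installed = version("numpy")
if installed not in GGUF_NUMPY_SPEC:
    raise RuntimeError(
        f"numpy {installed} does not satisfy the gguf constraint {GGUF_NUMPY_SPEC}"
    )
```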
CI: ✅ No failures as of commit 2d37d27 with merge base 7fe2c86. Artifacts and rendered test results: hud.pytorch.org/pr/pytorch/torchchat/1327
```diff
@@ -476,18 +490,19 @@ def _maybe_parallelize_model(

 def _load_model(builder_args: BuilderArgs) -> Model:
-    world_mesh, parallel_dims = _maybe_init_distributed(builder_args)
+    # world_mesh, parallel_dims = _maybe_init_distributed(builder_args)
```
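For context on the call being toggled here, a minimal sketch of a guarded distributed-init helper, assuming a `BuilderArgs.distributed` flag; only the function and argument names come from the diff above, and the body is illustrative rather than torchchat's actual implementation:

```python
# Illustrative sketch only: _maybe_init_distributed and BuilderArgs mirror
# the names in the diff above, but this body is an assumption, not
# torchchat's real code.
from dataclasses import dataclass
from typing import Optional, Tuple

import torch
import torch.distributed as dist


@dataclass
class BuilderArgs:
    distributed: bool = False  # set by a --distributed CLI flag


def _maybe_init_distributed(
    builder_args: BuilderArgs,
) -> Tuple[Optional[object], Optional[object]]:
    """Initialize the default process group only when distributed
    inference was requested; otherwise return (None, None) so the
    single-process path stays untouched."""
    if not builder_args.distributed:
        return None, None
    dist.init_process_group(
        backend="nccl" if torch.cuda.is_available() else "gloo"
    )
    # A real implementation would build a DeviceMesh and a parallelism
    # layout here; None placeholders keep the sketch self-contained.
    world_mesh, parallel_dims = None, None
    return world_mesh, parallel_dims
```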
This code is now effectively dead; we should remove it, but in a later PR.
Looks great! Thanks for adding this - great job adding modularity while keeping it light and concise.
This PR is the first step towards integrating distributed inference into the torchchat CLI.
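As a rough, hypothetical sketch of the kind of CLI wiring this involves (the argparse structure below is an assumption for illustration, not the PR's actual code):

```python
# Hypothetical sketch of threading a --distributed flag through an
# argparse-based CLI shaped like torchchat's; illustrative only.
import argparse

parser = argparse.ArgumentParser(prog="torchchat.py")
subparsers = parser.add_subparsers(dest="command")

generate = subparsers.add_parser("generate")
generate.add_argument("model", help="model name, e.g. llama3.1")
generate.add_argument("--distributed", action="store_true",
                      help="run inference across multiple processes/devices")
generate.add_argument("--max-new-tokens", type=int, default=200)
generate.add_argument("--prompt", type=str, default="")

args = parser.parse_args(
    ["generate", "llama3.1", "--distributed",
     "--max-new-tokens", "40", "--prompt", "What is Snow?"]
)
assert args.distributed and args.max_new_tokens == 40
```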
Currently only `torchchat.py generate` is supported.

Test:
```
python torchchat.py generate llama3.1 --distributed --max-new-tokens 40 --prompt "What is Snow?"
```
Output: