Tensor parallelism
Nice work! Can tensor parallelism be implemented using both Torch and ONNX models?
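As background for the question, here is a minimal sketch of column-parallel tensor parallelism in plain PyTorch: split a layer's weight matrix into shards, compute partial outputs, and concatenate. The shapes, the two-way split, and the single-process setup are illustrative assumptions only, not code from llama.onnx or lmdeploy.

```python
# Conceptual illustration of tensor parallelism: a linear layer's weight
# matrix is split into column shards, each shard produces a partial output,
# and the partial outputs are concatenated. On real hardware each shard
# would live on a different GPU; here everything stays on CPU so the
# example runs anywhere.
import torch

torch.manual_seed(0)

hidden, out_features, batch = 64, 128, 4
x = torch.randn(batch, hidden)
full_weight = torch.randn(out_features, hidden)

# Reference: the un-sharded matmul a single device would compute.
reference = x @ full_weight.t()

# Column-parallel split: each "device" holds half of the output features.
shards = torch.chunk(full_weight, chunks=2, dim=0)
partial_outputs = [x @ w.t() for w in shards]

# Gather: concatenating the partial outputs reproduces the full result.
parallel = torch.cat(partial_outputs, dim=-1)

assert torch.allclose(reference, parallel, atol=1e-5)
print("sharded output matches the single-device output")
```

The same splitting idea applies whether the weights live in a Torch module or in an exported ONNX graph; what differs is where the split/gather communication is expressed.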
llama.onnx is primarily intended for understanding LLMs and converting them to run on NPUs. If you are looking for inference on Nvidia GPUs, we have released lmdeploy at https://github.com/InternLM/lmdeploy.
It supports:
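For multi-GPU inference on Nvidia hardware, a minimal sketch of enabling tensor parallelism through lmdeploy might look like the following. This assumes the current lmdeploy Python pipeline API rather than anything stated in this thread, and the model id and shard count are placeholders.

```python
# Hedged sketch: running lmdeploy's pipeline with tensor parallelism.
# Assumes the current lmdeploy Python API (pipeline + TurbomindEngineConfig);
# the model id and tp value are placeholders, not maintainer recommendations.
from lmdeploy import pipeline, TurbomindEngineConfig

pipe = pipeline(
    "internlm/internlm2_5-7b-chat",              # placeholder model id
    backend_config=TurbomindEngineConfig(tp=2),  # shard weights across 2 GPUs
)
print(pipe(["Briefly explain tensor parallelism."]))
```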