NVIDIA Triton Inference Server documentation #3291
Comments

/assign

Hi,

/close

@varodrig: Closing this issue. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mpietrzy we are closing this issue given that the information was provided. Thanks @tarekabouzeid
Your website advertises NVIDIA's Triton Inference Server as one of the supported deployment platforms via the Seldon Core integration within Kubeflow. However, when I click on the documentation link, the content is out of date. Support for this deployment scheme is a critical part of our requirements evaluation. Please advise on any updated documentation.
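For context, the deployment scheme in question is typically declared as a SeldonDeployment resource that uses Seldon Core's prepackaged Triton server. A minimal sketch follows; the model URI, resource names, and replica count are hypothetical placeholders, and the exact field names and protocol value should be verified against the current Seldon Core documentation:

```yaml
# Hedged sketch, not from the issue: a SeldonDeployment that serves a
# model with NVIDIA Triton via Seldon Core's prepackaged server.
# modelUri and names are placeholders.
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: triton-example
spec:
  protocol: v2                         # Triton speaks the V2 inference protocol
  predictors:
    - name: default
      replicas: 1
      graph:
        name: model
        implementation: TRITON_SERVER  # Seldon's prepackaged Triton runtime
        modelUri: gs://example-bucket/triton-models
```

Applying a manifest like this (e.g. with `kubectl apply -f`) is what the advertised Kubeflow/Seldon integration ultimately drives, which is why current documentation for it matters here.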