
NVIDIA Triton Inference Server documentation #3291

Closed
mpietrzy opened this issue Jun 29, 2022 · 5 comments

@mpietrzy

Your website advertises NVIDIA's Triton Inference Server as one of the supported deployment platforms via the Seldon Core integration within Kubeflow. However, the documentation that the link points to is out of date. Support for this deployment scheme is a critical part of our requirements evaluation. Please advise where updated documentation can be found.

@tarekabouzeid (Member)

/assign

@varodrig (Contributor)

/close


@varodrig: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@varodrig (Contributor)

@mpietrzy, we are closing this issue since the requested information was provided. Thanks, @tarekabouzeid.
