[New Port Request] Nvidia Triton Inference Server client #43186

Open
Thieum opened this issue Jan 9, 2025 · 0 comments
Labels
category:new-port (The issue is requesting a new library to be added; consider making a PR!)
info:good-first-issue (This issue would be a good issue to get one's feet wet in solving.)

Comments

Thieum (Contributor) commented Jan 9, 2025

Library name

Nvidia Triton Inference Server client

Library description

NVIDIA Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. NVIDIA ships an official C++ client library for it (see the usage sketch below).

Source repository URL

https://github.com/triton-inference-server/client

Project homepage (if different from the source repository)

https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html

Anything else that is useful to know when adding (such as optional features the library may have that should be included)

No response
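
As a rough sketch of what the port would let consumers write, here is a minimal gRPC liveness check modeled on the examples in the upstream repository; the endpoint localhost:8001 (Triton's default gRPC port) and the build setup are assumptions, not part of this request:

```cpp
// Minimal sketch: create a Triton gRPC client and check server liveness.
// Assumes the server is reachable at localhost:8001 (Triton's default gRPC port).
#include <iostream>
#include <memory>

#include "grpc_client.h"  // shipped with the Triton client library

namespace tc = triton::client;

int main() {
  std::unique_ptr<tc::InferenceServerGrpcClient> client;
  tc::Error err =
      tc::InferenceServerGrpcClient::Create(&client, "localhost:8001");
  if (!err.IsOk()) {
    std::cerr << "failed to create client: " << err << std::endl;
    return 1;
  }

  bool live = false;
  err = client->IsServerLive(&live);
  if (!err.IsOk()) {
    std::cerr << "liveness check failed: " << err << std::endl;
    return 1;
  }
  std::cout << "server live: " << std::boolalpha << live << std::endl;
  return 0;
}
```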

Thieum added the category:new-port and info:good-first-issue labels on Jan 9, 2025