Asynchronous multithreaded inference server on TensorRT and Boost.Beast/Boost.Asio.

foghegehog/inference-server

TensorRT inference server

Capstone project for the course "C++ Developer. Professional".
An asynchronous, multithreaded inference server built on Boost.Beast/Boost.Asio. It loads the pre-trained UltraFace ONNX face detection model into a TensorRT inference engine (the TensorRT samples were used as a base) and streams frames with detections as Motion JPEG over HTTP. The resulting video can be viewed in an ordinary browser; multiple simultaneous requests are supported.
The project presentation (in Russian) can be found here.
The video demonstration can be found here.
