Capstone project for the course "C++ Developer. Professional".
An asynchronous, multithreaded inference server built on Boost.Beast/Boost.Asio. It loads the pre-trained UltraFace ONNX face-detection model into a TensorRT inference engine (the TensorRT samples were used as a base) and streams frames with detections as Motion JPEG over HTTP. The resulting video can be viewed in an ordinary browser; multiple simultaneous requests are supported.
The project presentation (in Russian) can be found here.
The video demonstration can be found here.
foghegehog/inference-server