blakeblackshear.frigate/frigate
Nicolas Mowen 81d7c47129 Optimize OpenVINO and ONNX Model Runners (#20063)
* Use a reusable inference request to reduce CPU usage

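The first bullet names the key optimization: allocate one inference request up front and reuse it for every frame, rather than creating a new request per inference. A minimal sketch of that pattern, using a stand-in backend (`FakeCompiledModel` and `FakeInferRequest` are hypothetical stubs; the real code targets OpenVINO's `compiled_model.create_infer_request()`):

```python
class FakeInferRequest:
    """Stand-in for an OpenVINO InferRequest."""
    def infer(self, inputs):
        # A real request would run the network; here we just echo the input.
        return {"output": inputs}

class FakeCompiledModel:
    """Stand-in for an OpenVINO compiled model; counts request allocations."""
    def __init__(self):
        self.requests_created = 0

    def create_infer_request(self):
        self.requests_created += 1
        return FakeInferRequest()

class ReusableRunner:
    """Create one inference request at init and reuse it for every call,
    instead of allocating a request per inference -- the CPU-usage win
    the commit message refers to."""
    def __init__(self, compiled_model):
        self.compiled_model = compiled_model
        self.request = compiled_model.create_infer_request()

    def run(self, tensor):
        return self.request.infer(tensor)

model = FakeCompiledModel()
runner = ReusableRunner(model)
for frame in range(100):
    runner.run({"frame": frame})
print(model.requests_created)  # → 1: a single request served all 100 calls
```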
* Share tensor

* Don't count performance

* Create openvino runner class

* Break apart ONNX runner

* Add specific note about inability to use CUDA graphs for some models

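The note above reflects a real constraint of ONNX Runtime's `CUDAExecutionProvider`: CUDA graph capture (the `enable_cuda_graph` provider option) only works for models with a static execution pattern, so it must be disabled for models with dynamic shapes or unsupported ops. A hedged sketch of gating it per model (`build_providers` and the `complex_model` flag are hypothetical illustrations, not Frigate's actual helper):

```python
def build_providers(complex_model: bool):
    """Build an onnxruntime-style providers list. CUDA graphs require a
    static execution pattern, so "complex" models (dynamic shapes,
    unsupported ops) must skip graph capture."""
    cuda_options = {"arena_extend_strategy": "kSameAsRequested"}
    if not complex_model:
        # Safe to capture: replaying a captured CUDA graph avoids
        # per-inference kernel-launch overhead on the CPU side.
        cuda_options["enable_cuda_graph"] = "1"
    return [("CUDAExecutionProvider", cuda_options), "CPUExecutionProvider"]

simple_providers = build_providers(complex_model=False)
complex_providers = build_providers(complex_model=True)
print("enable_cuda_graph" in simple_providers[0][1])   # → True
print("enable_cuda_graph" in complex_providers[0][1])  # → False
```

The resulting list would be passed as the `providers` argument to `onnxruntime.InferenceSession`.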
* Adjust RKNN to use RKNNRunner

* Use optimized runner

* Add support for non-complex models for CudaExecutionProvider

* Use core mask for RKNN

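On RK3588-class SoCs the NPU has three cores, and rknn-toolkit-lite selects them with a bitmask passed as `init_runtime(core_mask=...)`. The constants below mirror that bitmask scheme; the `select_core_mask` helper is a hypothetical illustration of pinning each detector instance to its own core:

```python
# Bitmask constants mirroring rknn-toolkit-lite's NPU core selection.
NPU_CORE_0 = 0b001
NPU_CORE_1 = 0b010
NPU_CORE_2 = 0b100
NPU_CORE_0_1_2 = NPU_CORE_0 | NPU_CORE_1 | NPU_CORE_2

def select_core_mask(detector_index: int, num_cores: int = 3) -> int:
    """Assign each detector instance one NPU core, round-robin, so that
    multiple runners don't contend for the same core."""
    return 1 << (detector_index % num_cores)

masks = [select_core_mask(i) for i in range(4)]
print(masks)  # → [1, 2, 4, 1]
```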
* Correctly handle CUDA input

* Cleanup

* Sort imports
2025-09-14 06:22:22 -06:00