Mirror of https://github.com/blakeblackshear/frigate.git, synced 2026-03-07 02:18:07 +01:00
* Use re-usable inference request to reduce CPU usage
* Share tensor
* Don't count performance
* Create openvino runner class
* Break apart onnx runner
* Add specific note about inability to use CUDA graphs for some models
* Adjust rknn to use RKNNRunner
* Use optimized runner
* Add support for non-complex models for CudaExecutionProvider
* Use core mask for rknn
* Correctly handle cuda input
* Cleanup
* Sort imports
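The first two bullets describe reusing one inference request and sharing its tensor across calls rather than allocating fresh buffers per frame. The sketch below is hypothetical and not Frigate's actual runner code: `ReusableRunner`, its buffer layout, and the placeholder "inference" step are all illustrative assumptions; real code would hand the shared tensor to OpenVINO or ONNX Runtime.

```python
import numpy as np


class ReusableRunner:
    """Hypothetical sketch of the reusable-inference-request idea:
    allocate the input tensor once, then copy each new frame into the
    same buffer instead of rebuilding tensors on every call."""

    def __init__(self, input_shape):
        # Preallocated tensor shared across all run() calls, so the
        # per-frame cost is a copy rather than an allocation.
        self._input = np.zeros(input_shape, dtype=np.float32)

    def run(self, frame):
        # Copy into the existing tensor; its identity never changes,
        # which is what lets a backend bind to it once up front.
        np.copyto(self._input, frame)
        # Placeholder "inference": a real runner would submit
        # self._input to the backend here and return its output.
        return float(self._input.sum())


runner = ReusableRunner((2, 2))
first = runner.run(np.ones((2, 2), dtype=np.float32))
```

Because the buffer identity is stable across calls, a backend can bind it to an inference request once at startup, which is the CPU saving the first bullet refers to.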