Mirror of https://github.com/blakeblackshear/frigate.git (synced 2025-09-23 17:52:05 +02:00)
* Use re-usable inference request to reduce CPU usage (sketched below this list)
* Share tensor
* Don't count performance
* Create openvino runner class
* Break apart onnx runner
* Add specific note about inability to use CUDA graphs for some models
* Adjust rknn to use RKNNRunner
* Use optimized runner
* Add support for non-complex models for CudaExecutionProvider
* Use core mask for rknn
* Correctly handle cuda input
* Cleanup
* Sort imports
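The first two items describe compiling a model once and reusing a single inference request (and its tensor buffers) across calls, rather than allocating a new request per inference. Below is a minimal sketch of that pattern using the OpenVINO Python API; the `OpenVINORunner` class name and its interface are illustrative assumptions, not Frigate's actual runner code.

```python
# Minimal sketch, assuming the OpenVINO Python API (openvino >= 2023.x).
# The OpenVINORunner class and its interface are illustrative, not Frigate's.
import numpy as np
import openvino as ov


class OpenVINORunner:
    """Compile a model once and reuse a single infer request for every call."""

    def __init__(self, model_path: str, device: str = "CPU") -> None:
        core = ov.Core()
        self.compiled = core.compile_model(core.read_model(model_path), device)
        # Creating the request once avoids re-allocating input/output buffers
        # on every inference, which is where the CPU savings come from.
        self.infer_request = self.compiled.create_infer_request()

    def run(self, inputs: dict[str, np.ndarray]) -> list[np.ndarray]:
        # infer() blocks until completion and reuses the request's tensors.
        self.infer_request.infer(inputs)
        return [
            self.infer_request.get_output_tensor(i).data
            for i in range(len(self.compiled.outputs))
        ]
```

A caller would construct the runner once at startup and invoke `run()` repeatedly; the buffers held inside the reused request are likely what the "Share tensor" item refers to.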
base_embedding.py
face_embedding.py
jina_v1_embedding.py
jina_v2_embedding.py
lpr_embedding.py