Mirror of https://github.com/blakeblackshear/frigate.git — synced 2026-02-20 13:54:36 +01:00
* Use re-usable inference request to reduce CPU usage
* Share tensor
* Don't count performance
* Create openvino runner class
* Break apart onnx runner
* Add specific note about inability to use CUDA graphs for some models
* Adjust rknn to use RKNNRunner
* Use optimized runner
* Add support for non-complex models for CudaExecutionProvider
* Use core mask for rknn
* Correctly handle cuda input
* Cleanup
* Sort imports
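The pattern behind these changes — a per-backend runner class that creates its inference request once and re-uses it on every call, instead of re-allocating buffers per frame — can be sketched as follows. This is a minimal illustration, not Frigate's actual code: the class names (`BaseModelRunner`, `OpenVINORunner`) are hypothetical stand-ins, and the backend request is stubbed out rather than using a real OpenVINO/ONNX/RKNN API.

```python
from abc import ABC, abstractmethod


class BaseModelRunner(ABC):
    """Common interface so detector code doesn't care which backend runs the model."""

    @abstractmethod
    def run(self, inputs: dict) -> list:
        ...


class StubInferRequest:
    """Stand-in for a backend inference request (e.g. an OpenVINO InferRequest).

    Re-using one request object avoids re-allocating input/output buffers
    on every frame, which is the CPU-usage win the commit describes.
    """

    def __init__(self) -> None:
        self.calls = 0

    def infer(self, inputs: dict) -> list:
        self.calls += 1
        # Pretend inference: just sum each input tensor.
        return [sum(v) for v in inputs.values()]


class OpenVINORunner(BaseModelRunner):
    """Hypothetical runner: the request is created once, then re-used."""

    def __init__(self) -> None:
        self._request = StubInferRequest()  # created once, not per call

    def run(self, inputs: dict) -> list:
        return self._request.infer(inputs)


runner = OpenVINORunner()
results = [runner.run({"image": [1.0, 2.0, 3.0]}) for _ in range(3)]
```

Breaking the monolithic ONNX runner apart this way also lets each backend keep its own quirks local — e.g. the note above about some models being incompatible with CUDA graphs would live only in the CUDA runner.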
| Name |
|---|
| .. |
| onnx |
| __init__.py |
| embeddings.py |
| maintainer.py |
| util.py |