* Cleanup ONNX detector
* Fix
* Fix classification cropping
* Deprioritize openvino
* Send model type
* Use model type to decide if model can use full optimization
* Cleanup
* Cleanup
* Use OpenVINO directly to detect if devices are available (see the device detection sketch after this list)
* Cleanup
* Update OpenVINO
* Cleanup
* Don't try to use OpenVINO when CPU is set as device
* Catch case where input tensor can't be pre-defined
* Cleanup
* Use reusable inference request to reduce CPU usage (see the runner sketch after this list)
* Share tensor
* Don't collect performance counters
* Create OpenVINO runner class
* Break apart ONNX runner
* Add specific note about inability to use CUDA graphs for some models (see the CUDA graph sketch after this list)
* Adjust RKNN to use RKNNRunner
* Use optimized runner
* Add support for non-complex models for CUDAExecutionProvider
* Use core mask for RKNN (see the core mask sketch after this list)
* Correctly handle CUDA input
* Cleanup
* Sort imports
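
Short sketches of the main techniques touched by these commits follow. First, device detection: a minimal sketch of asking OpenVINO itself which devices exist, and of skipping OpenVINO entirely when the detector is pinned to CPU. The `get_openvino_device` helper is hypothetical; `ov.Core().available_devices` is the real OpenVINO Python API.

```python
import openvino as ov

def get_openvino_device(requested: str) -> str | None:
    """Return a usable OpenVINO device name, or None to skip OpenVINO."""
    # When the detector is pinned to CPU there is no point routing
    # inference through OpenVINO, so bail out before querying devices.
    if requested == "CPU":
        return None
    # Ask OpenVINO itself which devices exist instead of guessing from
    # the host environment; available_devices returns names such as
    # "CPU", "GPU", or "NPU".
    if requested in ov.Core().available_devices:
        return requested
    return None
```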
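Second, the reusable inference request: a sketch combining the shared, pre-allocated input tensor with the fallback for models whose input tensor can't be pre-defined (e.g. dynamic input shapes). `OpenVINORunner` is a hypothetical stand-in for the runner class added here, and the float32 input dtype is an assumption.

```python
import numpy as np
import openvino as ov

class OpenVINORunner:
    """Hypothetical runner that reuses a single inference request."""

    def __init__(self, model_path: str, device: str) -> None:
        core = ov.Core()
        compiled = core.compile_model(model_path, device)
        # One InferRequest is created up front and reused for every frame,
        # avoiding the per-call setup cost of invoking the compiled model.
        self.request = compiled.create_infer_request()
        try:
            # Pre-allocate the input tensor once and share it across calls;
            # each inference then only copies new pixel data into it.
            # Assumes a float32 input; real code would read the model's
            # element type.
            shape = list(compiled.input(0).shape)
            self.tensor = ov.Tensor(np.zeros(shape, dtype=np.float32))
            self.request.set_input_tensor(self.tensor)
        except RuntimeError:
            # Models with dynamic input shapes can't have the tensor
            # pre-defined; fall back to passing the input per call.
            self.tensor = None

    def infer(self, frame: np.ndarray) -> np.ndarray:
        if self.tensor is not None:
            self.tensor.data[:] = frame
            self.request.infer()
        else:
            self.request.infer([frame])
        return self.request.get_output_tensor().data
```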
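Third, CUDA graphs: a sketch of gating ONNX Runtime's `enable_cuda_graph` provider option on model complexity. `build_cuda_session` and the `model_is_complex` flag are hypothetical; the actual criteria for which models qualify are not reproduced here.

```python
import onnxruntime as ort

def build_cuda_session(
    model_path: str, model_is_complex: bool
) -> ort.InferenceSession:
    """Build a session, enabling CUDA graph capture only when safe."""
    cuda_options: dict[str, str] = {"device_id": "0"}
    # CUDA graphs replay a fixed sequence of kernel launches, so models
    # with dynamic shapes or data-dependent control flow can't use them
    # and must run without graph capture.
    if not model_is_complex:
        cuda_options["enable_cuda_graph"] = "1"
    return ort.InferenceSession(
        model_path,
        providers=[
            ("CUDAExecutionProvider", cuda_options),
            "CPUExecutionProvider",
        ],
    )
```

When graph capture is enabled, ONNX Runtime also expects input and output buffers to stay at fixed device addresses (typically via I/O binding), which is presumably what the CUDA input handling fix relates to.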
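Finally, the RKNN core mask: a sketch using rknn-toolkit-lite2's `init_runtime(core_mask=...)` parameter. The `load_rknn_model` helper is hypothetical.

```python
from rknnlite.api import RKNNLite

def load_rknn_model(
    model_path: str, core_mask: int = RKNNLite.NPU_CORE_AUTO
) -> RKNNLite:
    """Load an .rknn model and pin it to specific NPU cores."""
    rknn = RKNNLite()
    if rknn.load_rknn(model_path) != 0:
        raise RuntimeError(f"failed to load {model_path}")
    # core_mask chooses which NPU cores this runtime may use, e.g.
    # RKNNLite.NPU_CORE_0 for a single core or NPU_CORE_0_1_2 to spread
    # work across all three cores on an RK3588.
    if rknn.init_runtime(core_mask=core_mask) != 0:
        raise RuntimeError("failed to init RKNN runtime")
    return rknn
```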