* ROCm AMD/GPU based build and detector, WIP
* detectors/rocm: separate yolov8 postprocessing into its own function; fix box scaling; use cv2.dnn.blobFromImage for preprocessing; assert on required model parameters
* AMD/ROCm: add a couple more ultralytics models; comments
* docker/rocm: make imported model files readable by all
* docker/rocm: readme about running on AMD GPUs
* docker/rocm: updated README
* docker/rocm: updated README
* docker/rocm: updated README
* detectors/rocm: separated preprocessing functions into yolo_utils.py
* detector/plugins: added onnx cpu plugin
* docker/rocm: updated container with limited label sets
* example detectors view
* docker/rocm: updated README.md
* docker/rocm: update README.md
* docker/rocm: do not set HSA_OVERRIDE_GFX_VERSION at all for the general version as the empty value broke rocm
* detectors: simplified/optimized yolov8_postprocess
* detector/yolo_utils: indentation, remove unused variable
* detectors/rocm: default option to conserve cpu usage at the expense of latency
* detectors/yolo_utils: use nms to prefilter overlapping boxes if too many detected
* detectors/edgetpu_tfl: add support for yolov8
* util/download_models: script to download yolov8 model files
* docker/main: add download-models overlay into s6 startup
* detectors/rocm: assume models are in /config/model_cache/yolov8/
* docker/rocm: compile onnx files into mxr files at startup
* switch model download into bash script
* detectors/rocm: automatically override HSA_OVERRIDE_GFX_VERSION for a couple of known chipsets
* docs: rocm detector first notes
* typos
* describe builds (harakas temporary)
* docker/rocm: also build a version for gfx1100
* docker/rocm: use cp instead of tar
* docker/rocm: remove README as it is now in detector config
* frigate/detectors: renamed yolov8_preprocess->preprocess, pass input tensor element type
* docker/main: use newer openvino (2023.3.0)
* detectors: implement class aggregation
* update yolov8 model
* add openvino/yolov8 support for label aggregation
* docker: remove pointless s6/timeout-up files
* Revert "detectors: implement class aggregation"
This reverts commit dcfe6bbf6f.
* detectors/openvino: remove class aggregation
* detectors: increase yolov8 postprocessing score threshold to 0.5
* docker/rocm: separate rocm distributed files into its own build stage
* Update object_detectors.md
* updated CODEOWNERS file for rocm
* updated build names for documentation
* Revert "docker/main: use newer openvino (2023.3.0)"
This reverts commit dee95de908.
* reverted openvino detector
* reverted edgetpu detector
* removed any mention of edgetpu or openvino from the rocm docs
* Update docs/docs/configuration/object_detectors.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* renamed frigate.detectors.yolo_utils.py -> frigate.detectors.util.py
* clarified rocm example performance
* Improved wording and clarified text
* Mentioned rocm detector for AMD GPUs
* applied ruff formatting
* applied ruff suggested fixes
* docker/rocm: fix missing argument resulting in larger docker image sizes
* docs/configuration/object_detectors: fix links to yolov8 release files
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
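The rocm detector commits above revolve around preprocessing frames with cv2.dnn.blobFromImage, fixing the box rescaling, and using NMS to prefilter overlapping boxes when too many pass the score threshold. A minimal sketch of that flow for a standard ultralytics YOLOv8 output; the function names, shapes, and defaults are illustrative rather than the actual frigate.detectors code:

```python
import cv2
import numpy as np


def preprocess(frame, input_size=(320, 320), input_element_type=np.float32):
    # Resize, scale to [0, 1], swap BGR->RGB and emit NCHW in a single call.
    blob = cv2.dnn.blobFromImage(
        frame, scalefactor=1.0 / 255, size=input_size, swapRB=True
    )
    return blob.astype(input_element_type)


def postprocess_yolov8(output, model_w, model_h, score_threshold=0.5, max_boxes=20):
    # YOLOv8 output is (1, 4 + num_classes, num_anchors); transpose so each
    # row is one anchor: [cx, cy, w, h, class scores...].
    preds = np.squeeze(output).T
    scores = np.max(preds[:, 4:], axis=1)
    mask = scores > score_threshold
    preds, scores = preds[mask], scores[mask]
    class_ids = np.argmax(preds[:, 4:], axis=1)

    # Convert center boxes to top-left x/y, width, height for NMSBoxes.
    boxes = preds[:, :4].copy()
    boxes[:, 0] -= boxes[:, 2] / 2
    boxes[:, 1] -= boxes[:, 3] / 2

    # Prefilter overlapping boxes with NMS only when too many survive the
    # score threshold, so the common case stays cheap.
    if len(boxes) > max_boxes:
        keep = np.array(
            cv2.dnn.NMSBoxes(boxes.tolist(), scores.tolist(), score_threshold, 0.4)
        ).flatten()
        boxes, scores, class_ids = boxes[keep], scores[keep], class_ids[keep]

    # Rescale pixel coordinates to [0, 1] relative to the model input size.
    boxes[:, [0, 2]] /= model_w
    boxes[:, [1, 3]] /= model_h
    return boxes, scores, class_ids
```

cv2.dnn.blobFromImage performs the resize, scaling, BGR-to-RGB swap, and NCHW conversion in one call, which is what makes it attractive as a replacement for hand-rolled preprocessing.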
* support for other yolov models and config checks
* apply code formatting
* Information about core mask and inference speed
* update rknn postprocess and remove params
* update model selection
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* support rknn on all socs
* apply changes from review and fix post process bug
* apply code formatting
* update tip in object_detectors docs
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update to latest tensorrt (8.6.1) release
* Build trt libyolo_layer.so in container
* Update tensorrt_models script to convert models from the frigate container
* Fix typo in model script
* Fix paths to yolo lib and models folder
* Add S6 scripts to test and convert specified TensorRT models at startup.
Rearrange tensorrt files into a docker support folder.
* Update TensorRT documentation to reflect the new model conversion process and minimum HW support.
* Fix model_cache path to live in config directory
* Move tensorrt s6 files to the correct directory
* Fix issues in model generation script
* Disable global timeout for s6 services
* Add version folder to tensorrt model_cache path
* Include TensorRT version 8.5.3
* Add numpy requirement prior to removal of np.bool
* This TRT version uses a mixture of cuda dependencies
* Redirect stdout from noisy model conversion
* Add isort and ruff linter
Both linters are pretty common among modern python code bases.
The isort tool provides stable sorting and grouping, as well as pruning
of unused imports.
Ruff is a modern linter that is very fast due to being written in Rust.
It can detect many common issues in a python codebase.
Removes the pylint dev requirement, since ruff replaces it.
* treewide: fix issues detected by ruff
* treewide: fix bare except clauses
* .devcontainer: Set up isort
* treewide: optimize imports
* treewide: apply black
* treewide: make regex patterns raw strings
This is necessary for escape sequences to be properly recognized.
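The raw-string commit above matters because a backslash escape such as \d in a normal Python string is processed (or warned about) before the regex engine ever sees it; a raw string hands it through untouched. A quick illustration with a made-up pattern:

```python
import re

# With a normal string, "\d" is an invalid string escape that Python warns
# about today and will eventually reject; the raw string below reaches the
# regex engine exactly as written.
pattern = re.compile(r"camera\.(\d+)\.fps")

match = pattern.match("camera.2.fps")
assert match is not None and match.group(1) == "2"
```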
* Add Deepstack detector plugin with configurable API URL, timeout, and API key
* Update DeepStack plugin to recognize 'truck' as 'car' for label indexing
* Add debug logging to DeepStack plugin for better monitoring and troubleshooting
* Refactor DeepStack label loading from file to use merged labelmap
* Black format
* add documentation draft
* fix link to codeproject website
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
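The Deepstack plugin commits above describe posting each frame to an HTTP endpoint with a configurable API URL, timeout, and optional API key, then folding 'truck' into 'car' so both share a label index. A rough sketch of that request flow; the response fields follow the public DeepStack detection API, while the function shape and return format are illustrative:

```python
import cv2
import requests


def deepstack_detect(frame, api_url, api_key=None, timeout=10):
    # Encode the frame as JPEG and post it to the configured detection
    # endpoint, e.g. http://deepstack:5000/v1/vision/detection.
    _, jpeg = cv2.imencode(".jpg", frame)
    data = {"api_key": api_key} if api_key else {}
    response = requests.post(
        api_url,
        files={"image": jpeg.tobytes()},
        data=data,
        timeout=timeout,
    )
    response.raise_for_status()

    detections = []
    for pred in response.json().get("predictions", []):
        label = pred["label"]
        # Fold trucks into the 'car' label so both map onto the same index.
        if label == "truck":
            label = "car"
        detections.append(
            (
                label,
                pred["confidence"],
                (pred["y_min"], pred["x_min"], pred["y_max"], pred["x_max"]),
            )
        )
    return detections
```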
It supports the same entrypoints, since tflite is a small subset of the full tensorflow package.
This patch was created for downstream usage in nixpkgs, where we don't
have the tflite python package, but do have the full tensorflow package.
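The note above relies on tflite_runtime and the full tensorflow package exposing the same interpreter entrypoint, so the import can fall back when only tensorflow is installed (as in nixpkgs). A minimal sketch of that fallback, with an illustrative model path:

```python
try:
    # Prefer the lightweight tflite_runtime package when it is installed.
    from tflite_runtime.interpreter import Interpreter
except ModuleNotFoundError:
    # Fall back to the full tensorflow package, which ships the same
    # Interpreter class under tf.lite.
    import tensorflow as tf

    Interpreter = tf.lite.Interpreter

interpreter = Interpreter(model_path="model.tflite")  # illustrative path
interpreter.allocate_tensors()
```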
* Initial commit that adds YOLOv5 and YOLOv8 support for OpenVINO detector
* Fixed double inference bug with YOLOv5 and YOLOv8
* Modified documentation to mention YOLOv5 and YOLOv8
* Changes to pass lint checks
* Change minimum threshold to improve model performance
* Fix link
* Clean up YOLO post-processing
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Initial commit to enable Yolox models with OpenVINO in Frigate
* Fix ModelEnumtType import error in openvino.py
* Initial edit of the docs to include verbiage about yolox
* Initial edit of the docs to include verbiage about yolox
* Elaborate configuration and limitations in docs.
* Add capability to dynamically determine number of classes in yolox model
* Further refinements
* Removed unnecessary comments, improved documentation, addressed PR items
* Fixed lint formatting issues
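The dynamic class-count commit above follows from the yolox output layout: each prediction row is [cx, cy, w, h, objectness, one score per class], so the number of classes can be read straight off the output shape. A small sketch under that single assumption:

```python
import numpy as np


def num_classes_from_output(output: np.ndarray) -> int:
    # yolox emits (batch, num_anchors, 5 + num_classes): four box values and
    # one objectness score, followed by one score per class.
    return output.shape[-1] - 5


# e.g. an 80-class COCO model with 8400 anchor positions
dummy = np.zeros((1, 8400, 85), dtype=np.float32)
assert num_classes_from_output(dummy) == 80
```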
* Initial WIP dockerfile and scripts to add tensorrt support
* Add tensorRT detector
* WIP attempt to install TensorRT 8.5
* Updates to detector for cuda python library
* TensorRT Cuda library rework WIP
Does not run
* Fixes from rebase to detector factory
* Fix parsing output memory pointer
* Handle TensorRT logs with the python logger
* Use non-async interface and convert input data to float32. Detection runs without error.
* Make TensorRT a separate build from the base Frigate image.
* Add script and documentation for generating TRT Models
* Add support for TensorRT devcontainer
* Add labelmap to trt model script and docs. Cleanup of old scripts.
* Update detect to normalize input tensor using model input type
* Add config for selecting GPU. Fix Async inference. Update documentation.
* Update some CUDA libraries to clean up version warning
* Add CI stage to build TensorRT tag
* Add note in docs for image tag and model support
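The 'normalize input tensor using model input type' commit above comes down to scaling pixels to [0, 1] only when the engine expects floating-point input, and passing raw uint8 values through otherwise. A hedged sketch of that idea, not the detector's actual code:

```python
import numpy as np


def normalize_input(tensor: np.ndarray, input_dtype) -> np.ndarray:
    # Float engines expect values scaled to [0, 1]; quantized engines take
    # the raw 0-255 pixel values unchanged.
    if np.issubdtype(input_dtype, np.floating):
        return (tensor / 255.0).astype(input_dtype)
    return tensor.astype(input_dtype)


frame = np.random.randint(0, 255, (1, 3, 320, 320), dtype=np.uint8)
assert normalize_input(frame, np.float32).dtype == np.float32
assert normalize_input(frame, np.uint8).dtype == np.uint8
```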