Add support for YoloV9 to OpenVINO (#16495)

* Add support for yolov9 to OpenVINO

* Cleanup detector docs

* Fix link
Nicolas Mowen — 2025-02-11 11:23:19 -07:00
commit 0f0b2687af (parent a3ede3cf8a)
2 changed files with 70 additions and 35 deletions


@@ -201,15 +201,7 @@ This detector also supports YOLOX. Frigate does not come with any YOLOX models p
#### YOLO-NAS
-[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
-:::warning
-The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
-:::
-The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.
+[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
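The configuration block referenced here sits outside this hunk's context. For reference, a minimal sketch of an OpenVINO YOLO-NAS setup; the 320x320 size, pixel format, and model path are illustrative assumptions based on the notebook defaults, not values taken from this diff:

```yaml
detectors:
  ov:
    type: openvino
    device: GPU

model:
  model_type: yolonas
  width: 320 # <--- should match the export size used in the notebook (assumed)
  height: 320 # <--- should match the export size used in the notebook (assumed)
  input_pixel_format: bgr # assumption
  path: /config/model_cache/yolo_nas_s.onnx # hypothetical path
  labelmap_path: /labelmap/coco-80.txt
```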
@@ -231,6 +223,36 @@ model:
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
+#### YOLOv9
+[YOLOv9](https://github.com/MultimediaTechLab/YOLO) models are supported, but not included by default.
+:::tip
+The YOLOv9 detector is designed for YOLOv9 models, but may support other YOLO model architectures as well.
+:::
+After placing the downloaded onnx model in your config folder, you can use the following configuration:
+```yaml
+detectors:
+  ov:
+    type: openvino
+    device: GPU
+
+model:
+  model_type: yolov9
+  width: 640 # <--- should match the imgsize set during model export
+  height: 640 # <--- should match the imgsize set during model export
+  input_tensor: nchw
+  input_dtype: float
+  path: /config/model_cache/yolov9-t.onnx
+  labelmap_path: /labelmap/coco-80.txt
+```
+Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## NVidia TensorRT Detector
Nvidia GPUs may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt` tag suffix, e.g. `ghcr.io/blakeblackshear/frigate:stable-tensorrt`. This detector is designed to work with Yolo models for object detection.
@@ -265,6 +287,8 @@ If your GPU does not support FP16 operations, you can pass the environment varia
Specific models can be selected by passing an environment variable to the `docker run` command or in your `docker-compose.yml` file. Use the form `-e YOLO_MODELS=yolov4-416,yolov4-tiny-416` to select one or more model names. The models available are shown below.
+<details>
+<summary>Available Models</summary>
```
yolov3-288
yolov3-416
@@ -293,6 +317,7 @@ yolov7-320
yolov7x-640
yolov7x-320
```
+</details>
An example `docker-compose.yml` fragment that converts the `yolov4-608` and `yolov7x-640` models for a Pascal card would look something like this:
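The compose fragment itself is outside this hunk. A minimal sketch of the idea, assuming the stock `-tensorrt` image; the `USE_FP16` variable name is an assumption based on the FP16 note above:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    environment:
      - YOLO_MODELS=yolov4-608,yolov7x-640 # models converted at startup
      - USE_FP16=false # assumed variable; Pascal cards lack fast FP16
```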
@@ -420,15 +445,7 @@ There is no default model provided, the following formats are supported:
#### YOLO-NAS
-[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
-:::warning
-The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
-:::
-The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.
+[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
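The configuration example is again outside the hunk. It would differ from the OpenVINO YOLO-NAS sketch above mainly in the detector block; the `onnx` type key here is an assumption for this section of the docs:

```yaml
detectors:
  onnx:
    type: onnx # assumed detector type for this section
```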
@@ -490,15 +507,7 @@ There is no default model provided, the following formats are supported:
#### YOLO-NAS
-[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
-:::warning
-The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
-:::
-The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.
+[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
@@ -716,4 +725,24 @@ Explanation of the parameters:
- `soc`: the SoC this model was built for (e.g. "rk3588")
- `tk_version`: Version of `rknn-toolkit2` (e.g. "2.3.0")
- **example**: Specifying `output_name = "frigate-{quant}-{input_basename}-{soc}-v{tk_version}"` could result in a model called `frigate-i8-my_model-rk3588-v2.3.0.rknn`.
-- `config`: Configuration passed to `rknn-toolkit2` for model conversion. For an explanation of all available parameters have a look at section "2.2. Model configuration" of [this manual](https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.0/03_Rockchip_RKNPU_API_Reference_RKNN_Toolkit2_V2.3.0_EN.pdf).
+- `config`: Configuration passed to `rknn-toolkit2` for model conversion. For an explanation of all available parameters, have a look at section "2.2. Model configuration" of [this manual](https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.0/03_Rockchip_RKNPU_API_Reference_RKNN_Toolkit2_V2.3.0_EN.pdf).
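To illustrate how these parameters combine, a hypothetical conversion config sketch; the `quantization` key and the specific `config` values are assumptions for illustration, though `mean_values`/`std_values` are documented `rknn-toolkit2` parameters (see section "2.2. Model configuration" of the manual above):

```yaml
soc: ["rk3566", "rk3588"] # SoCs to build the model for
quantization: false # assumed switch behind the {quant} placeholder
output_name: "frigate-{quant}-{input_basename}-{soc}-v{tk_version}"
config: # passed through to rknn-toolkit2 model configuration
  mean_values: [[0, 0, 0]]
  std_values: [[255, 255, 255]]
```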
+# Models
+Some model types are not included in Frigate by default.
+## Downloading Models
+Here are some tips for getting different model types:
+### Downloading YOLO-NAS Model
+You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
+:::warning
+The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
+:::
+The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.
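If the export size is changed to 640x640, the model dimensions in the Frigate config need to follow; a minimal sketch using the same keys as the detector examples above:

```yaml
model:
  width: 640 # must match the size exported by the notebook
  height: 640
```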


@@ -9,6 +9,7 @@ from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig, ModelTypeEnum
+from frigate.util.model import post_process_yolov9
logger = logging.getLogger(__name__)
@@ -22,7 +23,12 @@ class OvDetectorConfig(BaseDetectorConfig):
class OvDetector(DetectionApi):
    type_key = DETECTOR_KEY
-    supported_models = [ModelTypeEnum.ssd, ModelTypeEnum.yolonas, ModelTypeEnum.yolox]
+    supported_models = [
+        ModelTypeEnum.ssd,
+        ModelTypeEnum.yolonas,
+        ModelTypeEnum.yolov9,
+        ModelTypeEnum.yolox,
+    ]
    def __init__(self, detector_config: OvDetectorConfig):
        self.ov_core = ov.Core()
@@ -160,8 +166,7 @@
        if self.model_invalid:
            return detections
-        if self.ov_model_type == ModelTypeEnum.ssd:
+        elif self.ov_model_type == ModelTypeEnum.ssd:
            results = infer_request.get_output_tensor(0).data[0][0]
            for i, (_, class_id, score, xmin, ymin, xmax, ymax) in enumerate(results):
@@ -176,8 +181,7 @@
                    xmax,
                ]
            return detections
-        if self.ov_model_type == ModelTypeEnum.yolonas:
+        elif self.ov_model_type == ModelTypeEnum.yolonas:
            predictions = infer_request.get_output_tensor(0).data
            for i, prediction in enumerate(predictions):
@@ -196,8 +200,10 @@
                    x_max / self.w,
                ]
            return detections
-        if self.ov_model_type == ModelTypeEnum.yolox:
+        elif self.ov_model_type == ModelTypeEnum.yolov9:
+            out_tensor = infer_request.get_output_tensor(0).data
+            return post_process_yolov9(out_tensor, self.w, self.h)
+        elif self.ov_model_type == ModelTypeEnum.yolox:
            out_tensor = infer_request.get_output_tensor()
            # [x, y, h, w, box_score, class_no_1, ..., class_no_80],
            results = out_tensor.data