More inference speed updates (#19947)

* More inference speed updates

* Update hardware.md

* Update hardware.md

* Update index.md

* More inference speeds

* Update home-assistant.md

* Update object_detectors.md

* Update first_model.md
Nicolas Mowen 2025-09-08 06:43:04 -06:00 committed by GitHub
parent 751de141d5
commit c5ed95ec52
5 changed files with 42 additions and 13 deletions


@@ -1040,7 +1040,7 @@ COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
 WORKDIR /yolov9
 ADD https://github.com/WongKinYiu/yolov9.git .
 RUN uv pip install --system -r requirements.txt
-RUN uv pip install --system onnx onnxruntime onnx-simplifier>=0.4.1
+RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1
 ARG MODEL_SIZE
 ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt
 RUN sed -i "s/ckpt = torch.load(attempt_download(w), map_location='cpu')/ckpt = torch.load(attempt_download(w), map_location='cpu', weights_only=False)/g" models/experimental.py


@@ -133,14 +133,16 @@ More information is available [in the detector docs](/configuration/object_detec
 Inference speeds vary greatly depending on the CPU or GPU used, some known examples of GPU inference times are below:
 
 | Name           | MobileNetV2 Inference Time | YOLOv9                                  | YOLO-NAS Inference Time   | RF-DETR Inference Time | Notes                              |
-| -------------- | -------------------------- | -------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
+| -------------- | -------------------------- | --------------------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
 | Intel HD 530   | 15 - 35 ms                 |                                         |                           |                        | Can only run one detector instance |
 | Intel HD 620   | 15 - 25 ms                 |                                         | 320: ~ 35 ms              |                        |                                    |
 | Intel HD 630   | ~ 15 ms                    |                                         | 320: ~ 30 ms              |                        |                                    |
 | Intel UHD 730  | ~ 10 ms                    |                                         | 320: ~ 19 ms 640: ~ 54 ms |                        |                                    |
-| Intel UHD 770  | ~ 15 ms                    | t-320: 24: ms s-320: 30 ms | 320: ~ 20 ms 640: ~ 46 ms |                        |                                    |
+| Intel UHD 770  | ~ 15 ms                    | t-320: 24 ms s-320: 30 ms s-640: 45 ms  | 320: ~ 20 ms 640: ~ 46 ms |                        |                                    |
 | Intel N100     | ~ 15 ms                    | s-320: 30 ms                            | 320: ~ 25 ms              |                        | Can only run one detector instance |
-| Intel Iris XE  | ~ 10 ms                    |                            | 320: ~ 18 ms 640: ~ 50 ms |                        |                                    |
+| Intel N150     | ~ 15 ms                    | t-320: 16 ms s-320: 24 ms               |                           |                        |                                    |
+| Intel Iris XE  | ~ 10 ms                    | s-320: 12 ms s-640: 30 ms               | 320: ~ 18 ms 640: ~ 50 ms |                        |                                    |
+| Intel Arc A310 |                            | s-320: 9 ms                             |                           |                        |                                    |
 | Intel Arc A380 | ~ 6 ms                     |                                         | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms  |                                    |
 | Intel Arc A750 | ~ 4 ms                     |                                         | 320: ~ 8 ms               |                        |                                    |
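
To make the shorthand in these rows concrete, here is a minimal, hedged sketch of an OpenVINO detector paired with a YOLOv9 "s" model at 320x320 (what the "s-320" style entries above refer to). The detector name and model path are placeholders, and the remaining model settings should be taken from the object detector docs rather than from this sketch.

```
detectors:
  ov:
    type: openvino
    device: GPU  # run inference on the Intel GPU rows benchmarked above

model:
  # "s-320" = YOLOv9 "s" variant exported at a 320x320 input resolution
  width: 320
  height: 320
  path: /config/model_cache/yolov9-s-320.onnx  # placeholder path to an exported model
```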
@@ -169,6 +171,7 @@ Inference speeds will vary greatly depending on the GPU and the model used.
 | Name            | YOLOv9 Inference Time | YOLO-NAS Inference Time   | RF-DETR Inference Time |
 | --------------- | --------------------- | ------------------------- | ---------------------- |
+| GTX 1070        | s-320: 16 ms          | 320: 14 ms                |                        |
 | RTX 3050        | t-320: 15 ms          | 320: ~ 10 ms 640: ~ 16 ms | Nano-320: ~ 12 ms      |
 | RTX 3070        | t-320: 11 ms          | 320: ~ 8 ms 640: ~ 14 ms  | Nano-320: ~ 9 ms       |
 | RTX A4000       |                       | 320: ~ 15 ms              |                        |


@@ -185,6 +185,26 @@ For clips to be castable to media devices, audio is required and may need to be
 <a name="api"></a>
 
+## Camera API
+
+To disable a camera dynamically
+
+```
+action: camera.turn_off
+data: {}
+target:
+  entity_id: camera.back_deck_cam # your Frigate camera entity ID
+```
+
+To enable a camera that has been disabled dynamically
+
+```
+action: camera.turn_on
+data: {}
+target:
+  entity_id: camera.back_deck_cam # your Frigate camera entity ID
+```
+
 ## Notification API
 
 Many people do not want to expose Frigate to the web, so the integration creates some public API endpoints that can be used for notifications.
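
As a usage sketch for the Camera API additions above, the two service calls can be wired into a Home Assistant automation, for example to disable a camera overnight and re-enable it in the morning. The automation below is hypothetical; the aliases, times, and entity ID are placeholders.

```
automation:
  - alias: "Disable back deck camera overnight"
    triggers:
      - trigger: time
        at: "23:00:00"
    actions:
      - action: camera.turn_off
        target:
          entity_id: camera.back_deck_cam  # your Frigate camera entity ID
  - alias: "Re-enable back deck camera in the morning"
    triggers:
      - trigger: time
        at: "07:00:00"
    actions:
      - action: camera.turn_on
        target:
          entity_id: camera.back_deck_cam
```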


@@ -34,6 +34,12 @@ Model IDs are not secret values and can be shared freely. Access to your model i
 :::
 
+:::tip
+
+When setting the plus model ID, all other fields should be removed, as these are configured automatically by the Frigate+ model config.
+
+:::
+
 ## Step 4: Adjust your object filters for higher scores
 
 Frigate+ models generally have much higher scores than the default model provided in Frigate. You will likely need to increase your `threshold` and `min_score` values. Here is an example of how these values can be refined, but you should expect these to evolve as your model improves. For more information about how `threshold` and `min_score` are related, see the docs on [object filters](../configuration/object_filters.md#object-scores).
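
To illustrate the new tip, a Frigate+ model is referenced by its ID alone. Below is a minimal sketch of the intended shape of the `model` section, with a placeholder standing in for the real model ID.

```
model:
  # Only the Frigate+ model reference is set here; width, height, labelmap, and the
  # other model fields are filled in automatically from the Frigate+ model config.
  path: plus://<your_model_id>  # placeholder, replace with the ID copied from Frigate+
```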


@@ -51,7 +51,7 @@ Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVi
 | [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu`  | `mobiledet` |
 | [Intel](/configuration/object_detectors.md#openvino-detector)                    | `openvino` | `yolov9`    |
 | [NVidia GPU](/configuration/object_detectors#onnx)                               | `onnx`     | `yolov9`    |
-| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector)             | `rocm`     | `yolov9`    |
+| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector)             | `onnx`     | `yolov9`    |
 | [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8)                | `hailo8l`  | `yolov9`    |
 | [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\*              | `rknn`     | `yolov9`    |
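
The corrected AMD row above means ROCm GPUs are configured with the generic ONNX detector type rather than a dedicated `rocm` type. A minimal, hedged sketch of that detector entry follows; the detector name is arbitrary, and a ROCm-capable Frigate build is assumed to supply the GPU runtime.

```
detectors:
  amd_gpu:      # arbitrary name for this detector
    type: onnx  # assumes a ROCm-enabled Frigate image provides the GPU execution provider
```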