Docs updates (#17986)

* docs updates

* revamp hwaccel

* remove

* clarify

* fix

* more clarity

* clean up
Josh Hawkins 2025-05-01 08:17:35 -05:00 committed by GitHub
parent 2c9cd760a9
commit 08c087f221
13 changed files with 84 additions and 62 deletions

View File

@ -26,7 +26,7 @@ In both cases, a lightweight face landmark detection model is also used to align
The `small` model is optimized for efficiency and runs on the CPU; most CPUs should run the model efficiently.
The `large` model is optimized for accuracy; an integrated or discrete GPU is highly recommended.
The `large` model is optimized for accuracy; an integrated or discrete GPU is highly recommended. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
## Configuration
@ -133,6 +133,7 @@ No, using another face recognition service will interfere with Frigate's built i
### Does face recognition run on the recording stream?
Face recognition does not run on the recording stream; this would be suboptimal for several reasons:
1. The latency of accessing the recordings means notifications would not include the names of recognized people, because recognition would not complete until after the notification was sent.
2. The embedding models run on a fixed input size, so larger images would be scaled down to match it anyway.
3. Motion clarity is much more important than extra pixels; over-compression and motion blur are far more detrimental to results than resolution.

View File

@ -9,7 +9,7 @@ Some presets of FFmpeg args are provided by default to make the configuration ea
It is highly recommended to use hwaccel presets in the config. These presets not only replace the longer args, but they also give Frigate hints about what hardware is available, allowing Frigate to make other optimizations using the GPU, such as when encoding the birdseye restream or when scaling a stream to a size different from the native stream size.
See [the hwaccel docs](/configuration/hardware_acceleration.md) for more info on how to set up hwaccel for your GPU / iGPU.
See [the hwaccel docs](/configuration/hardware_acceleration_video.md) for more info on how to set up hwaccel for your GPU / iGPU.
| Preset | Usage | Other Notes |
| --------------------- | ------------------------------ | ----------------------------------------------------- |

View File

@ -0,0 +1,26 @@
---
id: hardware_acceleration_enrichments
title: Enrichments
---
# Enrichments
Some of Frigate's enrichments can use a discrete GPU for accelerated processing.
## Requirements
Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU and configure the enrichment according to its specific documentation.
- **AMD**
- ROCm will automatically be detected and used for enrichments in the `-rocm` Frigate image.
- **Intel**
- OpenVINO will automatically be detected and used for enrichments in the default Frigate image.
- **Nvidia**
- Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.
Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `-tensorrt` Docker image for enrichments and still use other dedicated hardware for object detection.
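As a sketch, a Docker Compose service selecting the `-tensorrt` image so Nvidia-accelerated enrichments are available (the GPU reservation shown is illustrative, and other required options such as volumes and ports are omitted):

```yaml
services:
  frigate:
    # The -tensorrt build enables GPU-accelerated enrichments on Nvidia hardware
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```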

View File

@ -1,15 +1,15 @@
---
id: hardware_acceleration
title: Hardware Acceleration
id: hardware_acceleration_video
title: Video Decoding
---
# Hardware Acceleration
# Video Decoding
It is highly recommended to use a GPU for hardware acceleration in Frigate. Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware accelerated decoding in ffmpeg.
It is highly recommended to use a GPU for hardware-accelerated video decoding in Frigate. Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware-accelerated decoding in ffmpeg.
Depending on your system, these parameters may not be compatible. More information on hardware-accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro
# Officially Supported
# Object Detection
## Raspberry Pi 3/4
@ -69,12 +69,12 @@ Or map in all the `/dev/video*` devices.
**Recommended hwaccel Preset**
| CPU Generation | Intel Driver | Recommended Preset | Notes |
| -------------- | ------------ | ------------------ | ----------------------------------- |
| gen1 - gen7 | i965 | preset-vaapi | qsv is not supported |
| gen8 - gen12 | iHD | preset-vaapi | preset-intel-qsv-* can also be used |
| gen13+ | iHD / Xe | preset-intel-qsv-* | |
| Intel Arc GPU | iHD / Xe | preset-intel-qsv-* | |
| CPU Generation | Intel Driver | Recommended Preset | Notes |
| -------------- | ------------ | ------------------- | ------------------------------------ |
| gen1 - gen7 | i965 | preset-vaapi | qsv is not supported |
| gen8 - gen12 | iHD | preset-vaapi | preset-intel-qsv-\* can also be used |
| gen13+ | iHD / Xe | preset-intel-qsv-\* | |
| Intel Arc GPU | iHD / Xe | preset-intel-qsv-\* | |
:::
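Applied globally, the recommended preset is a single line under `ffmpeg` (shown here with `preset-vaapi`; swap in the preset the table above recommends for your CPU generation):

```yaml
ffmpeg:
  # gen8-gen12 iGPU per the table above; gen13+ would use a preset-intel-qsv-* preset
  hwaccel_args: preset-vaapi
```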

View File

@ -7,13 +7,14 @@ Frigate can recognize license plates on vehicles and automatically add the detec
LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. However, LPR does not run on stationary vehicles.
When a plate is recognized, the recognized name is:
When a plate is recognized, the details are:
- Added as a `sub_label` (if known) or the `recognized_license_plate` field (if unknown) to a tracked object.
- Viewable in the Review Item Details pane in Review (sub labels).
- Viewable in the Tracked Object Details pane in Explore (sub labels and recognized license plates).
- Filterable through the More Filters menu in Explore.
- Published via the `frigate/events` MQTT topic as a `sub_label` (known) or `recognized_license_plate` (unknown) for the `car` tracked object.
- Published via the `frigate/tracked_object_update` MQTT topic with `name` (if known) and `plate`.
## Model Requirements
@ -68,10 +69,10 @@ Fine-tune the LPR feature using these optional parameters at the global level of
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
- **`device`**: Device to use to run license plate recognition models.
- Default: `CPU`
- This can be `CPU` or `GPU`. For users without a model that detects license plates natively, using a GPU may increase performance of the models, especially the YOLOv9 license plate detector model.
- This can be `CPU` or `GPU`. For users without a model that detects license plates natively, using a GPU may increase performance of the models, especially the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
- **`model_size`**: The size of the model used to detect text on plates.
- Default: `small`
- This can be `small` or `large`. The `large` model uses an enhanced text detector and is more accurate at finding text on plates but slower than the `small` model. For most users, the small model is recommended. For users in countries with multiple lines of text on plates, the large model is recommended. Note that using the large does not improve _text recognition_, but it may improve _text detection_.
- This can be `small` or `large`. The `large` model uses an enhanced text detector and is more accurate at finding text on plates but slower than the `small` model. For most users, the small model is recommended. For users in countries with multiple lines of text on plates, the large model is recommended. Note that using the large model does not improve _text recognition_, but it may improve _text detection_.
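Taken together, a minimal sketch of these global options (values are illustrative; `GPU` assumes supported hardware and a matching Frigate image, per the enrichments documentation linked above):

```yaml
lpr:
  enabled: True
  device: GPU        # assumes a supported GPU; default is CPU
  model_size: large  # for multi-line plates; most users should keep small
```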
### Recognition
@ -184,7 +185,7 @@ cameras:
ffmpeg: ... # add your streams
detect:
enabled: True
fps: 5 # increase to 10 if vehicles move quickly across your frame. Higher than 15 is unnecessary and is not recommended.
fps: 5 # increase to 10 if vehicles move quickly across your frame. Higher than 10 is unnecessary and is not recommended.
min_initialized: 2
width: 1920
height: 1080
@ -228,7 +229,7 @@ An example configuration for a dedicated LPR camera using the secondary pipeline
# LPR global configuration
lpr:
enabled: True
device: CPU # can also be GPU if available
device: CPU # can also be GPU if available and correct Docker image is used
detection_threshold: 0.7 # change if necessary
# Dedicated LPR camera configuration

View File

@ -484,7 +484,7 @@ frigate:
### Configuration Parameters
The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the Docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpus) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the Docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration_video.md#nvidia-gpus) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. The model path and dimensions used will depend on which model you have generated.
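For example, with multiple GPUs passed through, a detector pinned to the first GPU might look like this minimal sketch (model settings omitted):

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0  # GPU index as reported by nvidia-smi within the container
```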
@ -610,7 +610,7 @@ If the correct build is used for your GPU then the GPU will be detected and used
- **Nvidia**
- Nvidia GPUs will automatically be detected and used with the ONNX detector in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used with the ONNX detector in the `-tensorrt-jp(4/5)` Frigate image.
- Jetson devices will automatically be detected and used with the ONNX detector in the `-tensorrt-jp6` Frigate image.
:::
@ -659,7 +659,7 @@ YOLOv3, YOLOv4, YOLOv7, and [YOLOv9](https://github.com/WongKinYiu/yolov9) model
:::tip
The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv9 models, but may support other YOLO model architectures as well. See [the models section](#downloading-yolo-models) for more information on downloading YOLO models for use in Frigate.
:::

View File

@ -90,19 +90,7 @@ semantic_search:
If the correct build is used for your GPU and the `large` model is configured, then the GPU will be detected and used automatically.
**NOTE:** Object detection and Semantic Search are independent features. If you want to use your GPU with Semantic Search, you must choose the appropriate Frigate Docker image for your GPU.
- **AMD**
- ROCm will automatically be detected and used for Semantic Search in the `-rocm` Frigate image.
- **Intel**
- OpenVINO will automatically be detected and used for Semantic Search in the default Frigate image.
- **Nvidia**
- Nvidia GPUs will automatically be detected and used for Semantic Search in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used for Semantic Search in the `-tensorrt-jp(4/5)` Frigate image.
See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
:::
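For reference, a minimal sketch of enabling the GPU-capable configuration (per the note above, the GPU is detected and used only when the `large` model is set):

```yaml
semantic_search:
  enabled: True
  model_size: large  # the small model runs on CPU; large enables GPU use
```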

View File

@ -91,4 +91,4 @@ The `CODEOWNERS` file should be updated to include the `docker/board` along with
# Docs
At a minimum the `installation`, `object_detectors`, `hardware_acceleration`, and `ffmpeg-presets` docs should be updated (if applicable) to reflect the configuration of this community board.
At a minimum the `installation`, `object_detectors`, `hardware_acceleration_video`, and `ffmpeg-presets` docs should be updated (if applicable) to reflect the configuration of this community board.

View File

@ -38,6 +38,7 @@ Frigate supports multiple different detectors that work on different types of ha
**Most Hardware**
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration modules are available in M.2 format and with a HAT for RPi devices, offering a wide range of hardware compatibility.
- [Supports many model architectures](../../configuration/object_detectors#configuration)
- Runs best with tiny or small size models
@ -73,10 +74,10 @@ Frigate supports multiple different detectors that work on different types of ha
### Hailo-8
Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms, including the Raspberry Pi 5 with the PCIe HAT from the AI Kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.
**Default Model Configuration:**
- **Hailo-8L:** Default model is **YOLOv6n**.
- **Hailo-8:** Default model is **YOLOv6n**.
@ -90,6 +91,7 @@ In real-world deployments, even with multiple cameras running concurrently, Frig
### Google Coral TPU
Frigate supports both the USB and M.2 versions of the Google Coral.
- The USB version is compatible with the widest variety of hardware and does not require a driver on the host machine. However, it does lack the automatic throttling features of the other versions.
- The PCIe and M.2 versions require installation of a driver on the host. Follow the instructions for your version from https://coral.ai
@ -107,17 +109,17 @@ More information is available [in the detector docs](/configuration/object_detec
Inference speeds vary greatly depending on the CPU or GPU used; some known examples of GPU inference times are below:
| Name | MobileNetV2 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time | Notes |
| -------------------- | -------------------------- | ------------------------- | ------------------------- | -------------------------------------- |
| Intel HD 530 | 15 - 35 ms | | | Can only run one detector instance |
| Intel HD 620 | 15 - 25 ms | 320: ~ 35 ms | | |
| Intel HD 630 | ~ 15 ms | 320: ~ 30 ms | | |
| Intel UHD 730 | ~ 10 ms | 320: ~ 19 ms 640: ~ 54 ms | | |
| Intel UHD 770 | ~ 15 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
| Intel N100 | ~ 15 ms | 320: ~ 20 ms | | |
| Intel Iris XE | ~ 10 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
| Intel Arc A380 | ~ 6 ms | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | 320: ~ 8 ms | | |
| Name | MobileNetV2 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time | Notes |
| -------------- | -------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
| Intel HD 530 | 15 - 35 ms | | | Can only run one detector instance |
| Intel HD 620 | 15 - 25 ms | 320: ~ 35 ms | | |
| Intel HD 630 | ~ 15 ms | 320: ~ 30 ms | | |
| Intel UHD 730 | ~ 10 ms | 320: ~ 19 ms 640: ~ 54 ms | | |
| Intel UHD 770 | ~ 15 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
| Intel N100 | ~ 15 ms | 320: ~ 20 ms | | |
| Intel Iris XE | ~ 10 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
| Intel Arc A380 | ~ 6 ms | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | 320: ~ 8 ms | | |
### TensorRT - Nvidia GPU
@ -141,15 +143,15 @@ Inference speeds will vary greatly depending on the GPU and the model used.
With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector, Frigate can take advantage of many discrete AMD GPUs.
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time |
| --------------- | --------------------- | ------------------------- |
| AMD 780M | ~ 14 ms | 320: ~ 30 ms 640: ~ 60 ms |
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time |
| -------- | --------------------- | ------------------------- |
| AMD 780M | ~ 14 ms | 320: ~ 30 ms 640: ~ 60 ms |
## Community Supported Detectors
### Nvidia Jetson
Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
Inference speed will vary depending on the YOLO model, Jetson platform, and Jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but slightly increase inference time.
@ -163,11 +165,10 @@ Frigate supports hardware video processing on all Rockchip boards. However, hard
- RK3576
- RK3588
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time | YOLOx Inference Time |
| --------------- | --------------------- | --------------------------- | ------------------------- |
| rk3588 3 cores | tiny: ~ 35 ms | small: ~ 20 ms med: ~ 30 ms | nano: 14 ms tiny: 18 ms |
| rk3566 1 core | | small: ~ 96 ms | |
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time | YOLOx Inference Time |
| -------------- | --------------------- | --------------------------- | ----------------------- |
| rk3588 3 cores | tiny: ~ 35 ms | small: ~ 20 ms med: ~ 30 ms | nano: 14 ms tiny: 18 ms |
| rk3566 1 core | | small: ~ 96 ms | |
The inference time of an rk3588 with all 3 cores enabled is typically 25-30 ms for YOLO-NAS S.

View File

@ -183,7 +183,7 @@ or add these options to your `docker run` command:
#### Configuration
Next, you should configure [hardware object detection](/configuration/object_detectors#rockchip-platform) and [hardware video processing](/configuration/hardware_acceleration#rockchip-platform).
Next, you should configure [hardware object detection](/configuration/object_detectors#rockchip-platform) and [hardware video processing](/configuration/hardware_acceleration_video#rockchip-platform).
## Docker
@ -316,7 +316,8 @@ If you choose to run Frigate via LXC in Proxmox the setup can be complex so be p
:::
Suggestions include:
- For Intel-based hardware acceleration, to allow access to the `/dev/dri/renderD128` device with major number 226 and minor number 128, add the following lines to the `/etc/pve/lxc/<id>.conf` LXC configuration:
- `lxc.cgroup2.devices.allow: c 226:128 rwm`
- `lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file`
@ -407,7 +408,7 @@ mkdir -p /share/share_vol2/frigate/media
# Also replace the time zone value for 'TZ' in the sample command.
# Example command will create a docker container that uses at most 2 CPUs and 4G RAM.
# You may need to add "--env=LIBVA_DRIVER_NAME=i965 \" to the following docker run command if you
# have a certain CPU (e.g., J4125). See https://docs.frigate.video/configuration/hardware_acceleration.
# have a certain CPU (e.g., J4125). See https://docs.frigate.video/configuration/hardware_acceleration_video.
docker run \
--name=frigate \
--shm-size=256m \

View File

@ -162,7 +162,7 @@ FFmpeg arguments for other types of cameras can be found [here](../configuration
### Step 3: Configure hardware acceleration (recommended)
Now that you have a working camera configuration, you want to set up hardware acceleration to minimize the CPU required to decode your video streams. See the [hardware acceleration](../configuration/hardware_acceleration.md) config reference for examples applicable to your hardware.
Now that you have a working camera configuration, you want to set up hardware acceleration to minimize the CPU required to decode your video streams. See the [hardware acceleration](../configuration/hardware_acceleration_video.md) config reference for examples applicable to your hardware.
Here is an example configuration with hardware acceleration configured to work with most Intel processors with an integrated GPU using the [preset](../configuration/ffmpeg_presets.md):
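A minimal sketch of the hardware acceleration portion of that example (assuming an Intel iGPU with working VA-API drivers):

```yaml
ffmpeg:
  # preset-vaapi covers most Intel iGPUs; see the hardware acceleration docs
  hwaccel_args: preset-vaapi
```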
@ -303,6 +303,7 @@ By default, Frigate will retain video of all tracked objects for 10 days. The fu
### Step 7: Complete config
At this point you have a complete config with basic functionality.
- View [common configuration examples](../configuration/index.md#common-configuration-examples) for a list of common configuration examples.
- View [full config reference](../configuration/reference.md) for a complete list of configuration options.

View File

@ -34,7 +34,7 @@ Frigate generally [recommends cameras with configurable sub streams](/frigate/ha
To do this efficiently, the following setup is required:
1. A GPU or iGPU must be available to do the scaling.
2. [ffmpeg presets for hwaccel](/configuration/hardware_acceleration.md) must be used
2. [ffmpeg presets for hwaccel](/configuration/hardware_acceleration_video.md) must be used
3. Set the desired detection resolution for `detect -> width` and `detect -> height`.
When this is done correctly, the GPU will do the decoding and scaling which will result in a small increase in CPU usage but with better results.
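A hedged sketch combining these steps (camera name, stream URL, and resolution values are placeholders):

```yaml
cameras:
  front:                          # placeholder camera name
    ffmpeg:
      hwaccel_args: preset-vaapi  # step 2: hwaccel preset so the GPU decodes and scales
      inputs:
        - path: rtsp://camera.local/stream  # placeholder URL
          roles:
            - detect
    detect:
      width: 1280                 # step 3: desired detection resolution
      height: 720
```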

View File

@ -59,10 +59,13 @@ const sidebars: SidebarsConfig = {
"configuration/objects",
"configuration/stationary_objects",
],
"Hardware Acceleration": [
"configuration/hardware_acceleration_video",
"configuration/hardware_acceleration_enrichments",
],
"Extra Configuration": [
"configuration/authentication",
"configuration/notifications",
"configuration/hardware_acceleration",
"configuration/ffmpeg_presets",
"configuration/pwa",
"configuration/tls",