Merge remote-tracking branch 'origin/master' into dev

Blake Blackshear 2024-01-26 07:09:28 -06:00
commit 61d285ba13
3 changed files with 13 additions and 11 deletions


@@ -76,8 +76,8 @@ Output args presets help make the config more readable and handle use cases for
| Preset | Usage | Other Notes |
| -------------------------------- | --------------------------------- | --------------------------------------------- |
| preset-record-generic | Record WITHOUT audio | This is the default when nothing is specified |
- | preset-record-generic-audio-aac  | Record WITH aac audio             | Use this to enable audio in recordings        |
- | preset-record-generic-audio-copy | Record WITH original audio        | Use this to enable audio in recordings        |
+ | preset-record-generic-audio-aac  | Record WITH transcoded aac audio  | Use this to transcode to aac audio. If your source is already aac, use preset-record-generic-audio-copy instead to avoid re-encoding |
| preset-record-mjpeg | Record an mjpeg stream | Recommend restreaming mjpeg stream instead |
| preset-record-jpeg | Record live jpeg | Recommend restreaming live jpeg instead |
| preset-record-ubiquiti | Record ubiquiti stream with audio | Recordings with ubiquiti non-standard audio |
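
To apply one of these presets, set it as the record output args for a camera. A minimal sketch, assuming a camera named `back`:

```yaml
cameras:
  back:
    ffmpeg:
      output_args:
        # use the aac preset so recordings include transcoded audio
        record: preset-record-generic-audio-aac
```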


@@ -176,7 +176,7 @@ volumes:
## NVidia TensorRT Detector
- NVidia GPUs may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt` tag suffix. This detector is designed to work with Yolo models for object detection.
+ NVidia GPUs may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt` tag suffix, e.g. `ghcr.io/blakeblackshear/frigate:stable-tensorrt`. This detector is designed to work with Yolo models for object detection.
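
For instance, a sketch of pulling that image in a compose file (the service layout is assumed):

```yaml
services:
  frigate:
    # the -tensorrt suffix bundles the TensorRT libraries
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
```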
### Minimum Hardware Support
@@ -198,7 +198,7 @@ There are improved capabilities in newer GPU architectures that TensorRT can ben
Models used for TensorRT must be preprocessed on the same hardware platform that they will run on. This means that each user must run additional setup to generate a model file for the TensorRT library. A script is included that will build several common models.
The Frigate image will generate model files during startup if the specified model is not found. Processed models are stored in the `/config/model_cache` folder. Typically the `/config` path is already mapped to a directory on the host, so `model_cache` does not need to be mapped separately unless the user wants to store it in a different location on the host.
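
A typical mapping, sketched for a compose setup (the host path is illustrative):

```yaml
services:
  frigate:
    volumes:
      # model_cache lives under /config, so this one mapping covers it
      - /path/to/your/config:/config
```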
By default, the `yolov7-320` model will be generated, but this can be overridden by specifying the `YOLO_MODELS` environment variable in Docker. One or more models may be listed in a comma-separated format, and each one will be generated. To select no model generation, set the variable to an empty string, `YOLO_MODELS=""`. Models will only be generated if the corresponding `{model}.trt` file is not present in the `model_cache` folder, so you can force a model to be regenerated by deleting it from your Frigate data folder.
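
As a sketch, generating two models via the container environment (the second model name is illustrative):

```yaml
services:
  frigate:
    environment:
      # each listed model is built at startup if its .trt file is missing
      - YOLO_MODELS=yolov7-320,yolov4-tiny-416
```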
@@ -245,7 +245,7 @@ frigate:
    - USE_FP16=false
```
If you have multiple GPUs passed through to Frigate, you can specify which one to use for the model conversion. The conversion script will use the first visible GPU; however, in systems with mixed GPU models you may not want to use the default index for object detection. Add the `TRT_MODEL_PREP_DEVICE` environment variable to select a specific GPU, as in the sketch below (index 0 is shown as an example).
```yml
frigate:
  environment:
    # example: convert models on the GPU at index 0
    - TRT_MODEL_PREP_DEVICE=0
```
@@ -295,12 +295,12 @@ Replace `<your_codeproject_ai_server_ip>` and `<port>` with the IP address and p
To verify that the integration is working correctly, start Frigate and observe the logs for any error messages related to CodeProject.AI. Additionally, you can check the Frigate web interface to see if the objects detected by CodeProject.AI are being displayed and tracked properly.
# Community Supported Detectors
## Rockchip RKNN-Toolkit-Lite2
This detector is only available if one of the following Rockchip SoCs is used:
- RK3588/RK3588S
- RK3568
- RK3566
@@ -317,13 +317,13 @@ Use a frigate docker image with `-rk` suffix and enable privileged mode by addin
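
A sketch of that setup in compose; the exact tag name is assumed here:

```yaml
services:
  frigate:
    # assumed tag name; any image with the -rk suffix works
    image: ghcr.io/blakeblackshear/frigate:stable-rk
    # give the container access to the NPU
    privileged: true
```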
This `config.yml` shows all relevant options to configure the detector and explains them. All values shown are the default values (except for one). Lines that are required to use the detector are labeled as required; all other lines are optional.
```yaml
detectors: # required
  rknn: # required
    type: rknn # required
    # core mask for npu
    core_mask: 0

model: # required
  # name of yolov8 model or path to your own .rknn model file
  # possible values are:
  # - default-yolov8n
```
@@ -338,12 +338,13 @@ model: # required
```yaml
  height: 320
  # pixel format of detection frame
  # default value is rgb but yolov models usually use bgr format
  input_pixel_format: bgr # required
  # shape of detection frame
  input_tensor: nhwc
```
Explanation of rknn-specific options:
- **core mask** controls which cores of your NPU should be used. This option applies only to SoCs with a multicore NPU (at the time of writing, only the RK3588/S). The easiest way is to pass the value as a binary number. To do so, use the prefix `0b` and write a `0` to disable a core and a `1` to enable a core, where the last digit corresponds to core0, the second last to core1, etc. You also have to use the cores in ascending order (so you can't use core0 and core2, but you can use core0 and core1). Enabling more cores can reduce the inference time, especially when using bigger models (see section below). Examples (see also the sketch after this list):
- `core_mask: 0b000` or just `core_mask: 0` let the NPU decide which cores should be used. Default and recommended value.
- `core_mask: 0b001` use only core0.
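
As a sketch, enabling all three cores of an RK3588/S NPU using the binary notation described above:

```yaml
detectors:
  rknn:
    type: rknn
    # 0b111 enables core0, core1 and core2
    core_mask: 0b111
```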
@@ -378,6 +379,7 @@ $ cat /sys/kernel/debug/rknpu/load
- If the model does not exist, it will be automatically downloaded to `/config/model_cache/rknn`.
- If your server has no internet connection, you can download the model from [this GitHub repository](https://github.com/MarcA711/rknn-models/releases) using another device and place it in the `config/model_cache/rknn` directory on your system.
- Finally, you can also provide your own model. Note that only yolov8 models are currently supported. Moreover, you will need to convert your model to the rknn format using `rknn-toolkit2` on an x86 machine. Afterwards, you can place your `.rknn` model file in the `config/model_cache/rknn` directory on your system. Then you need to pass the path to your model using the `path` option of your `model` block like this:
```yaml
model:
  path: /config/model_cache/rknn/my-rknn-model.rknn
```


@@ -331,5 +331,5 @@ By default, Frigate will retain snapshots of all events for 10 days. The full se
Now that you have a working install, you can use the following guides for additional features:
- 1. [Configuring go2rtc](configuring_go2rtc) - Additional live view options and RTSP relay
+ 1. [Configuring go2rtc](configuring_go2rtc.md) - Additional live view options and RTSP relay
2. [Home Assistant Integration](../integrations/home-assistant.md) - Integrate with Home Assistant