Merge remote-tracking branch 'origin/master' into dev

Blake Blackshear 2024-01-26 07:09:28 -06:00
commit 61d285ba13
3 changed files with 13 additions and 11 deletions


@@ -76,8 +76,8 @@ Output args presets help make the config more readable and handle use cases for
| Preset | Usage | Other Notes |
| -------------------------------- | --------------------------------- | --------------------------------------------- |
| preset-record-generic | Record WITHOUT audio | This is the default when nothing is specified |
-| preset-record-generic-audio-aac | Record WITH aac audio | Use this to enable audio in recordings |
| preset-record-generic-audio-copy | Record WITH original audio | Use this to enable audio in recordings |
+| preset-record-generic-audio-aac | Record WITH transcoded aac audio | Use this to transcode to aac audio. If your source is already aac, use preset-record-generic-audio-copy instead to avoid re-encoding |
| preset-record-mjpeg | Record an mjpeg stream | Recommend restreaming mjpeg stream instead |
| preset-record-jpeg | Record live jpeg | Recommend restreaming live jpeg instead |
| preset-record-ubiquiti | Record ubiquiti stream with audio | Recordings with ubiquiti non-standard audio |
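To show how one of these presets is applied, here is a minimal sketch of a camera config that records with transcoded aac audio. The camera name and RTSP URL are placeholders, not values from this page:

```yaml
cameras:
  back_yard: # hypothetical camera name
    ffmpeg:
      inputs:
        - path: rtsp://192.168.1.10:554/stream # placeholder stream URL
          roles:
            - record
      output_args:
        # use the preset that transcodes the source audio to aac
        record: preset-record-generic-audio-aac
```

If the camera already produces aac audio, swapping in `preset-record-generic-audio-copy` avoids the re-encode, as the table notes.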


@@ -176,7 +176,7 @@ volumes:
## NVidia TensorRT Detector
-NVidia GPUs may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt` tag suffix. This detector is designed to work with Yolo models for object detection.
+NVidia GPUs may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt` tag suffix, e.g. `ghcr.io/blakeblackshear/frigate:stable-tensorrt`. This detector is designed to work with Yolo models for object detection.
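As a rough illustration of pulling the tagged image, the Docker Compose fragment below is a sketch that assumes the NVIDIA Container Toolkit is already installed; the volume path and GPU count are placeholders:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    # expose an NVIDIA GPU to the container (requires the NVIDIA Container Toolkit)
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    volumes:
      - /path/to/your/config:/config # placeholder path
```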
### Minimum Hardware Support
@@ -295,12 +295,12 @@ Replace `<your_codeproject_ai_server_ip>` and `<port>` with the IP address and port of your CodeProject.AI server.
To verify that the integration is working correctly, start Frigate and observe the logs for any error messages related to CodeProject.AI. Additionally, you can check the Frigate web interface to see if the objects detected by CodeProject.AI are being displayed and tracked properly.
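For reference, the placeholders above belong to a detector block along these lines. This is only a sketch that assumes Frigate's deepstack-compatible detector plugin is what backs the CodeProject.AI integration; the detector name, endpoint path, and timeout are assumptions rather than values from this page:

```yaml
detectors:
  codeproject_ai: # arbitrary detector name
    type: deepstack # assumed plugin type for CodeProject.AI's DeepStack-compatible API
    api_url: http://<your_codeproject_ai_server_ip>:<port>/v1/vision/detection
    api_timeout: 0.1 # seconds; assumed value, tune as needed
```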
# Community Supported Detectors
## Rockchip RKNN-Toolkit-Lite2
This detector is only available if one of the following Rockchip SoCs is used:
- RK3588/RK3588S
- RK3568
- RK3566
@@ -344,6 +344,7 @@ model: # required
Explanation for rknn specific options:
- **core mask** controls which cores of your NPU should be used. This option applies only to SoCs with a multicore NPU (at the time of writing, this is only the RK3588/S). The easiest way is to pass the value as a binary number. To do so, use the prefix `0b` and write a `0` to disable a core and a `1` to enable a core, where the last digit corresponds to core0, the second last to core1, and so on. You also have to use the cores in ascending order (so you can't use core0 and core2, but you can use core0 and core1). Enabling more cores can reduce the inference speed, especially when using bigger models (see section below). Examples:
  - `core_mask: 0b000` or just `core_mask: 0` lets the NPU decide which cores should be used. This is the default and recommended value.
  - `core_mask: 0b001` uses only core0.
@@ -378,6 +379,7 @@ $ cat /sys/kernel/debug/rknpu/load
- If the model does not exist, it will be automatically downloaded to `/config/model_cache/rknn`.
- If your server has no internet connection, you can download the model from [this GitHub repository](https://github.com/MarcA711/rknn-models/releases) using another device and place it in the `config/model_cache/rknn` directory on your system.
- Finally, you can also provide your own model. Note that only yolov8 models are currently supported. Moreover, you will need to convert your model to the rknn format using `rknn-toolkit2` on an x86 machine. Afterwards, you can place your `.rknn` model file in the `config/model_cache/rknn` directory on your system. Then you need to pass the path to your model using the `path` option of your `model` block like this:
```yaml
model:
  path: /config/model_cache/rknn/my-rknn-model.rknn
```
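Tying the options in this section together, the following is a minimal sketch of an rknn detector plus model configuration; the `core_mask` value and the model width/height are assumptions for illustration, not values taken from this page:

```yaml
detectors:
  rknn: # assumed name for the RKNN-Toolkit-Lite2 detector
    type: rknn
    core_mask: 0b111 # example: enable core0, core1 and core2 on an RK3588/S

model:
  path: /config/model_cache/rknn/my-rknn-model.rknn
  # assumed input size; match the resolution your model was converted for
  width: 320
  height: 320
```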


@@ -331,5 +331,5 @@ By default, Frigate will retain snapshots of all events for 10 days. The full se
Now that you have a working install, you can use the following guides for additional features:
-1. [Configuring go2rtc](configuring_go2rtc) - Additional live view options and RTSP relay
+1. [Configuring go2rtc](configuring_go2rtc.md) - Additional live view options and RTSP relay
2. [Home Assistant Integration](../integrations/home-assistant.md) - Integrate with Home Assistant