mirror of https://github.com/blakeblackshear/frigate.git
synced 2025-07-30 13:48:07 +02:00

Compare commits: 57 commits (SHA1)

4a94b43e52, 3bda638678, 687e118b58, 95daf0ba05, 213dc97c17, f29cf43f52, aabd5b0077, 460e291bf1, ee51326d35, 948b087d3c, 77589c18f4, 6a62467998, 6857cc2b97, 37618b0f57, e7f6e069f6, ee4767b1ce, 6cb5cfb0c9, 7cfa818e63, 0764fea159, e3ed1ab8ec, b01b1faa3f, efbc1f836b, 7c33f9c579, a9255bddb5, 6d80a19518, 011a2dbfaf, 9a54c8ca49, cc99330063, 7e6a241e03, 2d281855fc, 22cc698b4e, 5a5a54fc66, 6536368467, dc79af2d98, cc955b1e66, da34ff964f, d6a2965cb2, 4b429e440b, 8759b4a0d3, df840b7cd5, 0645dc70a5, b230b35c62, 31da9351f0, 93d39370b6, 9dc4e8f290, 12e62488c6, b5e5127d48, 24f4aa79c8, dfc94b5ad6, 5acbe37e6f, 2461d01329, 5cafca1be0, 9c5a04f25f, 1ffdd32013, 99506845f7, ffd05f90f3, 3a8c290f91
Makefile (2 changes)
@@ -1,7 +1,7 @@
default_target: local

COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
-VERSION = 0.15.0
+VERSION = 0.15.2
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty

@@ -20,7 +20,7 @@ FIRST_MODEL=true
MODEL_DOWNLOAD=""
MODEL_CONVERT=""

-if [ -z "$YOLO_MODELS"]; then
+if [ -z "$YOLO_MODELS" ]; then
  echo "tensorrt model preparation disabled"
  exit 0
fi

@@ -4,7 +4,9 @@ title: Advanced Options
sidebar_label: Advanced Options
---

-### `logger`
+### Logging
+
+#### Frigate `logger`

Change the default log level for troubleshooting purposes.

@@ -28,6 +30,18 @@ Examples of available modules are:
- `watchdog.<camera_name>`
- `ffmpeg.<camera_name>.<sorted_roles>` NOTE: All FFmpeg logs are sent as `error` level.

+#### Go2RTC Logging
+
+See [the go2rtc docs](https://github.com/AlexxIT/go2rtc#module-log) for logging configuration:
+
+```yaml
+go2rtc:
+  streams:
+    ...
+  log:
+    exec: trace
+```

### `environment_vars`

This section can be used to set environment variables for those unable to modify the environment of the container (i.e. within HassOS).
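As an illustration of the `environment_vars` section, a minimal sketch (the variable name and value here are hypothetical placeholders):

```yaml
environment_vars:
  EXAMPLE_VAR: "example value" # hypothetical entry; add one line per variable you need
```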
@@ -189,16 +203,16 @@ When frigate starts up, it checks whether your config file is valid, and if it i

### Via API

-Frigate can accept a new configuration file as JSON at the `/config/save` endpoint. When updating the config this way, Frigate will validate the config before saving it, and return a `400` if the config is not valid.
+Frigate can accept a new configuration file as JSON at the `/api/config/save` endpoint. When updating the config this way, Frigate will validate the config before saving it, and return a `400` if the config is not valid.

```bash
-curl -X POST http://frigate_host:5000/config/save -d @config.json
+curl -X POST http://frigate_host:5000/api/config/save -d @config.json
```

If you'd like, you can use your YAML config directly by using [`yq`](https://github.com/mikefarah/yq) to convert it to JSON:

```bash
-yq r -j config.yml | curl -X POST http://frigate_host:5000/config/save -d @-
+yq r -j config.yml | curl -X POST http://frigate_host:5000/api/config/save -d @-
```
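Note that `yq r -j` is yq v3 syntax. If you have yq v4 or newer installed, the equivalent pipeline would look like this (a sketch, assuming a current yq where the default expression is `.`):

```bash
yq -o=json config.yml | curl -X POST http://frigate_host:5000/api/config/save -d @-
```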
### Via Command Line

@@ -24,6 +24,11 @@ On startup, an admin user and password are generated and printed in the logs. It

In the event that you are locked out of your instance, you can tell Frigate to reset the admin password and print it in the logs on next startup using the `reset_admin_password` setting in your config file.

+```yaml
+auth:
+  reset_admin_password: true
+```

## Login failure rate limiting

In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with SlowApi, and the string notation for valid values is available in [the documentation](https://limits.readthedocs.io/en/stable/quickstart.html#examples).
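For illustration, a minimal sketch of such a limit in the config, assuming the `failed_login_rate_limit` option and the `limits` string notation linked above:

```yaml
auth:
  failed_login_rate_limit: "1/second;5/minute;20/hour" # at most 1 failure/second, 5/minute, 20/hour before further attempts are blocked
```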
@@ -65,6 +65,18 @@ ffmpeg:

## Model/vendor specific setup

+### Amcrest & Dahua
+
+Amcrest & Dahua cameras should be connected to via RTSP using the following format:
+
+```
+rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=0 # this is the main stream
+rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=1 # this is the sub stream, typically supporting low resolutions only
+rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=2 # higher end cameras support a third stream with a mid resolution (1280x720, 1920x1080)
+rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=3 # new higher end cameras support a fourth stream with another mid resolution (1280x720, 1920x1080)
+```
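As a sketch of how two of these streams could be wired into a Frigate camera config (the camera name is a hypothetical placeholder; credentials and IP as above):

```yaml
cameras:
  dahua_example: # hypothetical camera name
    ffmpeg:
      inputs:
        - path: rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=0 # main stream for recording
          roles:
            - record
        - path: rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=1 # sub stream for detection
          roles:
            - detect
```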
### Annke C800

This camera is H.265 only. To be able to play clips on some devices (like macOS or iPhone) the H.265 stream has to be repackaged and the audio stream has to be converted to AAC. Unfortunately direct playback in the browser is not working (yet), but the downloaded clip can be played locally.

@@ -77,7 +89,7 @@ cameras:
      record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -tag:v hvc1 -bsf:v hevc_mp4toannexb -c:a aac

    inputs:
-      - path: rtsp://user:password@camera-ip:554/H264/ch1/main/av_stream # <----- Update for your camera
+      - path: rtsp://USERNAME:PASSWORD@CAMERA-IP/H264/ch1/main/av_stream # <----- Update for your camera
        roles:
          - detect
          - record

@@ -95,6 +107,29 @@ ffmpeg:
  input_args: preset-rtsp-blue-iris
```

+### Hikvision Cameras
+
+Hikvision cameras should be connected to via RTSP using the following format:
+
+```
+rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/101 # this is the main stream
+rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/102 # this is the sub stream, typically supporting low resolutions only
+rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/103 # higher end cameras support a third stream with a mid resolution (1280x720, 1920x1080)
+```
+
+:::note
+
+[Some users have reported](https://www.reddit.com/r/frigate_nvr/comments/1hg4ze7/hikvision_security_settings) that newer Hikvision cameras require adjustments to the security settings:
+
+```
+RTSP Authentication - digest/basic
+RTSP Digest Algorithm - MD5
+WEB Authentication - digest/basic
+WEB Digest Algorithm - MD5
+```
+
+:::

### Reolink Cameras

Reolink has older cameras (ex: 410 & 520) as well as newer cameras (ex: 520a & 511wa) which support different subsets of options. In both cases using the http stream is recommended.

@@ -196,3 +231,38 @@ ffmpeg:

### TP-Link VIGI Cameras

TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings you may have problems when trying to watch recorded footage. For example Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`

+## USB Cameras (aka Webcams)
+
+To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's [FFmpeg Device](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#source-ffmpeg-device) support:
+
+- Preparation outside of Frigate:
+  - Get the USB camera path. Run `v4l2-ctl --list-devices` to get a listing of locally-connected cameras available. (You may need to install `v4l-utils` in a way appropriate for your Linux distribution.) In the sample configuration below, we use `video=0` to correlate with a detected device path of `/dev/video0`.
+  - Get the USB camera formats & resolutions. Run `ffmpeg -f v4l2 -list_formats all -i /dev/video0` to get an idea of what formats and resolutions the USB camera supports. In the sample configuration below, we use a width of 1024 and height of 576 in the stream and detection settings based on what was reported back.
+  - If using Frigate in a container (e.g. Docker on TrueNAS), ensure you have USB passthrough support enabled, along with a specific host device (`/dev/video0`) + container device (`/dev/video0`) listed.
+
+- In your Frigate configuration file, add the go2rtc stream and roles as appropriate:
+
+```
+go2rtc:
+  streams:
+    usb_camera:
+      - "ffmpeg:device?video=0&video_size=1024x576#video=h264"
+
+cameras:
+  usb_camera:
+    enabled: true
+    ffmpeg:
+      inputs:
+        - path: rtsp://127.0.0.1:8554/usb_camera
+          input_args: preset-rtsp-restream
+          roles:
+            - detect
+            - record
+    detect:
+      enabled: false # <---- disable detection until you have a working camera feed
+      width: 1024
+      height: 576
+```

@@ -100,6 +100,8 @@ This list of working and non-working PTZ cameras is based on user feedback.

| Ctronics PTZ           | ✅ | ❌ |  |
| Dahua                  | ✅ | ✅ |  |
| Dahua DH-SD2A500HB     | ✅ | ❌ |  |
| Dahua DH-SD49825GB-HNR | ✅ | ✅ |  |
| Dahua DH-P5AE-PV       | ❌ | ❌ |  |
| Foscam R5              | ✅ | ❌ |  |
| Hanwha XNP-6550RH      | ✅ | ❌ |  |
| Hikvision              | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |

@@ -15,9 +15,9 @@ Semantic Search must be enabled to use Generative AI.

## Configuration

-Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 providers available to integrate with Frigate.
+Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.

-If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
+To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.

```yaml
genai:
@@ -27,12 +27,23 @@ genai:
  model: gemini-1.5-flash

cameras:
-  front_camera: ...
+  front_camera:
+    genai:
+      enabled: True # <- enable GenAI for your front camera
+      use_snapshot: True
+      objects:
+        - person
+      required_zones:
+        - steps
  indoor_camera:
-    genai: # <- disable GenAI for your indoor camera
-      enabled: False
+    genai:
+      enabled: False # <- disable GenAI for your indoor camera
```
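To illustrate the `FRIGATE_` environment variable option mentioned above, a sketch using Frigate's `{FRIGATE_*}` config substitution (the variable name is hypothetical):

```yaml
genai:
  enabled: True
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}" # hypothetical variable; export FRIGATE_GEMINI_API_KEY in the container environment
  model: gemini-1.5-flash
```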
By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.

Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.

## Ollama

:::warning

@@ -116,7 +127,7 @@ genai:
  model: gpt-4o
```

-::: note
+:::note

To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.

@@ -182,9 +193,7 @@ genai:
  car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```

-Prompts can also be overriden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire. By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
-
-Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
+Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.

```yaml
cameras:
  ...
```

@@ -145,6 +145,6 @@ For devices that support two way talk, Frigate can be configured to use the feat

- Set up go2rtc with [WebRTC](#webrtc-extra-configuration).
- Ensure you access Frigate via https (may require [opening port 8971](/frigate/installation/#ports)).
-- For the Home Assistant Frigate card, [follow the docs](https://github.com/dermotduffy/frigate-hass-card?tab=readme-ov-file#using-2-way-audio) for the correct source.
+- For the Home Assistant Frigate card, [follow the docs](http://card.camera/#/usage/2-way-audio) for the correct source.

To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell).

@@ -33,6 +33,14 @@ Frigate supports multiple different detectors that work on different types of ha

:::

+:::note
+
+Multiple detectors can not be mixed for object detection (ex: OpenVINO and Coral EdgeTPU can not be used for object detection at the same time).
+
+This does not affect using hardware for accelerating other tasks such as [semantic search](./semantic_search.md).
+
+:::

# Officially Supported Detectors

Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `onnx`, `openvino`, `rknn`, `rocm`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
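As a sketch of the multiple-detector setup described above (detector names and device strings are illustrative; two USB Corals are assumed):

```yaml
detectors:
  coral1:
    type: edgetpu
    device: usb:0 # first USB Coral
  coral2:
    type: edgetpu
    device: usb:1 # second USB Coral; each detector runs in its own process
```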
@@ -116,6 +124,30 @@ detectors:
    device: pci
```

+## Hailo-8l
+
+This detector is available for use with the Hailo-8 AI Acceleration Module.
+
+See the [installation docs](../frigate/installation.md#hailo-8l) for information on configuring the hailo8.
+
+### Configuration
+
+```yaml
+detectors:
+  hailo8l:
+    type: hailo8l
+    device: PCIe
+
+model:
+  width: 300
+  height: 300
+  input_tensor: nhwc
+  input_pixel_format: bgr
+  model_type: ssd
+  path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
+```

## OpenVINO Detector

The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.

@@ -295,6 +327,7 @@ detectors:

model:
  path: /config/model_cache/tensorrt/yolov7-320.trt
+  labelmap_path: /labelmap/coco-80.txt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320

@@ -624,26 +657,3 @@ $ cat /sys/kernel/debug/rknpu/load

- All models are automatically downloaded and stored in the folder `config/model_cache/rknn_cache`. After upgrading Frigate, you should remove older models to free up space.
- You can also provide your own `.rknn` model. You should not save your own models in the `rknn_cache` folder; store them directly in the `model_cache` folder or another subfolder. To convert a model to `.rknn` format see the `rknn-toolkit2` (requires an x86 machine). Note that there is only post-processing for the supported models.

-## Hailo-8l
-
-This detector is available for use with Hailo-8 AI Acceleration Module.
-
-See the [installation docs](../frigate/installation.md#hailo-8l) for information on configuring the hailo8.
-
-### Configuration
-
-```yaml
-detectors:
-  hailo8l:
-    type: hailo8l
-    device: PCIe
-
-model:
-  width: 300
-  height: 300
-  input_tensor: nhwc
-  input_pixel_format: bgr
-  model_type: ssd
-  path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
-```

@@ -20,5 +20,5 @@ In order to install Frigate as a PWA, the following requirements must be met:

Installation varies slightly based on the device that is being used:

- Desktop: Use the install button typically found in the right edge of the address bar
-- Android: Use the `Install as App` button in the more options menu
-- iOS: Use the `Add to Homescreen` button in the share menu
+- Android: Use the `Install as App` button in the more options menu for Chrome, and the `Add app to Home screen` button for Firefox
+- iOS: Use the `Add to Homescreen` button in the share menu

@@ -21,6 +21,21 @@ In 0.14 and later, all of that is bundled into a single review item which starts

Not every segment of video captured by Frigate may be of the same level of interest to you. Video of people who enter your property may be a different priority than those walking by on the sidewalk. For this reason, Frigate 0.14 categorizes review items as _alerts_ and _detections_. By default, all person and car objects are considered alerts. You can refine categorization of your review items by configuring required zones for them.

+:::note
+
+Alerts and detections categorize the tracked objects in review items, but Frigate must first detect those objects with your configured object detector (Coral, OpenVINO, etc). By default, the object tracker only detects `person`. Setting `labels` for `alerts` and `detections` does not automatically enable detection of new objects. To detect more than `person`, you should add the following to your config:
+
+```yaml
+objects:
+  track:
+    - person
+    - car
+    - ...
+```
+
+See the [objects documentation](objects.md) for the list of objects that Frigate's default model tracks.
+
+:::

## Restricting alerts to specific labels

By default a review item will only be marked as an alert if a person or car is detected. This can be configured to include any object or audio label using the following config:
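The config block itself was cut off in this view; a sketch of what such a configuration looks like, assuming the `review.alerts.labels` option:

```yaml
review:
  alerts:
    labels: # review items containing any of these labels become alerts
      - car
      - cat
      - dog
      - person
      - speech
```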

@@ -36,8 +36,8 @@ Note that certbot uses symlinks, and those can't be followed by the container un

frigate:
  ...
  volumes:
-    - /etc/letsencrypt/live/frigate:/etc/letsencrypt/live/frigate:ro
-    - /etc/letsencrypt/archive/frigate:/etc/letsencrypt/archive/frigate:ro
+    - /etc/letsencrypt/live/your.fqdn.net:/etc/letsencrypt/live/frigate:ro
+    - /etc/letsencrypt/archive/your.fqdn.net:/etc/letsencrypt/archive/your.fqdn.net:ro
  ...
```

@@ -3,7 +3,7 @@ id: camera_setup
title: Camera setup
---

-Cameras configured to output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. H.265 has better compression, but less compatibility. Chrome 108+, Safari and Edge are the only browsers able to play H.265 and only support a limited number of H.265 profiles. Ideally, cameras should be configured directly for the desired resolutions and frame rates you want to use in Frigate. Reducing frame rates within Frigate will waste CPU resources decoding extra frames that are discarded. There are three different goals that you want to tune your stream configurations around.
+Cameras configured to output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. H.265 has better compression, but less compatibility. Firefox 134+/136+/137+ (Windows/Mac/Linux & Android), Chrome 108+, Safari and Edge are the only browsers able to play H.265 and only support a limited number of H.265 profiles. Ideally, cameras should be configured directly for the desired resolutions and frame rates you want to use in Frigate. Reducing frame rates within Frigate will waste CPU resources decoding extra frames that are discarded. There are three different goals that you want to tune your stream configurations around.

- **Detection**: This is the only stream that Frigate will decode for processing. Also, this is the stream where snapshots will be generated from. The resolution for detection should be tuned for the size of the objects you want to detect. See [Choosing a detect resolution](#choosing-a-detect-resolution) for more details. The recommended frame rate is 5fps, but may need to be higher (10fps is the recommended maximum for most users) for very fast moving objects. Higher resolutions and frame rates will drive higher CPU usage on your server (a minimal example of these settings follows below).
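A sketch of a detect-stream configuration matching that guidance (the camera name and dimensions are placeholders):

```yaml
cameras:
  front: # hypothetical camera name
    detect:
      width: 1280
      height: 720
      fps: 5 # recommended starting point; raise toward 10 only for very fast objects
```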
@@ -66,4 +66,4 @@ The time period starting when a tracked object entered the frame and ending when

## Zone

-Zones are areas of interest, zones can be used for notifications and for limiting the areas where Frigate will create an [event](#event). [See the zone docs for more info](/configuration/zones)
+Zones are areas of interest; zones can be used for notifications and for limiting the areas where Frigate will create a [review item](#review-item). [See the zone docs for more info](/configuration/zones)

@@ -9,24 +9,36 @@ Cameras that output H.264 video and AAC audio will offer the most compatibility

I recommend Dahua, Hikvision, and Amcrest in that order. Dahua edges out Hikvision because they are easier to find and order, not because they are better cameras. I personally use Dahua cameras because they are easier to purchase directly. In my experience Dahua and Hikvision both have multiple streams with configurable resolutions and frame rates and rock solid streams. They also both have models with large sensors well known for excellent image quality at night. Not all the models are equal. Larger sensors are better than higher resolutions, especially at night. Amcrest is the fallback recommendation because they are rebranded Dahuas. They are rebranding the lower end models with smaller sensors or less configuration options.

-Many users have reported various issues with Reolink cameras, so I do not recommend them. If you are using Reolink, I suggest the [Reolink specific configuration](../configuration/camera_specific.md#reolink-cameras). Wifi cameras are also not recommended. Their streams are less reliable and cause connection loss and/or lost video data.
+WiFi cameras are not recommended as [their streams are less reliable and cause connection loss and/or lost video data](https://ipcamtalk.com/threads/camera-conflicts.68142/#post-738821), especially when more than a few WiFi cameras will be used at the same time.

-Here are some of the camera's I recommend:
+Many users have reported various issues with 4K-plus Reolink cameras; it is best to stick with 5MP and lower for Reolink cameras. If you are using Reolink, I suggest the [Reolink specific configuration](../configuration/camera_specific.md#reolink-cameras).

-- <a href="https://amzn.to/3uFLtxB" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) T5442TM-AS-LED</a> (affiliate link)
-- <a href="https://amzn.to/3isJ3gU" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T5442TM-AS</a> (affiliate link)
-- <a href="https://amzn.to/2ZWNWIA" target="_blank" rel="nofollow noopener sponsored">Amcrest IP5M-T1179EW-28MM</a> (affiliate link)
+Here are some of the cameras I recommend:

+- <a href="https://amzn.to/4fwoNWA" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T549M-ALED-S3</a> (affiliate link)
+- <a href="https://amzn.to/3YXpcMw" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T54IR-AS</a> (affiliate link)
+- <a href="https://amzn.to/3AvBHoY" target="_blank" rel="nofollow noopener sponsored">Amcrest IP5M-T1179EW-AI-V3</a> (affiliate link)
+- <a href="https://amzn.to/4ltOpaC" target="_blank" rel="nofollow noopener sponsored">HIKVISION DS-2CD2387G2P-LSU/SL ColorVu 8MP Panoramic Turret IP Camera</a> (affiliate link)

-I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.

## Server

-My current favorite is the Beelink EQ12 because of the efficient N100 CPU and dual NICs that allow you to setup a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU and capable of running Debian should work fine. As a bonus, you may want to look for devices with a M.2 or PCIe express slot that is compatible with the Google Coral. I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
+My current favorite is the Beelink EQ13 because of the efficient N100 CPU and dual NICs that allow you to setup a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU and capable of running Debian should work fine. As a bonus, you may want to look for devices with a M.2 or PCIe express slot that is compatible with the Google Coral, Hailo, or other AI accelerators.

-| Name | Coral Inference Speed | Coral Compatibility | Notes |
-| ---- | --------------------- | ------------------- | ----- |
-| Beelink EQ12 (<a href="https://amzn.to/3OlTMJY" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
-| Intel NUC (<a href="https://amzn.to/3psFlHi" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Overkill for most, but great performance. Can handle many cameras at 5fps depending on typical amounts of motion. Requires extra parts. |
+Note that many of these mini PCs come with Windows pre-installed, and you will need to install Linux according to the [getting started guide](../guides/getting_started.md).

+I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.

+:::warning
+
+If the EQ13 is out of stock, the link below may take you to a suggested alternative on Amazon. The Beelink EQ14 has some known compatibility issues, so you should avoid that model for now.
+
+:::

+| Name | Coral Inference Speed | Coral Compatibility | Notes |
+| ---- | --------------------- | ------------------- | ----- |
+| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |

## Detectors

@@ -52,24 +64,26 @@ The OpenVINO detector type is able to run on:

More information is available [in the detector docs](/configuration/object_detectors#openvino-detector)

-Inference speeds vary greatly depending on the CPU, GPU, or VPU used, some known examples are below:
+Inference speeds vary greatly depending on the CPU or GPU used; some known examples of GPU inference times are below:

-| Name                 | Inference Speed | Notes                                                                 |
-| -------------------- | --------------- | --------------------------------------------------------------------- |
-| Intel NCS2 VPU       | 60 - 65 ms      | May vary based on host device                                          |
-| Intel Celeron J4105  | ~ 25 ms         | Inference speeds on CPU were 150 - 200 ms                              |
-| Intel Celeron N3060  | 130 - 150 ms    | Inference speeds on CPU were ~ 550 ms                                  |
-| Intel Celeron N3205U | ~ 120 ms        | Inference speeds on CPU were ~ 380 ms                                  |
-| Intel Celeron N4020  | 50 - 200 ms     | Inference speeds on CPU were ~ 800 ms, greatly depends on other loads  |
-| Intel i3 6100T       | 15 - 35 ms      | Inference speeds on CPU were 60 - 120 ms                               |
-| Intel i3 8100        | ~ 15 ms         | Inference speeds on CPU were ~ 65 ms                                   |
-| Intel i5 4590        | ~ 20 ms         | Inference speeds on CPU were ~ 230 ms                                  |
-| Intel i5 6500        | ~ 15 ms         | Inference speeds on CPU were ~ 150 ms                                  |
-| Intel i5 7200u       | 15 - 25 ms      | Inference speeds on CPU were ~ 150 ms                                  |
-| Intel i5 7500        | ~ 15 ms         | Inference speeds on CPU were ~ 260 ms                                  |
-| Intel i5 1135G7      | 10 - 15 ms      |                                                                        |
-| Intel i5 12600K      | ~ 15 ms         | Inference speeds on CPU were ~ 35 ms                                   |
-| Intel Arc A750       | ~ 4 ms          |                                                                        |
+| Name                 | MobileNetV2 Inference Time | YOLO-NAS Inference Time    | Notes                                  |
+| -------------------- | -------------------------- | -------------------------- | -------------------------------------- |
+| Intel Arc A750       | ~ 4 ms                     | 320: ~ 8 ms                |                                        |
+| Intel Arc A380       | ~ 6 ms                     | 320: ~ 10 ms               |                                        |
+| Intel Ultra 5 125H   |                            | 320: ~ 10 ms 640: ~ 22 ms  |                                        |
+| Intel i5 12600K      | ~ 15 ms                    | 320: ~ 20 ms 640: ~ 46 ms  |                                        |
+| Intel i3 12000       |                            | 320: ~ 19 ms 640: ~ 54 ms  |                                        |
+| Intel i5 1135G7      | 10 - 15 ms                 |                            |                                        |
+| Intel i5 7500        | ~ 15 ms                    |                            |                                        |
+| Intel i5 7200u       | 15 - 25 ms                 |                            |                                        |
+| Intel i5 6500        | ~ 15 ms                    |                            |                                        |
+| Intel i5 4590        | ~ 20 ms                    |                            |                                        |
+| Intel i3 8100        | ~ 15 ms                    |                            |                                        |
+| Intel i3 6100T       | 15 - 35 ms                 |                            | Can only run one detector instance     |
+| Intel Celeron N4020  | 50 - 200 ms                |                            | Inference speed depends on other loads |
+| Intel Celeron N3205U | ~ 120 ms                   |                            | Can only run one detector instance     |
+| Intel Celeron N3060  | 130 - 150 ms               |                            | Can only run one detector instance     |
+| Intel Celeron J4105  | ~ 25 ms                    |                            | Can only run one detector instance     |

### TensorRT - Nvidia GPU

@@ -78,29 +92,35 @@ The TensorRT detector is able to run on x86 hosts that have an Nvidia GPU which

Inference speeds will vary greatly depending on the GPU and the model used.
`tiny` variants are faster than the equivalent non-tiny model, some known examples are below:

-| Name            | Inference Speed |
-| --------------- | --------------- |
-| GTX 1060 6GB    | ~ 7 ms          |
-| GTX 1070        | ~ 6 ms          |
-| GTX 1660 SUPER  | ~ 4 ms          |
-| RTX 3050        | 5 - 7 ms        |
-| RTX 3070 Mobile | ~ 5 ms          |
-| Quadro P400 2GB | 20 - 25 ms      |
-| Quadro P2000    | ~ 12 ms         |
+| Name            | YoloV7 Inference Time | YOLO-NAS Inference Time    |
+| --------------- | --------------------- | -------------------------- |
+| Quadro P2000    | ~ 12 ms               |                            |
+| Quadro P400 2GB | 20 - 25 ms            |                            |
+| RTX 3070 Mobile | ~ 5 ms                |                            |
+| RTX 3050        | 5 - 7 ms              | 320: ~ 10 ms 640: ~ 16 ms  |
+| GTX 1660 SUPER  | ~ 4 ms                |                            |
+| GTX 1070        | ~ 6 ms                |                            |
+| GTX 1060 6GB    | ~ 7 ms                |                            |

-#### AMD GPUs
+### AMD GPUs

-With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many AMD GPUs.
+With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.

-### Community Supported:
+### Hailo-8l PCIe

-#### Nvidia Jetson
+Frigate supports the Hailo-8l M.2 card on any hardware but currently it is only tested on the Raspberry Pi 5 PCIe hat from the AI kit.
+
+The inference time for the Hailo-8L chip at time of writing is around 17-21 ms for the SSD MobileNet Version 1 model.
+
+## Community Supported Detectors
+
+### Nvidia Jetson

Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).

Inference speed will vary depending on the YOLO model, jetson platform and jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but will slightly increase inference time.

-#### Rockchip platform
+### Rockchip platform

Frigate supports hardware video processing on all Rockchip boards. However, hardware object detection is only supported on these boards:

@@ -112,12 +132,6 @@ Frigate supports hardware video processing on all Rockchip boards. However, hard

The inference time of a rk3588 with all 3 cores enabled is typically 25-30 ms for yolo-nas s.

-#### Hailo-8l PCIe
-
-Frigate supports the Hailo-8l M.2 card on any hardware but currently it is only tested on the Raspberry Pi5 PCIe hat from the AI kit.
-
-The inference time for the Hailo-8L chip at time of writing is around 17-21 ms for the SSD MobileNet Version 1 model.

## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)

This is taken from a [user question on reddit](https://www.reddit.com/r/homeassistant/comments/q8mgau/comment/hgqbxh5/?utm_source=share&utm_medium=web2x&context=3). Modified slightly for clarity.

@@ -138,4 +152,4 @@ Basically - When you increase the resolution and/or the frame rate of the stream

YES! The Coral does not help with decoding video streams.

-Decompressing video streams takes a significant amount of CPU power. Video compression uses key frames (also known as I-frames) to send a full frame in the video stream. The following frames only include the difference from the key frame, and the CPU has to compile each frame by merging the differences with the key frame. [More detailed explanation](https://blog.video.ibm.com/streaming-video-tips/keyframes-interframe-video-compression/). Higher resolutions and frame rates mean more processing power is needed to decode the video stream, so try and set them on the camera to avoid unnecessary decoding work.
+Decompressing video streams takes a significant amount of CPU power. Video compression uses key frames (also known as I-frames) to send a full frame in the video stream. The following frames only include the difference from the key frame, and the CPU has to compile each frame by merging the differences with the key frame. [More detailed explanation](https://support.video.ibm.com/hc/en-us/articles/18106203580316-Keyframes-InterFrame-Video-Compression). Higher resolutions and frame rates mean more processing power is needed to decode the video stream, so try and set them on the camera to avoid unnecessary decoding work.
|
@ -111,7 +111,7 @@ For Raspberry Pi 5 users with the AI Kit, installation is straightforward. Simpl
|
||||
For other installations, follow these steps for installation:
|
||||
|
||||
1. Install the driver from the [Hailo GitHub repository](https://github.com/hailo-ai/hailort-drivers). A convenient script for Linux is available to clone the repository, build the driver, and install it.
|
||||
2. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/41c9b13d2fffce508b32dfc971fa529b49295fbd/docker/hailo8l/user_installation.sh).
|
||||
2. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/dev/docker/hailo8l/user_installation.sh).
|
||||
3. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
|
||||
4. Run the script with `./user_installation.sh`
|
||||
|
||||
|
docs/docs/frigate/updating.md (new file, 119 lines)

@@ -0,0 +1,119 @@
---
id: updating
title: Updating
---

# Updating Frigate

The current stable version of Frigate is **0.15.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.15.0).

Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.

## Before You Begin

- **Stop Frigate**: For most methods, you’ll need to stop the running Frigate instance before backing up and updating.
- **Backup Your Configuration**: Always back up your `/config` directory (e.g., `config.yml` and `frigate.db`, the SQLite database) before updating. This ensures you can roll back if something goes wrong. A minimal example follows this list.
- **Check Release Notes**: Carefully review the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases) for breaking changes or configuration updates that might affect your setup.
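A minimal backup sketch, assuming your config volume is bind-mounted from `./config` on the Docker host (adjust the path to your setup):

```bash
# copy the mounted config directory (config.yml, frigate.db, model caches) to a dated backup
cp -a ./config "./config.backup-$(date +%Y%m%d)"
```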
## Updating with Docker

If you’re running Frigate via Docker (recommended method), follow these steps:

1. **Stop the Container**:

   - If using Docker Compose:
     ```bash
     docker compose down frigate
     ```
   - If using `docker run`:
     ```bash
     docker stop frigate
     ```

2. **Update and Pull the Latest Image**:

   - If using Docker Compose:
     - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.15.0` instead of `0.14.1`). For example:
       ```yaml
       services:
         frigate:
           image: ghcr.io/blakeblackshear/frigate:0.15.0
       ```
     - Then pull the image:
       ```bash
       docker pull ghcr.io/blakeblackshear/frigate:0.15.0
       ```
     - **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don’t need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
   - If using `docker run`:
     - Pull the image with the appropriate tag (e.g., `0.15.0`, `0.15.0-tensorrt`, or `stable`):
       ```bash
       docker pull ghcr.io/blakeblackshear/frigate:0.15.0
       ```

3. **Start the Container**:

   - If using Docker Compose:
     ```bash
     docker compose up -d
     ```
   - If using `docker run`, re-run your original command (e.g., from the [Installation](./installation.md#docker) section) with the updated image tag.

4. **Verify the Update**:
   - Check the container logs to ensure Frigate starts successfully:
     ```bash
     docker logs frigate
     ```
   - Visit the Frigate Web UI (default: `http://<your-ip>:5000`) to confirm the new version is running. The version number is displayed at the top of the System Metrics page.

### Notes

- If you’ve customized other settings (e.g., `shm-size`), ensure they’re still appropriate after the update.
- Docker will automatically use the updated image when you restart the container, as long as you pulled the correct version.

## Updating the Home Assistant Addon

For users running Frigate as a Home Assistant Addon:

1. **Check for Updates**:

   - Navigate to **Settings > Add-ons** in Home Assistant.
   - Find your installed Frigate addon (e.g., "Frigate NVR" or "Frigate NVR (Full Access)").
   - If an update is available, you’ll see an "Update" button.

2. **Update the Addon**:

   - Click the "Update" button next to the Frigate addon.
   - Wait for the process to complete. Home Assistant will handle downloading and installing the new version.

3. **Restart the Addon**:

   - After updating, go to the addon’s page and click "Restart" to apply the changes.

4. **Verify the Update**:
   - Check the addon logs (under the "Log" tab) to ensure Frigate starts without errors.
   - Access the Frigate Web UI to confirm the new version is running.

### Notes

- Ensure your `/config/frigate.yml` is compatible with the new version by reviewing the [Release notes](https://github.com/blakeblackshear/frigate/releases).
- If using custom hardware (e.g., Coral or GPU), verify that configurations still work, as addon updates don’t modify your hardware settings.

## Rolling Back

If an update causes issues:

1. Stop Frigate.
2. Restore your backed-up config file and database.
3. Revert to the previous image version:
   - For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.14.1`) in your `docker run` command.
   - For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.14.1`), and re-run `docker compose up -d`.
   - For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
4. Verify the old version is running again.

## Troubleshooting

- **Container Fails to Start**: Check logs (`docker logs frigate`) for errors.
- **UI Not Loading**: Ensure ports (e.g., 5000, 8971) are still mapped correctly and the service is running.
- **Hardware Issues**: Revisit hardware-specific setup (e.g., Coral, GPU) if detection or decoding fails post-update.

Common questions are often answered in the [FAQ](https://github.com/blakeblackshear/frigate/discussions), pinned at the top of the support discussions.

@@ -7,7 +7,7 @@ title: Configuring go2rtc

Use of the bundled go2rtc is optional. You can still configure FFmpeg to connect directly to your cameras. However, adding go2rtc to your configuration is required for the following features:

-- WebRTC or MSE for live viewing with higher resolutions and frame rates than the jsmpeg stream which is limited to the detect stream
+- WebRTC or MSE for live viewing with audio, higher resolutions and frame rates than the jsmpeg stream which is limited to the detect stream and does not support audio
- Live stream support for cameras in Home Assistant Integration
- RTSP relay for use with other consumers to reduce the number of connections to your camera streams
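To make the RTSP relay concrete, a minimal sketch (the stream name and camera URL are placeholders):

```yaml
go2rtc:
  streams:
    back_yard: # hypothetical stream name
      - rtsp://USERNAME:PASSWORD@CAMERA-IP/stream # placeholder source URL
```

Other consumers would then connect to `rtsp://<frigate_host>:8554/back_yard` instead of opening another connection to the camera itself.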
@@ -35,6 +35,7 @@ There are many solutions available to implement reverse proxies and the communit

* [Apache2](#apache2-reverse-proxy)
* [Nginx](#nginx-reverse-proxy)
* [Traefik](#traefik-reverse-proxy)
+* [Caddy](#caddy-reverse-proxy)

## Apache2 Reverse Proxy

@@ -117,7 +118,8 @@ server {
    set $port 8971;

    listen 80;
-    listen 443 ssl http2;
+    listen 443 ssl;
+    http2 on;

    server_name frigate.domain.com;
}

@@ -177,3 +179,33 @@ The above configuration will create a "service" in Traefik, automatically adding

It will also add a router, routing requests to "traefik.example.com" to your local container.

Note that with this approach, you don't need to expose any ports for the Frigate instance since all traffic will be routed over the internal Docker network.

+## Caddy Reverse Proxy
+
+This example shows Frigate running under a subdomain with logging and a TLS cert (in this case a wildcard domain cert obtained independently of Caddy) handled via imports:
+
+```caddy
+(logging) {
+	log {
+		output file /var/log/caddy/{args[0]}.log {
+			roll_size 10MiB
+			roll_keep 5
+			roll_keep_for 10d
+		}
+		format json
+		level INFO
+	}
+}
+
+(tls) {
+	tls /var/lib/caddy/wildcard.YOUR_DOMAIN.TLD.fullchain.pem /var/lib/caddy/wildcard.YOUR_DOMAIN.TLD.privkey.pem
+}
+
+frigate.YOUR_DOMAIN.TLD {
+	reverse_proxy http://localhost:8971
+	import tls
+	import logging frigate.YOUR_DOMAIN.TLD
+}
+```

@@ -97,13 +97,13 @@ services:

If you are using HassOS with the addon, the URL should be one of the following depending on which addon version you are using. Note that if you are using the Proxy Addon, you do NOT point the integration at the proxy URL. Just enter the URL used to access Frigate directly from your network.

| Addon Version                  | URL                                       |
| ------------------------------ | ----------------------------------------- |
| Frigate NVR                    | `http://ccab4aaf-frigate:5000`            |
| Frigate NVR (Full Access)      | `http://ccab4aaf-frigate-fa:5000`         |
| Frigate NVR Beta               | `http://ccab4aaf-frigate-beta:5000`       |
| Frigate NVR Beta (Full Access) | `http://ccab4aaf-frigate-fa-beta:5000`    |
| Frigate NVR HailoRT Beta       | `http://ccab4aaf-frigate-hailo-beta:5000` |

### Frigate running on a separate machine

@@ -113,6 +113,14 @@ If you run Frigate on a separate device within your local network, Home Assistan

Use `http://<frigate_device_ip>:8971` as the URL for the integration so that authentication is required.

+:::tip
+
+The above URL assumes you have [disabled TLS](../configuration/tls).
+By default, TLS is enabled and Frigate will be using a self-signed certificate. Home Assistant will fail to connect via HTTPS to port 8971 since it fails to verify the self-signed certificate.
+Either disable TLS and use HTTP from Home Assistant, or configure Frigate to be accessible with a valid certificate.
+
+:::

```yaml
services:
  frigate:
    ...
```

@@ -301,3 +309,7 @@ which server they are referring to.

#### If I am detecting multiple objects, how do I assign the correct `binary_sensor` to the camera in HomeKit?

The [HomeKit integration](https://www.home-assistant.io/integrations/homekit/) randomly links one of the binary sensors (motion sensor entities) grouped with the camera device in Home Assistant. You can specify a `linked_motion_sensor` in the Home Assistant [HomeKit configuration](https://www.home-assistant.io/integrations/homekit/#linked_motion_sensor) for each camera.

+#### I have set up automations based on the occupancy sensors. Sometimes the automation runs because the sensors are turned on, but then when I look at Frigate I can't find the object that triggered the sensor. Is this a bug?
+
+No. The occupancy sensors have fewer checks in place because they are often used for things like turning the lights on where latency needs to be as low as possible. So false positives can sometimes trigger these sensors. If you want false positive filtering, you should use an MQTT sensor on the `frigate/events` or `frigate/reviews` topic.

@@ -28,7 +28,14 @@ Message published for each changed tracked object. The first message is publishe

    "id": "1607123955.475377-mxklsc",
    "camera": "front_door",
    "frame_time": 1607123961.837752,
-    "snapshot_time": 1607123961.837752,
+    "snapshot": {
+      "frame_time": 1607123965.975463,
+      "box": [415, 489, 528, 700],
+      "area": 12728,
+      "region": [260, 446, 660, 846],
+      "score": 0.77546,
+      "attributes": [],
+    },
    "label": "person",
    "sub_label": null,
    "top_score": 0.958984375,

@@ -58,7 +65,14 @@ Message published for each changed tracked object. The first message is publishe

    "id": "1607123955.475377-mxklsc",
    "camera": "front_door",
    "frame_time": 1607123962.082975,
-    "snapshot_time": 1607123961.837752,
+    "snapshot": {
+      "frame_time": 1607123965.975463,
+      "box": [415, 489, 528, 700],
+      "area": 12728,
+      "region": [260, 446, 660, 846],
+      "score": 0.77546,
+      "attributes": [],
+    },
    "label": "person",
    "sub_label": ["John Smith", 0.79],
    "top_score": 0.958984375,
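For reference, one way to watch these messages from a shell, assuming the `mosquitto-clients` and `jq` packages are installed and the broker is reachable at `BROKER_HOST` (both placeholders):

```bash
# print the change type (new/update/end) and label of each tracked-object message
mosquitto_sub -h BROKER_HOST -t frigate/events | jq -r '"\(.type) \(.after.label)"'
```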

@@ -43,7 +43,7 @@ Snapshots must be enabled to be able to submit examples to Frigate+

### Annotate and verify

-You can view all of your submitted images at [https://plus.frigate.video](https://plus.frigate.video). Annotations can be added by clicking an image. For more detailed information about labeling, see the documentation on [improving your model](../plus/improving_model.md).
+You can view all of your submitted images at [https://plus.frigate.video](https://plus.frigate.video). Annotations can be added by clicking an image. For more detailed information about labeling, see the documentation on [annotating](../plus/annotating.md).


@@ -13,12 +13,20 @@ Please use your own knowledge to assess and vet them before you install anything

:::

+## [Advanced Camera Card (formerly known as Frigate Card)](https://card.camera/#/README)
+
+The [Advanced Camera Card](https://card.camera/#/README) is a Home Assistant dashboard card with deep Frigate integration.

## [Double Take](https://github.com/skrashevich/double-take)

[Double Take](https://github.com/skrashevich/double-take) provides a unified UI and API for processing and training images for facial recognition.
It supports automatically setting the sub labels in Frigate for person objects that are detected and recognized.
This is a fork (with fixed errors and new features) of the [original Double Take](https://github.com/jakowenko/double-take) project which, unfortunately, isn't being maintained by its author.

+## [Frigate Notify](https://github.com/0x2142/frigate-notify)
+
+[Frigate Notify](https://github.com/0x2142/frigate-notify) is a simple app designed to send notifications from Frigate NVR to your favorite platforms. Intended to be used with standalone Frigate installations - Home Assistant not required, MQTT is optional but recommended.

## [Frigate telegram](https://github.com/OldTyT/frigate-telegram)

[Frigate telegram](https://github.com/OldTyT/frigate-telegram) makes it possible to send events from Frigate to Telegram. Events are sent as a message with a text description, video, and thumbnail.

docs/docs/plus/annotating.md (new file, 52 lines)

@@ -0,0 +1,52 @@
---
id: annotating
title: Annotating your images
---

For the best results, follow these guidelines. You may also want to review the documentation on [improving your model](./index.md#improving-your-model).

**Label every object in the image**: It is important that you label all objects in each image before verifying. If you don't label a car, for example, the model will be taught that part of the image is _not_ a car and it will start to get confused. You can exclude labels that you don't want detected on any of your cameras.

**Make tight bounding boxes**: Tighter bounding boxes improve the recognition and ensure that accurate bounding boxes are predicted at runtime.

**Label the full object even when occluded**: If you have a person standing behind a car, label the full person even though a portion of their body may be hidden behind the car. This helps predict accurate bounding boxes and improves zone accuracy and filters at runtime. If an object is partly out of frame, label it only when a person would reasonably be able to recognize the object from the visible parts.

**Label objects hard to identify as difficult**: When objects are truly difficult to make out, such as a car barely visible through a bush, or a dog that is hard to distinguish from the background at night, flag it as 'difficult'. This is not used in the model training as of now, but will be in the future.

**Delivery logos such as `amazon`, `ups`, and `fedex` should label the logo**: For a Fedex truck, label the truck as a `car` and make a different bounding box just for the Fedex logo. If there are multiple logos, label each of them.

## AI suggested labels

If you have an active Frigate+ subscription, new uploads will be scanned for the objects configured for your camera, and you will see suggested labels as light blue boxes when annotating in Frigate+. These suggestions are processed via a queue and typically complete within a minute after uploading, but processing times can be longer.

Suggestions are converted to labels when saving, so you should remove any errant suggestions. There is already some logic designed to avoid duplicate labels, but you may still occasionally see some duplicate suggestions. You should keep the most accurate bounding box and delete any duplicates so that you have just one label per object remaining.

## False positive labels

False positives will be shown with a red box and the label will have a strikethrough. These can't be adjusted, but they can be deleted if you accidentally submit a true positive as a false positive from Frigate.

Misidentified objects should have a correct label added. For example, if a person was mistakenly detected as a cat, you should submit it as a false positive in Frigate and add a label for the person. The boxes will overlap.

## Shortcuts for a faster workflow
| Shortcut Key      | Description                   |
| ----------------- | ----------------------------- |
| `?`               | Show all keyboard shortcuts   |
| `w`               | Add box                       |
| `d`               | Toggle difficult              |
| `s`               | Switch to the next label      |
| `tab`             | Select next largest box       |
| `del`             | Delete current box            |
| `esc`             | Deselect/Cancel               |
| `← ↑ → ↓`         | Move box                      |
| `Shift + ← ↑ → ↓` | Resize box                    |
| `scrollwheel`     | Zoom in/out                   |
| `f`               | Hide/show all but current box |
| `spacebar`        | Verify and save               |
@ -5,15 +5,15 @@ title: Requesting your first model

## Step 1: Upload and annotate your images

Before requesting your first model, you will need to upload and verify at least 1 image to Frigate+. The more images you upload, annotate, and verify, the better your results will be. Most users start to see very good results once they have at least 100 verified images per camera. Keep in mind that varying conditions should be included. You will want images from cloudy days, sunny days, dawn, dusk, and night. Refer to the [integration docs](../integrations/plus.md#generate-an-api-key) for instructions on how to easily submit images to Frigate+ directly from Frigate.
Before requesting your first model, you will need to upload and verify at least 10 images to Frigate+. The more images you upload, annotate, and verify, the better your results will be. Most users start to see very good results once they have at least 100 verified images per camera. Keep in mind that varying conditions should be included. You will want images from cloudy days, sunny days, dawn, dusk, and night. Refer to the [integration docs](../integrations/plus.md#generate-an-api-key) for instructions on how to easily submit images to Frigate+ directly from Frigate.

It is recommended to submit **both** true positives and false positives. This will help the model differentiate between what is and isn't correct. You should aim for a target of 80% true positive submissions and 20% false positives across all of your images. If you are experiencing false positives in a specific area, submitting true positives for any object type near that area in similar lighting conditions will help teach the model what that area looks like when no objects are present.

For more detailed recommendations, you can refer to the docs on [improving your model](./improving_model.md).
For more detailed recommendations, you can refer to the docs on [annotating](./annotating.md).

## Step 2: Submit a model request

Once you have an initial set of verified images, you can request a model on the Models page. For guidance on choosing a model type, refer to [this part of the documentation](./index.md#available-model-types). Each model request requires 1 of the 12 trainings that you receive with your annual subscription. This model will support all [label types available](./index.md#available-label-types) even if you do not submit any examples for those labels. Model creation can take up to 36 hours.
Once you have an initial set of verified images, you can request a model on the Models page. For guidance on choosing a model type, refer to [this part of the documentation](./index.md#available-model-types). If you are unsure which type to request, you can test the base model for each version from the "Base Models" tab. Each model request requires 1 of the 12 trainings that you receive with your annual subscription. This model will support all [label types available](./index.md#available-label-types) even if you do not submit any examples for those labels. Model creation can take up to 36 hours.

## Step 3: Set your model id in the config
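For reference, a minimal sketch of that config entry, assuming the `plus://` path syntax from the Frigate+ integration docs (the id shown is a placeholder):

```yaml
model:
  # replace your_model_id with the id shown on the Frigate+ Models page
  path: plus://your_model_id
```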
@ -1,52 +0,0 @@

---
id: improving_model
title: Improving your model
---

You may find that Frigate+ models result in more false positives initially, but by submitting true and false positives, the model will improve. With all the new images now being submitted by subscribers, future base models will improve as more and more examples are incorporated. Note that only images with at least one verified label will be used when training your model. Submitting an image from Frigate as a true or false positive will not verify the image. You still must verify the image in Frigate+ in order for it to be used in training.

- **Submit both true positives and false positives**. This will help the model differentiate between what is and isn't correct. You should aim for a target of 80% true positive submissions and 20% false positives across all of your images. If you are experiencing false positives in a specific area, submitting true positives for any object type near that area in similar lighting conditions will help teach the model what that area looks like when no objects are present.
- **Lower your thresholds a little in order to generate more false/true positives near the threshold value**. For example, if you have some false positives that are scoring at 68% and some true positives scoring at 72%, you can try lowering your threshold to 65% and submitting both true and false positives within that range. This will help the model learn and widen the gap between true and false positive scores.
- **Submit diverse images**. For the best results, you should provide at least 100 verified images per camera. Keep in mind that varying conditions should be included. You will want images from cloudy days, sunny days, dawn, dusk, and night. As circumstances change, you may need to submit new examples to address new types of false positives. For example, the change from summer days to snowy winter days or other changes such as a new grill or patio furniture may require additional examples and training.

## Properly labeling images

For the best results, follow these guidelines.

**Label every object in the image**: It is important that you label all objects in each image before verifying. If you don't label a car, for example, the model will be taught that part of the image is _not_ a car and it will start to get confused.

**Make tight bounding boxes**: Tighter bounding boxes improve the recognition and ensure that accurate bounding boxes are predicted at runtime.

**Label the full object even when occluded**: If you have a person standing behind a car, label the full person even though a portion of their body may be hidden behind the car. This helps predict accurate bounding boxes and improves zone accuracy and filters at runtime. If an object is partly out of frame, label it only when a person would reasonably be able to recognize the object from the visible parts.

**Label objects hard to identify as difficult**: When objects are truly difficult to make out, such as a car barely visible through a bush, or a dog that is hard to distinguish from the background at night, flag it as 'difficult'. This is not used in the model training as of now, but will be in the future.

**`amazon`, `ups`, and `fedex` should label the logo**: For a Fedex truck, label the truck as a `car` and make a different bounding box just for the Fedex logo. If there are multiple logos, label each of them.

## False positive labels

False positives will be shown with a red box and the label will have a strikethrough.

Misidentified objects should have a correct label added. For example, if a person was mistakenly detected as a cat, you should submit it as a false positive in Frigate and add a label for the person. The boxes will overlap.

## Shortcuts for a faster workflow
| Shortcut Key      | Description                   |
| ----------------- | ----------------------------- |
| `?`               | Show all keyboard shortcuts   |
| `w`               | Add box                       |
| `d`               | Toggle difficult              |
| `s`               | Switch to the next label      |
| `tab`             | Select next largest box       |
| `del`             | Delete current box            |
| `esc`             | Deselect/Cancel               |
| `← ↑ → ↓`         | Move box                      |
| `Shift + ← ↑ → ↓` | Resize box                    |
| `scrollwheel`     | Zoom in/out                   |
| `f`               | Hide/show all but current box |
| `spacebar`        | Verify and save               |
@ -3,23 +3,17 @@ id: index
title: Models
---

<a href="https://frigate.video/plus" target="_blank" rel="nofollow">Frigate+</a> offers models trained on images submitted by Frigate+ users from their security cameras and is specifically designed for the way Frigate NVR analyzes video footage. These models offer higher accuracy with fewer resources. The images you upload are used to fine tune a baseline model trained from images uploaded by all Frigate+ users. This fine tuning process results in a model that is optimized for accuracy in your specific conditions.
<a href="https://frigate.video/plus" target="_blank" rel="nofollow">Frigate+</a> offers models trained on images submitted by Frigate+ users from their security cameras and is specifically designed for the way Frigate NVR analyzes video footage. These models offer higher accuracy with fewer resources. The images you upload are used to fine tune a base model trained from images uploaded by all Frigate+ users. This fine tuning process results in a model that is optimized for accuracy in your specific conditions.

:::info

The baseline model isn't directly available after subscribing. This may change in the future, but for now you will need to submit a model request with the minimum number of images.

:::

With a subscription, 12 model trainings per year are included. If you cancel your subscription, you will retain access to any trained models. An active subscription is required to submit model requests or purchase additional trainings.
With a subscription, 12 model trainings to fine tune your model per year are included. In addition, you will have access to any base models published while your subscription is active. If you cancel your subscription, you will retain access to any trained and base models in your account. An active subscription is required to submit model requests or purchase additional trainings. New base models are published quarterly, with target dates of January 15th, April 15th, July 15th, and October 15th.

Information on how to integrate Frigate+ with Frigate can be found in the [integration docs](../integrations/plus.md).

## Available model types

There are two model types offered in Frigate+: `mobiledet` and `yolonas`. Both of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).
There are two model types offered in Frigate+, `mobiledet` and `yolonas`. Both of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).

Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types).
Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.

| Model Type | Description |
| ----------- | ----------- |
||||
@ -32,27 +26,53 @@ Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVi

:::warning

Using Frigate+ models with `onnx` and `rocm` is only available with Frigate 0.15, which is still under development.
Using Frigate+ models with `onnx` and `rocm` is only available with Frigate 0.15 and later.

:::

| Hardware                                                                                                                      | Recommended Detector Type | Recommended Model Type |
| ----------------------------------------------------------------------------------------------------------------------------- | ------------------------- | ---------------------- |
| [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended)                                                         | `cpu`                     | `mobiledet`            |
| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector)                                               | `edgetpu`                 | `mobiledet`            |
| [Intel](/configuration/object_detectors.md#openvino-detector)                                                                  | `openvino`                | `yolonas`              |
| [NVidia GPU](https://deploy-preview-13787--frigate-docs.netlify.app/configuration/object_detectors#onnx)\*                     | `onnx`                    | `yolonas`              |
| [AMD ROCm GPU](https://deploy-preview-13787--frigate-docs.netlify.app/configuration/object_detectors#amdrocm-gpu-detector)\*   | `rocm`                    | `yolonas`              |
| Hardware                                                                          | Recommended Detector Type | Recommended Model Type |
| --------------------------------------------------------------------------------- | ------------------------- | ---------------------- |
| [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended)             | `cpu`                     | `mobiledet`            |
| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector)   | `edgetpu`                 | `mobiledet`            |
| [Intel](/configuration/object_detectors.md#openvino-detector)                      | `openvino`                | `yolonas`              |
| [NVidia GPU](/configuration/object_detectors#onnx)\*                               | `onnx`                    | `yolonas`              |
| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector)\*             | `rocm`                    | `yolonas`              |

_\* Requires Frigate 0.15_
## Improving your model

Some users may find that Frigate+ models result in more false positives initially, but by submitting true and false positives, the model will improve. With all the new images now being submitted by subscribers, future base models will improve as more and more examples are incorporated. Note that only images with at least one verified label will be used when training your model. Submitting an image from Frigate as a true or false positive will not verify the image. You still must verify the image in Frigate+ in order for it to be used in training.

- **Submit both true positives and false positives**. This will help the model differentiate between what is and isn't correct. You should aim for a target of 80% true positive submissions and 20% false positives across all of your images. If you are experiencing false positives in a specific area, submitting true positives for any object type near that area in similar lighting conditions will help teach the model what that area looks like when no objects are present.
- **Lower your thresholds a little in order to generate more false/true positives near the threshold value**. For example, if you have some false positives that are scoring at 68% and some true positives scoring at 72%, you can try lowering your threshold to 65% and submitting both true and false positives within that range. This will help the model learn and widen the gap between true and false positive scores (see the config sketch after this list).
- **Submit diverse images**. For the best results, you should provide at least 100 verified images per camera. Keep in mind that varying conditions should be included. You will want images from cloudy days, sunny days, dawn, dusk, and night. As circumstances change, you may need to submit new examples to address new types of false positives. For example, the change from summer days to snowy winter days or other changes such as a new grill or patio furniture may require additional examples and training.
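As a rough sketch of the threshold adjustment described above (the object name and values are illustrative, not a recommendation):

```yaml
objects:
  filters:
    person:
      # temporarily lowered from the 0.7 default to surface borderline
      # detections for true/false positive submissions
      threshold: 0.65
```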
## Available label types

Frigate+ models support a more relevant set of objects for security cameras. Currently, only the following objects are supported: `person`, `face`, `car`, `license_plate`, `amazon`, `ups`, `fedex`, `package`, `dog`, `cat`, `deer`. Other object types available in the default Frigate model are not available. Additional object types will be added in future releases.
Frigate+ models support a more relevant set of objects for security cameras. The labels for annotation in Frigate+ are configurable by editing the camera in the Cameras section of Frigate+. Currently, the following objects are supported:

- **People**: `person`, `face`
- **Vehicles**: `car`, `motorcycle`, `bicycle`, `boat`, `school_bus`, `license_plate`
- **Delivery Logos**: `amazon`, `usps`, `ups`, `fedex`, `dhl`, `an_post`, `purolator`, `postnl`, `nzpost`, `postnord`, `gls`, `dpd`, `canada_post`, `royal_mail`
- **Animals**: `dog`, `cat`, `deer`, `horse`, `bird`, `raccoon`, `fox`, `bear`, `cow`, `squirrel`, `goat`, `rabbit`, `skunk`, `kangaroo`
- **Other**: `package`, `waste_bin`, `bbq_grill`, `robot_lawnmower`, `umbrella`

Other object types available in the default Frigate model are not available. Additional object types will be added in future releases.

### Candidate labels

Candidate labels are also available for annotation. These labels don't have enough data to be included in the model yet, but using them will help add support sooner. You can enable these labels by editing the camera settings.

Where possible, these labels are mapped to existing labels during training. For example, any `baby` labels are mapped to `person` until support for new labels is added.

The candidate labels are: `baby`, `bpost`, `badger`, `possum`, `rodent`, `chicken`, `groundhog`, `boar`, `hedgehog`, `tractor`, `golf cart`, `garbage truck`, `bus`, `sports ball`

Candidate labels are not available for automatic suggestions.
### Label attributes

Frigate has special handling for some labels when using Frigate+ models. `face`, `license_plate`, `amazon`, `ups`, and `fedex` are considered attribute labels which are not tracked like regular objects and do not generate review items directly. In addition, the `threshold` filter will have no effect on these labels. You should adjust the `min_score` and other filter values as needed.
Frigate has special handling for some labels when using Frigate+ models. `face`, `license_plate`, and delivery logos such as `amazon`, `ups`, and `fedex` are considered attribute labels which are not tracked like regular objects and do not generate review items directly. In addition, the `threshold` filter will have no effect on these labels. You should adjust the `min_score` and other filter values as needed.

In order to have Frigate start using these attribute labels, you will need to add them to the list of objects to track:
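A minimal sketch of that list (the exact labels to add depend on your cameras; these are examples):

```yaml
objects:
  track:
    - person
    - face
    - license_plate
```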
@ -75,6 +95,6 @@ When using Frigate+ models, Frigate will choose the snapshot of a person object

`amazon`, `ups`, and `fedex` labels are used to automatically assign a sub label to car objects.
Delivery logos such as `amazon`, `ups`, and `fedex` are used to automatically assign a sub label to car objects.
@ -40,6 +40,17 @@ Some users have reported that this older device runs an older kernel causing iss
6. Open the control panel - info screen. The Coral TPU will now be recognised as a USB Device - Google Inc.
7. Start the Frigate container. Everything should work now!

### QNAP NAS

QNAP NAS devices, such as the TS-253A, may use connected Coral TPU devices if [QuMagie](https://www.qnap.com/en/software/qumagie) is installed along with its QNAP AI Core extension. If any of the features (`facial recognition`, `object recognition`, or `similar photo recognition`) are enabled, Container Station applications such as `Frigate` or `CodeProject.AI Server` will be unable to initialize the TPU device in use.
To allow the Coral TPU device to be discovered, you must either:

1. [Disable the AI recognition features in QuMagie](https://docs.qnap.com/application/qumagie/2.x/en-us/configuring-qnap-ai-core-settings-FB13CE03.html),
2. Remove the QNAP AI Core extension, or
3. Manually start the QNAP AI Core extension after Frigate has fully started (not recommended).

It is also recommended to restart the NAS once the changes have been made.
## USB Coral Detection Appears to be Stuck

The USB Coral can become stuck and need to be restarted; this can happen for a number of reasons depending on the hardware and software setup. Some common reasons are:

@ -54,6 +65,17 @@ The most common reason for the PCIe Coral not being detected is that the driver
- In most cases [the Coral docs](https://coral.ai/docs/m2/get-started/#2-install-the-pcie-driver-and-edge-tpu-runtime) show how to install the driver for the PCIe based Coral.
- For Ubuntu 22.04+, https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
### Not detected on Raspberry Pi5

A kernel update to the RPi5 means an update to config.txt is required; see [the raspberry pi forum for more info](https://forums.raspberrypi.com/viewtopic.php?t=363682&sid=cb59b026a412f0dc041595951273a9ca&start=25).

Specifically, add the following to config.txt:

```
dtoverlay=pciex1-compat-pi5,no-mip
dtoverlay=pcie-32bit-dma-pi5
```

## Only One PCIe Coral Is Detected With Coral Dual EdgeTPU

Coral Dual EdgeTPU is one card with two identical TPU cores. Each core has its own PCIe interface, and the motherboard needs to have two PCIe buses on the m.2 slot to make them both work.
@ -17,6 +17,10 @@ ffmpeg:
      record: preset-record-generic-audio-aac
```

### How can I get sound in live view?

Audio is only supported for live view when go2rtc is configured; see [the live docs](../configuration/live.md) for more information.
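A minimal go2rtc sketch for adding browser-compatible audio to live view (the stream name and RTSP URL are placeholders):

```yaml
go2rtc:
  streams:
    front_door:
      - rtsp://192.168.1.10:554/stream # placeholder camera URL
      - "ffmpeg:front_door#audio=aac" # transcode the audio track to AAC
```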
### I can't view recordings in the Web UI.

Ensure your cameras send h264 encoded video, or [transcode them](/configuration/restream.md).
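As a sketch, transcoding can be done through go2rtc's ffmpeg source (assuming an H.265 camera; the URL is a placeholder):

```yaml
go2rtc:
  streams:
    back:
      - "ffmpeg:rtsp://192.168.1.11:554/main#video=h264" # re-encode to H.264
```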
@ -1,56 +1,101 @@
import type * as Preset from '@docusaurus/preset-classic';
import * as path from 'node:path';
import type { Config, PluginConfig } from '@docusaurus/types';
import type * as OpenApiPlugin from 'docusaurus-plugin-openapi-docs';
import type * as Preset from "@docusaurus/preset-classic";
import * as path from "node:path";
import type { Config, PluginConfig } from "@docusaurus/types";
import type * as OpenApiPlugin from "docusaurus-plugin-openapi-docs";

const config: Config = {
  title: 'Frigate',
  tagline: 'NVR With Realtime Object Detection for IP Cameras',
  url: 'https://docs.frigate.video',
  baseUrl: '/',
  onBrokenLinks: 'throw',
  onBrokenMarkdownLinks: 'warn',
  favicon: 'img/favicon.ico',
  organizationName: 'blakeblackshear',
  projectName: 'frigate',
  themes: ['@docusaurus/theme-mermaid', 'docusaurus-theme-openapi-docs'],
  title: "Frigate",
  tagline: "NVR With Realtime Object Detection for IP Cameras",
  url: "https://docs.frigate.video",
  baseUrl: "/",
  onBrokenLinks: "throw",
  onBrokenMarkdownLinks: "warn",
  favicon: "img/favicon.ico",
  organizationName: "blakeblackshear",
  projectName: "frigate",
  themes: [
    "@docusaurus/theme-mermaid",
    "docusaurus-theme-openapi-docs",
    "@inkeep/docusaurus/chatButton",
    "@inkeep/docusaurus/searchBar",
  ],
  markdown: {
    mermaid: true,
  },
  themeConfig: {
    algolia: {
      appId: 'WIURGBNBPY',
      apiKey: 'd02cc0a6a61178b25da550212925226b',
      indexName: 'frigate',
    announcementBar: {
      id: 'frigate_plus',
      content: `
      <span style="margin-right: 8px; display: inline-block; animation: pulse 2s infinite;">🚀</span>
      Get more relevant and accurate detections with Frigate+ models.
      <a style="margin-left: 12px; padding: 3px 10px; background: #94d2bd; color: #001219; text-decoration: none; border-radius: 4px; font-weight: 500; " target="_blank" rel="noopener noreferrer" href="https://frigate.video/plus/">Learn more</a>
      <span style="margin-left: 8px; display: inline-block; animation: pulse 2s infinite;">✨</span>
      <style>
      @keyframes pulse {
      0%, 100% { transform: scale(1); }
      50% { transform: scale(1.1); }
      }
      </style>`,
      backgroundColor: '#005f73',
      textColor: '#e0fbfc',
      isCloseable: false,
    },
    docs: {
      sidebar: {
        hideable: true,
      },
    },
    inkeepConfig: {
      baseSettings: {
        apiKey: "b1a4c4d73c9b48aa5b3cdae6e4c81f0bb3d1134eeb5a7100",
        integrationId: "cm6xmhn9h000gs601495fkkdx",
        organizationId: "org_map2JQEOco8U1ZYY",
        primaryBrandColor: "#010101",
      },
      aiChatSettings: {
        chatSubjectName: "Frigate",
        botAvatarSrcUrl: "https://frigate.video/images/favicon.png",
        getHelpCallToActions: [
          {
            name: "GitHub",
            url: "https://github.com/blakeblackshear/frigate",
            icon: {
              builtIn: "FaGithub",
            },
          },
        ],
        quickQuestions: [
          "How to configure and setup camera settings?",
          "How to setup notifications?",
          "Supported builtin detectors?",
          "How to restream video feed?",
          "How can I get sound or audio in my recordings?",
        ],
      },
    },
    prism: {
      additionalLanguages: ['bash', 'json'],
      additionalLanguages: ["bash", "json"],
    },
    languageTabs: [
      {
        highlight: 'python',
        language: 'python',
        logoClass: 'python',
        highlight: "python",
        language: "python",
        logoClass: "python",
      },
      {
        highlight: 'javascript',
        language: 'nodejs',
        logoClass: 'nodejs',
        highlight: "javascript",
        language: "nodejs",
        logoClass: "nodejs",
      },
      {
        highlight: 'javascript',
        language: 'javascript',
        logoClass: 'javascript',
        highlight: "javascript",
        language: "javascript",
        logoClass: "javascript",
      },
      {
        highlight: 'bash',
        language: 'curl',
        logoClass: 'curl',
        highlight: "bash",
        language: "curl",
        logoClass: "curl",
      },
      {
        highlight: "rust",
@ -59,49 +104,49 @@ const config: Config = {
      },
    ],
    navbar: {
      title: 'Frigate',
      title: "Frigate",
      logo: {
        alt: 'Frigate',
        src: 'img/logo.svg',
        srcDark: 'img/logo-dark.svg',
        alt: "Frigate",
        src: "img/logo.svg",
        srcDark: "img/logo-dark.svg",
      },
      items: [
        {
          to: '/',
          activeBasePath: 'docs',
          label: 'Docs',
          position: 'left',
          to: "/",
          activeBasePath: "docs",
          label: "Docs",
          position: "left",
        },
        {
          href: 'https://frigate.video',
          label: 'Website',
          position: 'right',
          href: "https://frigate.video",
          label: "Website",
          position: "right",
        },
        {
          href: 'http://demo.frigate.video',
          label: 'Demo',
          position: 'right',
          href: "http://demo.frigate.video",
          label: "Demo",
          position: "right",
        },
        {
          href: 'https://github.com/blakeblackshear/frigate',
          label: 'GitHub',
          position: 'right',
          href: "https://github.com/blakeblackshear/frigate",
          label: "GitHub",
          position: "right",
        },
      ],
    },
    footer: {
      style: 'dark',
      style: "dark",
      links: [
        {
          title: 'Community',
          title: "Community",
          items: [
            {
              label: 'GitHub',
              href: 'https://github.com/blakeblackshear/frigate',
              label: "GitHub",
              href: "https://github.com/blakeblackshear/frigate",
            },
            {
              label: 'Discussions',
              href: 'https://github.com/blakeblackshear/frigate/discussions',
              label: "Discussions",
              href: "https://github.com/blakeblackshear/frigate/discussions",
            },
          ],
        },
@ -110,19 +155,19 @@ const config: Config = {
    },
  },
  plugins: [
    path.resolve(__dirname, 'plugins', 'raw-loader'),
    path.resolve(__dirname, "plugins", "raw-loader"),
    [
      'docusaurus-plugin-openapi-docs',
      "docusaurus-plugin-openapi-docs",
      {
        id: 'openapi',
        docsPluginId: 'classic', // configured for preset-classic
        id: "openapi",
        docsPluginId: "classic", // configured for preset-classic
        config: {
          frigateApi: {
            specPath: 'static/frigate-api.yaml',
            outputDir: 'docs/integrations/api',
            specPath: "static/frigate-api.yaml",
            outputDir: "docs/integrations/api",
            sidebarOptions: {
              groupPathsBy: 'tag',
              categoryLinkSource: 'tag',
              groupPathsBy: "tag",
              categoryLinkSource: "tag",
              sidebarCollapsible: true,
              sidebarCollapsed: true,
            },
@ -130,23 +175,24 @@ const config: Config = {
        } satisfies OpenApiPlugin.Options,
      },
    },
    ]
    ],
  ] as PluginConfig[],
  presets: [
    [
      'classic',
      "classic",
      {
        docs: {
          routeBasePath: '/',
          sidebarPath: './sidebars.ts',
          routeBasePath: "/",
          sidebarPath: "./sidebars.ts",
          // Please change this to your repo.
          editUrl: 'https://github.com/blakeblackshear/frigate/edit/master/docs/',
          editUrl:
            "https://github.com/blakeblackshear/frigate/edit/master/docs/",
          sidebarCollapsible: false,
          docItemComponent: '@theme/ApiItem', // Derived from docusaurus-theme-openapi
          docItemComponent: "@theme/ApiItem", // Derived from docusaurus-theme-openapi
        },

        theme: {
          customCss: './src/css/custom.css',
          customCss: "./src/css/custom.css",
        },
      } satisfies Preset.Options,
    ],
7 docs/package-lock.json generated
@ -12,6 +12,7 @@
        "@docusaurus/plugin-content-docs": "^3.6.3",
        "@docusaurus/preset-classic": "^3.6.3",
        "@docusaurus/theme-mermaid": "^3.6.3",
        "@inkeep/docusaurus": "^2.0.16",
        "@mdx-js/react": "^3.1.0",
        "clsx": "^2.1.1",
        "docusaurus-plugin-openapi-docs": "^4.3.1",
@ -4056,6 +4057,12 @@
        "react-hook-form": "^7.0.0"
      }
    },
    "node_modules/@inkeep/docusaurus": {
      "version": "2.0.16",
      "resolved": "https://registry.npmjs.org/@inkeep/docusaurus/-/docusaurus-2.0.16.tgz",
      "integrity": "sha512-dQhjlvFnl3CVr0gWeJ/V/qLnDy1XYrCfkdVSa2D3gJTxI9/vOf9639Y1aPxTxO88DiXuW9CertLrZLB6SoJ2yg==",
      "license": "MIT"
    },
    "node_modules/@isaacs/cliui": {
      "version": "8.0.2",
      "resolved": "https://registry.npmjs.org/@isaacs/cliui/-/cliui-8.0.2.tgz",
@ -18,9 +18,10 @@
  },
  "dependencies": {
    "@docusaurus/core": "^3.6.3",
    "@docusaurus/plugin-content-docs": "^3.6.3",
    "@docusaurus/preset-classic": "^3.6.3",
    "@docusaurus/theme-mermaid": "^3.6.3",
    "@docusaurus/plugin-content-docs": "^3.6.3",
    "@inkeep/docusaurus": "^2.0.16",
    "@mdx-js/react": "^3.1.0",
    "clsx": "^2.1.1",
    "docusaurus-plugin-openapi-docs": "^4.3.1",
@ -8,6 +8,7 @@ const sidebars: SidebarsConfig = {
      'frigate/index',
      'frigate/hardware',
      'frigate/installation',
      'frigate/updating',
      'frigate/camera_setup',
      'frigate/video_pipeline',
      'frigate/glossary',
@ -86,8 +87,8 @@ const sidebars: SidebarsConfig = {
    ],
    'Frigate+': [
      'plus/index',
      'plus/annotating',
      'plus/first_model',
      'plus/improving_model',
      'plus/faq',
    ],
    Troubleshooting: [
BIN docs/static/img/plus/suggestions.webp vendored Normal file
Binary file not shown. Size: 71 KiB
@ -490,8 +490,6 @@ def set_not_reviewed(review_id: str):
    review.save()

    return JSONResponse(
        content=(
            {"success": True, "message": "Set Review " + review_id + " as not viewed"}
        ),
        content=({"success": True, "message": f"Set Review {review_id} as not viewed"}),
        status_code=200,
    )
@ -71,6 +71,7 @@ from frigate.timeline import TimelineProcessor
from frigate.util.builtin import empty_and_close_queue
from frigate.util.image import SharedMemoryFrameManager, UntrackedSharedMemory
from frigate.util.object import get_camera_regions_grid
from frigate.util.services import set_file_limit
from frigate.version import VERSION
from frigate.video import capture_camera, track_camera
from frigate.watchdog import FrigateWatchdog
@ -587,6 +588,9 @@ class FrigateApp:
        # Ensure global state.
        self.ensure_dirs()

        # Set soft file limits.
        set_file_limit()

        # Start frigate services.
        self.init_camera_metrics()
        self.init_queues()
@ -473,7 +473,7 @@ class CameraState:

        if current_frame is not None:
            self.current_frame_time = frame_time
            self._current_frame = current_frame
            self._current_frame = np.copy(current_frame)

            if self.previous_frame_id is not None:
                self.frame_manager.close(self.previous_frame_id)
@ -6,6 +6,7 @@ import unittest
from peewee_migrate import Router
from playhouse.sqlite_ext import SqliteExtDatabase
from playhouse.sqliteq import SqliteQueueDatabase
from pydantic import Json

from frigate.api.fastapi_app import create_fastapi_app
from frigate.config import FrigateConfig
@ -123,7 +124,12 @@ class BaseTestHttp(unittest.TestCase):
    def insert_mock_event(
        self,
        id: str,
        start_time: datetime.datetime = datetime.datetime.now().timestamp(),
        start_time: float = datetime.datetime.now().timestamp(),
        end_time: float = datetime.datetime.now().timestamp() + 20,
        has_clip: bool = True,
        top_score: int = 100,
        score: int = 0,
        data: Json = {},
    ) -> Event:
        """Inserts a basic event model with a given id."""
        return Event.insert(
@ -131,16 +137,18 @@ class BaseTestHttp(unittest.TestCase):
            label="Mock",
            camera="front_door",
            start_time=start_time,
            end_time=start_time + 20,
            top_score=100,
            end_time=end_time,
            top_score=top_score,
            score=score,
            false_positive=False,
            zones=list(),
            thumbnail="",
            region=[],
            box=[],
            area=0,
            has_clip=True,
            has_clip=has_clip,
            has_snapshot=True,
            data=data,
        ).execute()

    def insert_mock_review_segment(
@ -150,6 +158,7 @@ class BaseTestHttp(unittest.TestCase):
        end_time: float = datetime.datetime.now().timestamp() + 20,
        severity: SeverityEnum = SeverityEnum.alert,
        has_been_reviewed: bool = False,
        data: Json = {},
    ) -> Event:
        """Inserts a review segment model with a given id."""
        return ReviewSegment.insert(
@ -160,7 +169,7 @@ class BaseTestHttp(unittest.TestCase):
            has_been_reviewed=has_been_reviewed,
            severity=severity,
            thumb_path=False,
            data={},
            data=data,
        ).execute()

    def insert_mock_recording(
@ -168,6 +177,7 @@ class BaseTestHttp(unittest.TestCase):
        id: str,
        start_time: float = datetime.datetime.now().timestamp(),
        end_time: float = datetime.datetime.now().timestamp() + 20,
        motion: int = 0,
    ) -> Event:
        """Inserts a recording model with a given id."""
        return Recordings.insert(
@ -177,4 +187,5 @@ class BaseTestHttp(unittest.TestCase):
            start_time=start_time,
            end_time=end_time,
            duration=end_time - start_time,
            motion=motion,
        ).execute()
26 frigate/test/http_api/test_http_app.py Normal file
@ -0,0 +1,26 @@
from unittest.mock import Mock

from fastapi.testclient import TestClient

from frigate.models import Event, Recordings, ReviewSegment
from frigate.stats.emitter import StatsEmitter
from frigate.test.http_api.base_http_test import BaseTestHttp


class TestHttpApp(BaseTestHttp):
    def setUp(self):
        super().setUp([Event, Recordings, ReviewSegment])
        self.app = super().create_app()

    ####################################################################################################################
    ################################### GET /stats Endpoint ###########################################################
    ####################################################################################################################
    def test_stats_endpoint(self):
        stats = Mock(spec=StatsEmitter)
        stats.get_latest_stats.return_value = self.test_stats
        app = super().create_app(stats)

        with TestClient(app) as client:
            response = client.get("/stats")
            response_json = response.json()
            assert response_json == self.test_stats
137 frigate/test/http_api/test_http_event.py Normal file
@ -0,0 +1,137 @@
from datetime import datetime

from fastapi.testclient import TestClient

from frigate.models import Event, Recordings, ReviewSegment
from frigate.test.http_api.base_http_test import BaseTestHttp


class TestHttpApp(BaseTestHttp):
    def setUp(self):
        super().setUp([Event, Recordings, ReviewSegment])
        self.app = super().create_app()

    ####################################################################################################################
    ################################### GET /events Endpoint ##########################################################
    ####################################################################################################################
    def test_get_event_list_no_events(self):
        with TestClient(self.app) as client:
            events = client.get("/events").json()
            assert len(events) == 0

    def test_get_event_list_no_match_event_id(self):
        id = "123456.random"
        with TestClient(self.app) as client:
            super().insert_mock_event(id)
            events = client.get("/events", params={"event_id": "abc"}).json()
            assert len(events) == 0

    def test_get_event_list_match_event_id(self):
        id = "123456.random"
        with TestClient(self.app) as client:
            super().insert_mock_event(id)
            events = client.get("/events", params={"event_id": id}).json()
            assert len(events) == 1
            assert events[0]["id"] == id

    def test_get_event_list_match_length(self):
        now = int(datetime.now().timestamp())

        id = "123456.random"
        with TestClient(self.app) as client:
            super().insert_mock_event(id, now, now + 1)
            events = client.get(
                "/events", params={"max_length": 1, "min_length": 1}
            ).json()
            assert len(events) == 1
            assert events[0]["id"] == id

    def test_get_event_list_no_match_max_length(self):
        now = int(datetime.now().timestamp())

        with TestClient(self.app) as client:
            id = "123456.random"
            super().insert_mock_event(id, now, now + 2)
            events = client.get("/events", params={"max_length": 1}).json()
            assert len(events) == 0

    def test_get_event_list_no_match_min_length(self):
        now = int(datetime.now().timestamp())

        with TestClient(self.app) as client:
            id = "123456.random"
            super().insert_mock_event(id, now, now + 2)
            events = client.get("/events", params={"min_length": 3}).json()
            assert len(events) == 0

    def test_get_event_list_limit(self):
        id = "123456.random"
        id2 = "54321.random"

        with TestClient(self.app) as client:
            super().insert_mock_event(id)
            events = client.get("/events").json()
            assert len(events) == 1
            assert events[0]["id"] == id

            super().insert_mock_event(id2)
            events = client.get("/events").json()
            assert len(events) == 2

            events = client.get("/events", params={"limit": 1}).json()
            assert len(events) == 1
            assert events[0]["id"] == id

            events = client.get("/events", params={"limit": 3}).json()
            assert len(events) == 2

    def test_get_event_list_no_match_has_clip(self):
        now = int(datetime.now().timestamp())

        with TestClient(self.app) as client:
            id = "123456.random"
            super().insert_mock_event(id, now, now + 2)
            events = client.get("/events", params={"has_clip": 0}).json()
            assert len(events) == 0

    def test_get_event_list_has_clip(self):
        with TestClient(self.app) as client:
            id = "123456.random"
            super().insert_mock_event(id, has_clip=True)
            events = client.get("/events", params={"has_clip": 1}).json()
            assert len(events) == 1
            assert events[0]["id"] == id

    def test_get_event_list_sort_score(self):
        with TestClient(self.app) as client:
            id = "123456.random"
            id2 = "54321.random"
            super().insert_mock_event(id, top_score=37, score=37, data={"score": 50})
            super().insert_mock_event(id2, top_score=47, score=47, data={"score": 20})
            events = client.get("/events", params={"sort": "score_asc"}).json()
            assert len(events) == 2
            assert events[0]["id"] == id2
            assert events[1]["id"] == id

            events = client.get("/events", params={"sort": "score_des"}).json()
            assert len(events) == 2
            assert events[0]["id"] == id
            assert events[1]["id"] == id2

    def test_get_event_list_sort_start_time(self):
        now = int(datetime.now().timestamp())

        with TestClient(self.app) as client:
            id = "123456.random"
            id2 = "54321.random"
            super().insert_mock_event(id, start_time=now + 3)
            super().insert_mock_event(id2, start_time=now)
            events = client.get("/events", params={"sort": "date_asc"}).json()
            assert len(events) == 2
            assert events[0]["id"] == id2
            assert events[1]["id"] == id

            events = client.get("/events", params={"sort": "date_desc"}).json()
            assert len(events) == 2
            assert events[0]["id"] == id
            assert events[1]["id"] == id2
@ -569,3 +569,177 @@ class TestHttpReview(BaseTestHttp):
        recording_ids_in_db_after = self._get_recordings(ids)
        assert len(review_ids_in_db_after) == 0
        assert len(recording_ids_in_db_after) == 0

    ####################################################################################################################
    ################################### GET /review/activity/motion Endpoint ##########################################
    ####################################################################################################################
    def test_review_activity_motion_no_data_for_time_range(self):
        now = datetime.now().timestamp()

        with TestClient(self.app) as client:
            params = {
                "after": now,
                "before": now + 3,
            }
            response = client.get("/review/activity/motion", params=params)
            assert response.status_code == 200
            response_json = response.json()
            assert len(response_json) == 0

    def test_review_activity_motion(self):
        now = int(datetime.now().timestamp())

        with TestClient(self.app) as client:
            one_m = int((datetime.now() + timedelta(minutes=1)).timestamp())
            id = "123456.random"
            id2 = "123451.random"
            super().insert_mock_recording(id, now + 1, now + 2, motion=101)
            super().insert_mock_recording(id2, one_m + 1, one_m + 2, motion=200)
            params = {
                "after": now,
                "before": one_m + 3,
                "scale": 1,
            }
            response = client.get("/review/activity/motion", params=params)
            assert response.status_code == 200
            response_json = response.json()
            assert len(response_json) == 61
            self.assertDictEqual(
                {"motion": 50.5, "camera": "front_door", "start_time": now + 1},
                response_json[0],
            )
            for item in response_json[1:-1]:
                self.assertDictEqual(
                    {"motion": 0.0, "camera": "", "start_time": item["start_time"]},
                    item,
                )
            self.assertDictEqual(
                {"motion": 100.0, "camera": "front_door", "start_time": one_m + 1},
                response_json[len(response_json) - 1],
            )

    ####################################################################################################################
    ################################### GET /review/event/{event_id} Endpoint #########################################
    ####################################################################################################################
    def test_review_event_not_found(self):
        with TestClient(self.app) as client:
            response = client.get("/review/event/123456.random")
            assert response.status_code == 404
            response_json = response.json()
            self.assertDictEqual(
                {"success": False, "message": "Review item not found"},
                response_json,
            )

    def test_review_event_not_found_in_data(self):
        now = datetime.now().timestamp()

        with TestClient(self.app) as client:
            id = "123456.random"
            super().insert_mock_review_segment(id, now + 1, now + 2)
            response = client.get(f"/review/event/{id}")
            assert response.status_code == 404
            response_json = response.json()
            self.assertDictEqual(
                {"success": False, "message": "Review item not found"},
                response_json,
            )

    def test_review_get_specific_event(self):
        now = datetime.now().timestamp()

        with TestClient(self.app) as client:
            event_id = "123456.event.random"
            super().insert_mock_event(event_id)
            review_id = "123456.review.random"
            super().insert_mock_review_segment(
                review_id, now + 1, now + 2, data={"detections": {"event_id": event_id}}
            )
            response = client.get(f"/review/event/{event_id}")
            assert response.status_code == 200
            response_json = response.json()
            self.assertDictEqual(
                {
                    "id": review_id,
                    "camera": "front_door",
                    "start_time": now + 1,
                    "end_time": now + 2,
                    "has_been_reviewed": False,
                    "severity": SeverityEnum.alert,
                    "thumb_path": "False",
                    "data": {"detections": {"event_id": event_id}},
                },
                response_json,
            )

    ####################################################################################################################
    ################################### GET /review/{review_id} Endpoint ##############################################
    ####################################################################################################################
    def test_review_not_found(self):
        with TestClient(self.app) as client:
            response = client.get("/review/123456.random")
            assert response.status_code == 404
            response_json = response.json()
            self.assertDictEqual(
                {"success": False, "message": "Review item not found"},
                response_json,
            )

    def test_get_review(self):
        now = datetime.now().timestamp()

        with TestClient(self.app) as client:
            review_id = "123456.review.random"
            super().insert_mock_review_segment(review_id, now + 1, now + 2)
            response = client.get(f"/review/{review_id}")
            assert response.status_code == 200
            response_json = response.json()
            self.assertDictEqual(
                {
                    "id": review_id,
                    "camera": "front_door",
                    "start_time": now + 1,
                    "end_time": now + 2,
                    "has_been_reviewed": False,
                    "severity": SeverityEnum.alert,
                    "thumb_path": "False",
                    "data": {},
                },
                response_json,
            )

    ####################################################################################################################
    ################################### DELETE /review/{review_id}/viewed Endpoint ####################################
    ####################################################################################################################
    def test_delete_review_viewed_review_not_found(self):
        with TestClient(self.app) as client:
            review_id = "123456.random"
            response = client.delete(f"/review/{review_id}/viewed")
            assert response.status_code == 404
            response_json = response.json()
            self.assertDictEqual(
                {"success": False, "message": f"Review {review_id} not found"},
                response_json,
            )

    def test_delete_review_viewed(self):
        now = datetime.now().timestamp()

        with TestClient(self.app) as client:
            review_id = "123456.review.random"
            super().insert_mock_review_segment(
                review_id, now + 1, now + 2, has_been_reviewed=True
            )
            review_before = ReviewSegment.get(ReviewSegment.id == review_id)
            assert review_before.has_been_reviewed == True

            response = client.delete(f"/review/{review_id}/viewed")
            assert response.status_code == 200
            response_json = response.json()
            self.assertDictEqual(
                {"success": True, "message": f"Set Review {review_id} as not viewed"},
                response_json,
            )

            review_after = ReviewSegment.get(ReviewSegment.id == review_id)
            assert review_after.has_been_reviewed == False
@ -2,7 +2,6 @@ import datetime
import logging
import os
import unittest
from unittest.mock import Mock

from fastapi.testclient import TestClient
from peewee_migrate import Router
@ -13,7 +12,6 @@ from playhouse.sqliteq import SqliteQueueDatabase
from frigate.api.fastapi_app import create_fastapi_app
from frigate.config import FrigateConfig
from frigate.models import Event, Recordings, Timeline
from frigate.stats.emitter import StatsEmitter
from frigate.test.const import TEST_DB, TEST_DB_CLEANUPS


@ -111,43 +109,6 @@ class TestHttp(unittest.TestCase):
        except OSError:
            pass

    def test_get_event_list(self):
        app = create_fastapi_app(
            FrigateConfig(**self.minimal_config),
            self.db,
            None,
            None,
            None,
            None,
            None,
            None,
            None,
        )
        id = "123456.random"
        id2 = "7890.random"

        with TestClient(app) as client:
            _insert_mock_event(id)
            events = client.get("/events").json()
            assert events
            assert len(events) == 1
            assert events[0]["id"] == id
            _insert_mock_event(id2)
            events = client.get("/events").json()
            assert events
            assert len(events) == 2
            events = client.get(
                "/events",
                params={"limit": 1},
            ).json()
            assert events
            assert len(events) == 1
            events = client.get(
                "/events",
                params={"has_clip": 0},
            ).json()
            assert not events

    def test_get_good_event(self):
        app = create_fastapi_app(
            FrigateConfig(**self.minimal_config),
@ -381,25 +342,6 @@ class TestHttp(unittest.TestCase):
            assert recording
            assert recording[0]["id"] == id

    def test_stats(self):
        stats = Mock(spec=StatsEmitter)
        stats.get_latest_stats.return_value = self.test_stats
        app = create_fastapi_app(
            FrigateConfig(**self.minimal_config),
            self.db,
            None,
            None,
            None,
            None,
            None,
            stats,
            None,
        )

        with TestClient(app) as client:
            full_stats = client.get("/stats").json()
            assert full_stats == self.test_stats


def _insert_mock_event(
    id: str,
@ -5,6 +5,7 @@ import json
import logging
import os
import re
import resource
import signal
import subprocess as sp
import traceback
@ -632,3 +633,19 @@ async def get_video_properties(
        result["fourcc"] = fourcc

    return result


def set_file_limit() -> None:
    # Newer versions of containerd 2.X+ impose a very low soft file limit of 1024
    # This applies to OSs like HA OS (see https://github.com/home-assistant/operating-system/issues/4110)
    # Attempt to increase this limit
    soft_limit = int(os.getenv("SOFT_FILE_LIMIT", "65536") or "65536")

    current_soft, current_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    logger.info(f"Current file limits - Soft: {current_soft}, Hard: {current_hard}")

    new_soft = min(soft_limit, current_hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, current_hard))
    logger.info(
        f"File limit set. New soft limit: {new_soft}, Hard limit remains: {current_hard}"
    )
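The limit applied above can be overridden through the `SOFT_FILE_LIMIT` environment variable that the function reads; a docker-compose sketch (the value is chosen for illustration):

```yaml
services:
  frigate:
    environment:
      - SOFT_FILE_LIMIT=32768 # overrides the 65536 default used by set_file_limit()
```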
@ -11,6 +11,18 @@
    "! pip install -q super_gradients==3.7.1"
   ]
  },
  {
   "cell_type": "code",
   "source": [
    "! sed -i 's/sghub.deci.ai/sg-hub-nv.s3.amazonaws.com/' /usr/local/lib/python3.10/dist-packages/super_gradients/training/pretrained_models.py\n",
    "! sed -i 's/sghub.deci.ai/sg-hub-nv.s3.amazonaws.com/' /usr/local/lib/python3.10/dist-packages/super_gradients/training/utils/checkpoint_utils.py"
   ],
   "metadata": {
    "id": "NiRCt917KKcL"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
@ -72,4 +84,4 @@
  },
 "nbformat": 4,
 "nbformat_minor": 0
}
}