Use tracked object instead of event language in docs and UI (#13685)

* Verbiage update: use tracked object instead of event

* tweaks
Josh Hawkins 2024-09-11 19:53:58 -05:00 committed by GitHub
parent 62657ad05a
commit b4acf4f341
25 changed files with 97 additions and 98 deletions

View File

@ -41,7 +41,7 @@ environment_vars:
### `database`
Event and recording information is managed in a sqlite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
Tracked object and recording information is managed in a sqlite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
If you are storing your database on a network share (SMB, NFS, etc), you may get a `database is locked` error message on startup. You can customize the location of the database in the config if necessary.
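For example, a minimal sketch of overriding the database location (the path shown is illustrative):

```yaml
database:
  # Illustrative path; point this at local storage if /config lives on a network share
  path: /media/frigate/frigate.db
```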

View File

@ -187,4 +187,4 @@ ffmpeg:
### TP-Link VIGI Cameras
TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings you may have problems when trying to watch recorded events. For example Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`.
TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings you may have problems when trying to watch recorded footage. For example Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`.

View File

@ -7,7 +7,7 @@ title: Camera Configuration
Several inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create recordings from a higher resolution stream, or vice versa.
A camera is enabled by default but can be temporarily disabled by using `enabled: False`. Existing events and recordings can still be accessed. Live streams, recording and detecting are not working. Camera specific configurations will be used.
A camera is enabled by default but can be temporarily disabled by using `enabled: False`. Existing tracked objects and recordings can still be accessed, but live streaming, recording, and detecting will not work while the camera is disabled. Camera-specific configurations will still be used.
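For example, a camera can be disabled temporarily like this (the camera name is illustrative; other camera settings are omitted):

```yaml
cameras:
  back_yard:
    enabled: False
```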
Each role can only be assigned to one input per camera. The options for roles are as follows:

View File

@ -5,6 +5,8 @@ title: Generative AI
Generative AI can be used to automatically generate descriptions based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate by providing detailed text descriptions as a basis of the search query.
Semantic Search must be enabled to use Generative AI. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 providers available to integrate with Frigate.
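As a sketch, enabling Generative AI globally with a locally hosted provider might look like the following (the provider, URL, and model are illustrative — see the provider sections below for exact options):

```yaml
genai:
  enabled: True
  provider: ollama
  base_url: http://localhost:11434
  model: llava
```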

View File

@ -72,7 +72,7 @@ Here are some common starter configuration examples. Refer to the [reference con
- Hardware acceleration for decoding video
- USB Coral detector
- Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not
- Continue to keep all video if it was during any event for 30 days
- Continue to keep all video if it qualified as an alert or detection for 30 days
- Save snapshots for 30 days
- Motion mask for the camera timestamp
@ -95,10 +95,12 @@ record:
retain:
days: 7
mode: motion
events:
alerts:
retain:
default: 30
mode: motion
days: 30
detections:
retain:
days: 30
snapshots:
enabled: True
@ -128,7 +130,7 @@ cameras:
- VAAPI hardware acceleration for decoding video
- USB Coral detector
- Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not
- Continue to keep all video if it was during any event for 30 days
- Continue to keep all video if it qualified as an alert or detection for 30 days
- Save snapshots for 30 days
- Motion mask for the camera timestamp
@ -149,10 +151,12 @@ record:
retain:
days: 7
mode: motion
events:
alerts:
retain:
default: 30
mode: motion
days: 30
detections:
retain:
days: 30
snapshots:
enabled: True
@ -182,7 +186,7 @@ cameras:
- VAAPI hardware acceleration for decoding video
- OpenVino detector
- Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not
- Continue to keep all video if it was during any event for 30 days
- Continue to keep all video if it qualified as an alert or detection for 30 days
- Save snapshots for 30 days
- Motion mask for the camera timestamp
@ -214,10 +218,12 @@ record:
retain:
days: 7
mode: motion
events:
alerts:
retain:
default: 30
mode: motion
days: 30
detections:
retain:
days: 30
snapshots:
enabled: True

View File

@ -13,11 +13,11 @@ Once motion is detected, it tries to group up nearby areas of motion together in
The default motion settings should work well for the majority of cameras, however there are cases where tuning motion detection can lead to better and more optimal results. Each camera has its own environment with different variables that affect motion, this means that the same motion settings will not fit all of your cameras.
Before tuning motion it is important to understand the goal. In an optimal configuration, motion from people and cars would be detected, but not grass moving, lighting changes, timestamps, etc. If your motion detection is too sensitive, you will experience higher CPU loads and greater false positives from the increased rate of object detection. If it is not sensitive enough, you will miss events.
Before tuning motion it is important to understand the goal. In an optimal configuration, motion from people and cars would be detected, but not grass moving, lighting changes, timestamps, etc. If your motion detection is too sensitive, you will experience higher CPU loads and more false positives from the increased rate of object detection. If it is not sensitive enough, you will miss objects that you want to track.
## Create Motion Masks
First, mask areas with regular motion not caused by the objects you want to detect. The best way to find candidates for motion masks is by watching the debug stream with motion boxes enabled. Good use cases for motion masks are timestamps or tree limbs and large bushes that regularly move due to wind. When possible, avoid creating motion masks that would block motion detection for objects you want to track **even if they are in locations where you don't want events**. Motion masks should not be used to avoid detecting objects in specific areas. More details can be found [in the masks docs.](/configuration/masks.md).
First, mask areas with regular motion not caused by the objects you want to detect. The best way to find candidates for motion masks is by watching the debug stream with motion boxes enabled. Good use cases for motion masks are timestamps or tree limbs and large bushes that regularly move due to wind. When possible, avoid creating motion masks that would block motion detection for objects you want to track **even if they are in locations where you don't want alerts or detections**. Motion masks should not be used to avoid detecting objects in specific areas. More details can be found [in the masks docs.](/configuration/masks.md).
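For example, masking out a timestamp overlay in the top-left corner of the frame might look like this (the camera name and coordinates are illustrative and depend on your camera's resolution):

```yaml
cameras:
  front_door:
    motion:
      mask:
        - 0,0,400,0,400,60,0,60
```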
## Prepare For Testing
@ -29,7 +29,7 @@ Now that things are set up, find a time to tune that represents normal circumsta
:::note
Remember that motion detection is just used to determine when object detection should be used. You should aim to have motion detection sensitive enough that you won't miss events from objects you want to detect with object detection. The goal is to prevent object detection from running constantly for every small pixel change in the image. Windy days are still going to result in lots of motion being detected.
Remember that motion detection is just used to determine when object detection should be used. You should aim to have motion detection sensitive enough that you won't miss objects you want to detect with object detection. The goal is to prevent object detection from running constantly for every small pixel change in the image. Windy days are still going to result in lots of motion being detected.
:::
@ -94,7 +94,7 @@ motion:
:::tip
Some cameras like doorbell cameras may have missed detections when someone walks directly in front of the camera and the lightning_threshold causes motion detection to be re-calibrated. In this case, it may be desirable to increase the `lightning_threshold` to ensure these events are not missed.
Some cameras like doorbell cameras may have missed detections when someone walks directly in front of the camera and the lightning_threshold causes motion detection to be re-calibrated. In this case, it may be desirable to increase the `lightning_threshold` to ensure these objects are not missed.
:::
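A sketch of raising the threshold for a doorbell camera (the value is illustrative; higher values make re-calibration less likely to trigger):

```yaml
cameras:
  doorbell:
    motion:
      lightning_threshold: 0.9
```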

View File

@ -20,15 +20,13 @@ For object filters in your configuration, any single detection below `min_score`
In frame 2, the score is below the `min_score` value, so Frigate ignores it and it becomes a 0.0. The computed score is the median of the score history (padding to at least 3 values), and only when that computed score crosses the `threshold` is the object marked as a true positive. That happens in frame 4 in the example.
show image of snapshot vs event with differing scores
### Minimum Score
Any detection below `min_score` will be immediately thrown out and never tracked because it is considered a false positive. If `min_score` is too low then false positives may be detected and tracked which can confuse the object tracker and may lead to wasted resources. If `min_score` is too high then lower scoring true positives like objects that are further away or partially occluded may be thrown out which can also confuse the tracker and cause valid events to be lost or disjointed.
Any detection below `min_score` will be immediately thrown out and never tracked because it is considered a false positive. If `min_score` is too low then false positives may be detected and tracked which can confuse the object tracker and may lead to wasted resources. If `min_score` is too high then lower scoring true positives like objects that are further away or partially occluded may be thrown out which can also confuse the tracker and cause valid tracked objects to be lost or disjointed.
### Threshold
`threshold` is used to determine that the object is a true positive. Once an object is detected with a score >= `threshold` object is considered a true positive. If `threshold` is too low then some higher scoring false positives may create an event. If `threshold` is too high then true positive events may be missed due to the object never scoring high enough.
`threshold` is used to determine that the object is a true positive. Once an object is detected with a score >= `threshold`, the object is considered a true positive. If `threshold` is too low then some higher scoring false positives may create a tracked object. If `threshold` is too high then true positive tracked objects may be missed due to the object never scoring high enough.
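As a sketch, both values can be tuned per object under the object filters (the numbers are illustrative, not recommendations):

```yaml
objects:
  filters:
    person:
      min_score: 0.5
      threshold: 0.7
```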
## Object Shape
@ -52,7 +50,7 @@ Conceptually, a ratio of 1 is a square, 0.5 is a "tall skinny" box, and 2 is a "
### Zones
[Required zones](/configuration/zones.md) can be a great tool to reduce false positives that may be detected in the sky or other areas that are not of interest. The required zones will only create events for objects that enter the zone.
[Required zones](/configuration/zones.md) can be a great tool to reduce false positives that may be detected in the sky or other areas that are not of interest. The required zones will only create tracked objects for objects that enter the zone.
### Object Masks

View File

@ -3,7 +3,7 @@ id: record
title: Recording
---
Recordings can be enabled and are stored at `/media/frigate/recordings`. The folder structure for the recordings is `YYYY-MM-DD/HH/<camera_name>/MM.SS.mp4` in **UTC time**. These recordings are written directly from your camera stream without re-encoding. Each camera supports a configurable retention policy in the config. Frigate chooses the largest matching retention value between the recording retention and the event retention when determining if a recording should be removed.
Recordings can be enabled and are stored at `/media/frigate/recordings`. The folder structure for the recordings is `YYYY-MM-DD/HH/<camera_name>/MM.SS.mp4` in **UTC time**. These recordings are written directly from your camera stream without re-encoding. Each camera supports a configurable retention policy in the config. Frigate chooses the largest matching retention value between the recording retention and the tracked object retention when determining if a recording should be removed.
New recording segments are written from the camera stream to cache, they are only moved to disk if they match the setup recording retention policy.
@ -53,7 +53,7 @@ record:
### Minimum: Alerts only
If you only want to retain video that occurs during an event, this config will discard video unless an alert is ongoing.
If you only want to retain video from when a tracked object is active, this config will discard video unless an alert is ongoing.
```yaml
record:
@ -72,7 +72,7 @@ As of Frigate 0.12 if there is less than an hour left of storage, the oldest 2 h
## Configuring Recording Retention
Frigate supports both continuous and event based recordings with separate retention modes and retention periods.
Frigate supports both continuous and tracked object based recordings with separate retention modes and retention periods.
:::tip
@ -95,7 +95,7 @@ Continuous recording supports different retention modes [which are described bel
### Object Recording
The number of days to record review items can be specified for review items classified as alerts as well as events.
The number of days to retain recordings can be specified separately for review items classified as alerts and detections.
```yaml
record:
@ -108,13 +108,13 @@ record:
days: 10 # <- number of days to keep detections recordings
```
This configuration will retain recording segments that overlap with alerts and detections for 10 days. Because multiple events can reference the same recording segments, this avoids storing duplicate footage for overlapping events and reduces overall storage needs.
This configuration will retain recording segments that overlap with alerts and detections for 10 days. Because multiple tracked objects can reference the same recording segments, this avoids storing duplicate footage for overlapping tracked objects and reduces overall storage needs.
**WARNING**: Recordings still must be enabled in the config. If a camera has recordings disabled in the config, enabling via the methods listed above will have no effect.
## What do the different retain modes mean?
Frigate saves from the stream with the `record` role in 10 second segments. These options determine which recording segments are kept for continuous recording (but can also affect events).
Frigate saves from the stream with the `record` role in 10 second segments. These options determine which recording segments are kept for continuous recording (but can also affect tracked objects).
Let's say you have Frigate configured so that your doorbell camera would retain the last **2** days of continuous recording.

View File

@ -271,13 +271,13 @@ detect:
# especially when using separate streams for detect and record.
# Use this setting to make the timeline bounding boxes more closely align
# with the recording. The value can be positive or negative.
# TIP: Imagine there is an event clip with a person walking from left to right.
# If the event timeline bounding box is consistently to the left of the person
# TIP: Imagine there is a tracked object clip with a person walking from left to right.
# If the tracked object lifecycle bounding box is consistently to the left of the person
# then the value should be decreased. Similarly, if a person is walking from
# left to right and the bounding box is consistently ahead of the person
# then the value should be increased.
# TIP: This offset is dynamic so you can change the value and it will update existing
# events, this makes it easy to tune.
# tracked objects, which makes it easy to tune.
# WARNING: Fast moving objects will likely not have the bounding box align.
annotation_offset: 0
@ -394,9 +394,9 @@ record:
sync_recordings: False
# Optional: Retention settings for recording
retain:
# Optional: Number of days to retain recordings regardless of events (default: shown below)
# NOTE: This should be set to 0 and retention should be defined in events section below
# if you only want to retain recordings of events.
# Optional: Number of days to retain recordings regardless of tracked objects (default: shown below)
# NOTE: This should be set to 0 and retention should be defined in the alerts and detections sections below
# if you only want to retain recordings of alerts and detections.
days: 0
# Optional: Mode for retention. Available options are: all, motion, and active_objects
# all - save all recording segments regardless of activity
@ -460,7 +460,7 @@ record:
# never stored, so setting the mode to "all" here won't bring them back.
mode: motion
# Optional: Configuration for the jpg snapshots written to the clips directory for each event
# Optional: Configuration for the jpg snapshots written to the clips directory for each tracked object
# NOTE: Can be overridden at the camera level
snapshots:
# Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
@ -491,10 +491,10 @@ snapshots:
semantic_search:
# Optional: Enable semantic search (default: shown below)
enabled: False
# Optional: Re-index embeddings database from historical events (default: shown below)
# Optional: Re-index embeddings database from historical tracked objects (default: shown below)
reindex: False
# Optional: Configuration for AI generated event descriptions
# Optional: Configuration for AI generated tracked object descriptions
# NOTE: Semantic Search must be enabled for this to do anything.
# WARNING: Depending on the provider, this will send thumbnails over the internet
# to Google or OpenAI's LLMs to generate descriptions. It can be overridden at

View File

@ -21,7 +21,7 @@ Birdseye RTSP restream can be accessed at `rtsp://<frigate_host>:8554/birdseye`.
```yaml
birdseye:
restream: true
restream: True
```
### Securing Restream With Authentication

View File

@ -7,13 +7,13 @@ The Review page of the Frigate UI is for quickly reviewing historical footage of
Review items are filterable by date, object type, and camera.
### Review items vs. events
### Review items vs. tracked objects (formerly "events")
In Frigate 0.13 and earlier versions, the UI presented "events". An event was synonymous with a tracked or detected object. In Frigate 0.14 and later, a review item is a time period where any number of tracked objects were active.
For example, consider a situation where two people walked past your house. One was walking a dog. At the same time, a car drove by on the street behind them.
In this scenario, Frigate 0.13 and earlier would show 4 events in the UI - one for each person, another for the dog, and yet another for the car. You would have had 4 separate videos to watch even though they would have all overlapped.
In this scenario, Frigate 0.13 and earlier would show 4 "events" in the UI - one for each person, another for the dog, and yet another for the car. You would have had 4 separate videos to watch even though they would have all overlapped.
In 0.14 and later, all of that is bundled into a single review item which starts and ends to capture all of that activity. Reviews for a single camera cannot overlap. Once you have watched that time period on that camera, it is marked as reviewed.

View File

@ -3,13 +3,15 @@ id: semantic_search
title: Using Semantic Search
---
The Search feature in Frigate allows you to find tracked objects within your review items using either the image itself, a user-defined text description, or an automatically generated one. This semantic search functionality works by creating _embeddings_ — numerical vector representations — for both the images and text descriptions of your tracked objects. By comparing these embeddings, Frigate assesses their similarities to deliver relevant search results.
Semantic Search in Frigate allows you to find tracked objects within your review items using either the image itself, a user-defined text description, or an automatically generated one. This feature works by creating _embeddings_ — numerical vector representations — for both the images and text descriptions of your tracked objects. By comparing these embeddings, Frigate assesses their similarities to deliver relevant search results.
Frigate has support for two models to create embeddings, both of which run locally: [OpenAI CLIP](https://openai.com/research/clip) and [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). Embeddings are then saved to a local instance of [ChromaDB](https://trychroma.com).
Semantic Search is accessed via the _Explore_ view in the Frigate UI.
## Configuration
Semantic Search is a global configuration setting.
Semantic Search is disabled by default, and must be enabled in your config file before it can be used. It is a global configuration setting.
```yaml
semantic_search:
@ -31,7 +33,7 @@ This model is able to embed both images and text into the same vector space, whi
### all-MiniLM-L6-v2
This is a sentence embedding model that has been fine tuned on over 1 billion sentence pairs. This model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate event descriptions.
This is a sentence embedding model that has been fine tuned on over 1 billion sentence pairs. This model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
## Usage

View File

@ -64,7 +64,7 @@ cameras:
### Restricting zones to specific objects
Sometimes you want to limit a zone to specific object types to have more granular control of when events/snapshots are saved. The following example will limit one zone to person objects and the other to cars.
Sometimes you want to limit a zone to specific object types to have more granular control of when alerts, detections, and snapshots are saved. The following example will limit one zone to person objects and the other to cars.
```yaml
cameras:
@ -80,7 +80,7 @@ cameras:
- car
```
Only car objects can trigger the `front_yard_street` zone and only person can trigger the `entire_yard`. You will get events for person objects that enter anywhere in the yard, and events for cars only if they enter the street.
Only car objects can trigger the `front_yard_street` zone and only person objects can trigger the `entire_yard`. Objects will be tracked for any `person` that enters anywhere in the yard, and for `car` objects only if they enter the street.
### Zone Loitering

View File

@ -16,10 +16,6 @@ A box returned from the object detection model that outlines an object in the fr
- A gray thin line indicates that object is detected as being stationary
- A thick line indicates that object is the subject of autotracking (when enabled).
## Event
The time period starting when a tracked object entered the frame and ending when it left the frame, including any time that the object remained still. Events are saved when it is considered a [true positive](#threshold) and meets the requirements for a snapshot or recording to be saved.
## False Positive
An incorrect detection of an object type. For example a dog being detected as a person, a chair being detected as a dog, etc. A person being detected in an area you want to ignore is not a false positive.
@ -64,6 +60,10 @@ The threshold is the median score that an object must reach in order to be consi
The top score for an object is the highest median score for an object.
## Tracked Object ("event" in previous versions)
The time period starting when a tracked object entered the frame and ending when it left the frame, including any time that the object remained still. A tracked object is saved when it is considered a [true positive](#threshold) and meets the requirements for a snapshot or recording to be saved.
## Zone
Zones are areas of interest; they can be used for notifications and for limiting the areas where Frigate will create a [tracked object](#tracked-object-event-in-previous-versions). [See the zone docs for more info](/configuration/zones)

View File

@ -238,7 +238,7 @@ Now that you know where you need to mask, use the "Mask & Zone creator" in the o
:::warning
Note that motion masks should not be used to mark out areas where you do not want objects to be detected or to reduce false positives. They do not alter the image sent to object detection, so you can still get events and detections in areas with motion masks. These only prevent motion in these areas from initiating object detection.
Note that motion masks should not be used to mark out areas where you do not want objects to be detected or to reduce false positives. They do not alter the image sent to object detection, so you can still get tracked objects, alerts, and detections in areas with motion masks. These only prevent motion in these areas from initiating object detection.
:::
@ -302,7 +302,7 @@ If you only plan to use Frigate for recording, it is still recommended to define
:::
By default, Frigate will retain video of all events for 10 days. The full set of options for recording can be found [here](../configuration/reference.md).
By default, Frigate will retain video of all tracked objects for 10 days. The full set of options for recording can be found [here](../configuration/reference.md).
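A sketch of writing that retention out explicitly using the alerts and detections sections (day counts are illustrative):

```yaml
record:
  enabled: True
  alerts:
    retain:
      days: 10
  detections:
    retain:
      days: 10
```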
### Step 7: Complete config

View File

@ -7,11 +7,11 @@ The best way to get started with notifications for Frigate is to use the [Bluepr
It is generally recommended to trigger notifications based on the `frigate/reviews` mqtt topic. This provides the event_id(s) needed to fetch [thumbnails/snapshots/clips](../integrations/home-assistant.md#notification-api) and other useful information to customize when and where you want to receive alerts. The data is published in the form of a change feed, which means you can reference the "previous state" of the object in the `before` section and the "current state" of the object in the `after` section. You can see an example [here](../integrations/mqtt.md#frigateevents).
Here is a simple example of a notification automation of events which will update the existing notification for each change. This means the image you see in the notification will update as Frigate finds a "better" image.
Here is a simple example of a notification automation for tracked objects that will update the existing notification for each change. This means the image you see in the notification will update as Frigate finds a "better" image.
```yaml
automation:
- alias: Notify of events
- alias: Notify of tracked object
trigger:
platform: mqtt
topic: frigate/events

View File

@ -189,15 +189,15 @@ Example parameters:
### `GET /api/<camera_name>/<label>/thumbnail.jpg`
Returns the thumbnail from the latest event for the given camera and label combo. Using `any` as the label will return the latest thumbnail regardless of type.
Returns the thumbnail from the latest tracked object for the given camera and label combo. Using `any` as the label will return the latest thumbnail regardless of type.
### `GET /api/<camera_name>/<label>/clip.mp4`
Returns the clip from the latest event for the given camera and label combo. Using `any` as the label will return the latest clip regardless of type.
Returns the clip from the latest tracked object for the given camera and label combo. Using `any` as the label will return the latest clip regardless of type.
### `GET /api/<camera_name>/<label>/snapshot.jpg`
Returns the snapshot image from the latest event for the given camera and label combo. Using `any` as the label will return the latest thumbnail regardless of type.
Returns the snapshot image from the latest tracked object for the given camera and label combo. Using `any` as the label will return the latest thumbnail regardless of type.
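For example, the latest person snapshot from a camera could be fetched like this (the host and camera name are illustrative):

```
http://<frigate_host>:5000/api/front_door/person/snapshot.jpg
```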
### `GET /api/<camera_name>/grid.jpg`
@ -385,9 +385,9 @@ Specific preview frame from preview cache.
Looping image made from preview video / frames during this time range.
| param | Type | Description |
| --------- | ---- | -------------------------------- |
| `format` | str | Format of preview [`gif`, `mp4`] |
| param | Type | Description |
| -------- | ---- | -------------------------------- |
| `format` | str | Format of preview [`gif`, `mp4`] |
## Recordings

View File

@ -148,19 +148,19 @@ Home Assistant > Configuration > Integrations > Frigate > Options
## Entities Provided
| Platform | Description |
| --------------- | --------------------------------------------------------------------------------- |
| `camera` | Live camera stream (requires RTSP). |
| `image` | Image of the latest detected object for each camera. |
| `sensor` | States to monitor Frigate performance, object counts for all zones and cameras. |
| `switch` | Switch entities to toggle detection, recordings and snapshots. |
| `binary_sensor` | A "motion" binary sensor entity per camera/zone/object. |
| Platform | Description |
| --------------- | ------------------------------------------------------------------------------- |
| `camera` | Live camera stream (requires RTSP). |
| `image` | Image of the latest detected object for each camera. |
| `sensor` | States to monitor Frigate performance, object counts for all zones and cameras. |
| `switch` | Switch entities to toggle detection, recordings and snapshots. |
| `binary_sensor` | A "motion" binary sensor entity per camera/zone/object. |
## Media Browser Support
The integration provides:
- Browsing event recordings with thumbnails
- Browsing tracked object recordings with thumbnails
- Browsing snapshots
- Browsing recordings by month, day, camera, time
@ -183,19 +183,19 @@ For clips to be castable to media devices, audio is required and may need to be
Many people do not want to expose Frigate to the web, so the integration creates some public API endpoints that can be used for notifications.
To load a thumbnail for an event:
To load a thumbnail for a tracked object:
```
https://HA_URL/api/frigate/notifications/<event-id>/thumbnail.jpg
```
To load a snapshot for an event:
To load a snapshot for a tracked object:
```
https://HA_URL/api/frigate/notifications/<event-id>/snapshot.jpg
```
To load a video clip of an event:
To load a video clip of a tracked object:
```
https://HA_URL/api/frigate/notifications/<event-id>/clip.mp4

View File

@ -19,7 +19,7 @@ Causes Frigate to exit. Docker should be configured to automatically restart the
### `frigate/events`
Message published for each changed event. The first message is published when the tracked object is no longer marked as a false_positive. When Frigate finds a better snapshot of the tracked object or when a zone change occurs, it will publish a message with the same id. When the event ends, a final message is published with `end_time` set.
Message published for each changed tracked object. The first message is published when the tracked object is no longer marked as a false_positive. When Frigate finds a better snapshot of the tracked object or when a zone change occurs, it will publish a message with the same id. When the tracked object ends, a final message is published with `end_time` set.
```json
{
@ -100,24 +100,22 @@ Message published for each changed review item. The first message is published w
```json
{
"type": "update", // new, update, end
"type": "update", // new, update, end
"before": {
"id": "1718987129.308396-fqk5ka", // review_id
"id": "1718987129.308396-fqk5ka", // review_id
"camera": "front_cam",
"start_time": 1718987129.308396,
"end_time": null,
"severity": "detection",
"thumb_path": "/media/frigate/clips/review/thumb-front_cam-1718987129.308396-fqk5ka.webp",
"data": {
"detections": [ // list of event IDs
"detections": [
// list of event IDs
"1718987128.947436-g92ztx",
"1718987148.879516-d7oq7r",
"1718987126.934663-q5ywpt"
],
"objects": [
"person",
"car"
],
"objects": ["person", "car"],
"sub_labels": [],
"zones": [],
"audio": []
@ -136,14 +134,9 @@ Message published for each changed review item. The first message is published w
"1718987148.879516-d7oq7r",
"1718987126.934663-q5ywpt"
],
"objects": [
"person",
"car"
],
"objects": ["person", "car"],
"sub_labels": ["Bob"],
"zones": [
"front_yard"
],
"zones": ["front_yard"],
"audio": []
}
}
@ -175,13 +168,11 @@ Publishes the count of active objects for the camera for use as a sensor in Home
Assistant. `all` can be used as the object_name for the count of all active objects
for the camera.
### `frigate/<zone_name>/<object_name>`
Publishes the count of objects for the zone for use as a sensor in Home Assistant.
`all` can be used as the object_name for the count of all objects for the zone.
### `frigate/<zone_name>/<object_name>/active`
Publishes the count of active objects for the zone for use as a sensor in Home

View File

@ -19,7 +19,7 @@ Once logged in, you can generate an API key for Frigate in Settings.
### Set your API key
In Frigate, you can use an environment variable or a docker secret named `PLUS_API_KEY` to enable the `SEND TO FRIGATE+` buttons on the events page. Home Assistant Addon users can set it under Settings > Addons > Frigate NVR > Configuration > Options (be sure to toggle the "Show unused optional configuration options" switch).
In Frigate, you can use an environment variable or a docker secret named `PLUS_API_KEY` to enable the `Frigate+` buttons on the Explore page. Home Assistant Addon users can set it under Settings > Addons > Frigate NVR > Configuration > Options (be sure to toggle the "Show unused optional configuration options" switch).
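For Docker Compose users, a sketch of setting the environment variable might look like this (the key value is a placeholder):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    environment:
      PLUS_API_KEY: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```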
:::warning
@ -29,7 +29,7 @@ You cannot use the `environment_vars` section of your configuration file to set
## Submit examples
Once your API key is configured, you can submit examples directly from the events page in Frigate using the `SEND TO FRIGATE+` button.
Once your API key is configured, you can submit examples directly from the Explore page in Frigate using the `Frigate+` button.
:::note

View File

@ -33,7 +33,7 @@ Frigate+ models support a more relevant set of objects for security cameras. Cur
### Label attributes
Frigate has special handling for some labels when using Frigate+ models. `face`, `license_plate`, `amazon`, `ups`, and `fedex` are considered attribute labels which are not tracked like regular objects and do not generate events. In addition, the `threshold` filter will have no effect on these labels. You should adjust the `min_score` and other filter values as needed.
Frigate has special handling for some labels when using Frigate+ models. `face`, `license_plate`, `amazon`, `ups`, and `fedex` are considered attribute labels which are not tracked like regular objects and do not generate review items directly. In addition, the `threshold` filter will have no effect on these labels. You should adjust the `min_score` and other filter values as needed.
In order to have Frigate start using these attribute labels, you will need to add them to the list of objects to track:
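For example (a sketch — include only the attribute labels you actually use alongside your regular objects):

```yaml
objects:
  track:
    - person
    - car
    - face
    - license_plate
```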

View File

@ -17,7 +17,7 @@ ffmpeg:
record: preset-record-generic-audio-aac
```
### I can't view events or recordings in the Web UI.
### I can't view recordings in the Web UI.
Ensure your cameras send h264 encoded video, or [transcode them](/configuration/restream.md).

View File

@ -104,7 +104,7 @@ function ThumbnailRow({
// @ts-expect-error we know this is correct
searchResults[0].event_count
}{" "}
detected objects){" "}
tracked objects){" "}
</span>
)}
</div>

View File

@ -116,7 +116,7 @@ export default function SearchView({
>
<Input
className="text-md w-full bg-muted pr-10"
placeholder={"Search for a detected object..."}
placeholder={"Search for a tracked object..."}
value={search}
onChange={(e) => setSearch(e.target.value)}
/>
@ -145,7 +145,7 @@ export default function SearchView({
{searchTerm.length > 0 && searchResults?.length == 0 && (
<div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
<LuSearchX className="size-16" />
No Detected Objects Found
No Tracked Objects Found
</div>
)}

View File

@ -34,7 +34,7 @@ export default function ObjectSettingsView({
{
param: "bbox",
title: "Bounding boxes",
description: "Show bounding boxes around detected objects",
description: "Show bounding boxes around tracked objects",
},
{
param: "timestamp",
@ -130,7 +130,7 @@ export default function ObjectSettingsView({
to detect objects in your camera's video stream.
</p>
<p>
Debugging view shows a real-time view of detected objects and their
Debugging view shows a real-time view of tracked objects and their
statistics. The object list shows a time-delayed summary of detected
objects.
</p>