diff --git a/docs/docs/configuration/advanced.md b/docs/docs/configuration/advanced.md
index b237d4b66..730b0f6b4 100644
--- a/docs/docs/configuration/advanced.md
+++ b/docs/docs/configuration/advanced.md
@@ -41,7 +41,7 @@ environment_vars:

 ### `database`

-Event and recording information is managed in a sqlite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
+Tracked object and recording information is managed in a sqlite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.

 If you are storing your database on a network share (SMB, NFS, etc), you may get a `database is locked` error message on startup. You can customize the location of the database in the config if necessary.

diff --git a/docs/docs/configuration/camera_specific.md b/docs/docs/configuration/camera_specific.md
index a3619c5fb..689465d90 100644
--- a/docs/docs/configuration/camera_specific.md
+++ b/docs/docs/configuration/camera_specific.md
@@ -187,4 +187,4 @@ ffmpeg:

 ### TP-Link VIGI Cameras

-TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings you may have problems when trying to watch recorded events. For example Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`.
+TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings you may have problems when trying to watch recorded footage. For example, Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`.
diff --git a/docs/docs/configuration/cameras.md b/docs/docs/configuration/cameras.md
index 21680af87..c1c7d7cba 100644
--- a/docs/docs/configuration/cameras.md
+++ b/docs/docs/configuration/cameras.md
@@ -7,7 +7,23 @@ title: Camera Configuration

 Several inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create recordings from a higher resolution stream, or vice versa.

-A camera is enabled by default but can be temporarily disabled by using `enabled: False`. Existing events and recordings can still be accessed. Live streams, recording and detecting are not working. Camera specific configurations will be used.
+A camera is enabled by default but can be temporarily disabled by using `enabled: False`. Existing tracked objects and recordings can still be accessed, but live streams, recording, and detecting will not work while the camera is disabled. Camera specific configurations will still be used.

 Each role can only be assigned to one input per camera.
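+
+For example, roles might be split across two streams like this (a sketch only; the camera name and stream URLs are placeholders to replace with your own):
+
+```yaml
+cameras:
+  example_camera:
+    ffmpeg:
+      inputs:
+        - path: rtsp://user:password@192.168.1.10:554/main # <- example high resolution stream
+          roles:
+            - record
+        - path: rtsp://user:password@192.168.1.10:554/sub # <- example low resolution stream
+          roles:
+            - detect
+```
+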
 The options for roles are as follows:
diff --git a/docs/docs/configuration/genai.md b/docs/docs/configuration/genai.md
index 95e881a57..623cf588c 100644
--- a/docs/docs/configuration/genai.md
+++ b/docs/docs/configuration/genai.md
@@ -5,6 +5,8 @@ title: Generative AI

 Generative AI can be used to automatically generate descriptions based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate by providing detailed text descriptions as a basis of the search query.

+Semantic Search must be enabled to use Generative AI. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
+
 ## Configuration

 Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 providers available to integrate with Frigate.
diff --git a/docs/docs/configuration/index.md b/docs/docs/configuration/index.md
index 01fe97530..a0b58558d 100644
--- a/docs/docs/configuration/index.md
+++ b/docs/docs/configuration/index.md
@@ -72,7 +72,7 @@ Here are some common starter configuration examples. Refer to the [reference con
 - Hardware acceleration for decoding video
 - USB Coral detector
 - Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not
-- Continue to keep all video if it was during any event for 30 days
+- Continue to keep all video for 30 days if it qualified as an alert or detection
 - Save snapshots for 30 days
 - Motion mask for the camera timestamp

@@ -95,10 +95,12 @@ record:
   retain:
     days: 7
     mode: motion
-  events:
+  alerts:
     retain:
-      default: 30
-      mode: motion
+      days: 30
+  detections:
+    retain:
+      days: 30

 snapshots:
   enabled: True
@@ -128,7 +130,7 @@ cameras:
 - VAAPI hardware acceleration for decoding video
 - USB Coral detector
 - Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not
-- Continue to keep all video if it was during any event for 30 days
+- Continue to keep all video for 30 days if it qualified as an alert or detection
 - Save snapshots for 30 days
 - Motion mask for the camera timestamp

@@ -149,10 +151,12 @@ record:
   retain:
     days: 7
     mode: motion
-  events:
+  alerts:
     retain:
-      default: 30
-      mode: motion
+      days: 30
+  detections:
+    retain:
+      days: 30

 snapshots:
   enabled: True
@@ -182,7 +186,7 @@ cameras:
 - VAAPI hardware acceleration for decoding video
 - OpenVino detector
 - Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not
-- Continue to keep all video if it was during any event for 30 days
+- Continue to keep all video for 30 days if it qualified as an alert or detection
 - Save snapshots for 30 days
 - Motion mask for the camera timestamp

@@ -214,10 +218,12 @@ record:
   retain:
     days: 7
     mode: motion
-  events:
+  alerts:
     retain:
-      default: 30
-      mode: motion
+      days: 30
+  detections:
+    retain:
+      days: 30

 snapshots:
   enabled: True
diff --git a/docs/docs/configuration/motion_detection.md b/docs/docs/configuration/motion_detection.md
index 0f48f62b2..0844c04a8 100644
--- a/docs/docs/configuration/motion_detection.md
+++ b/docs/docs/configuration/motion_detection.md
@@ -13,11 +13,11 @@ Once motion is detected, it tries to group up nearby areas of motion together in

 The default motion settings should work well for the majority of cameras, however there are cases where tuning motion detection can lead to better and more optimal results.
 Each camera has its own environment with different variables that affect motion, this means that the same motion settings will not fit all of your cameras.

-Before tuning motion it is important to understand the goal. In an optimal configuration, motion from people and cars would be detected, but not grass moving, lighting changes, timestamps, etc. If your motion detection is too sensitive, you will experience higher CPU loads and greater false positives from the increased rate of object detection. If it is not sensitive enough, you will miss events.
+Before tuning motion it is important to understand the goal. In an optimal configuration, motion from people and cars would be detected, but not grass moving, lighting changes, timestamps, etc. If your motion detection is too sensitive, you will experience higher CPU loads and greater false positives from the increased rate of object detection. If it is not sensitive enough, you will miss objects that you want to track.

 ## Create Motion Masks

-First, mask areas with regular motion not caused by the objects you want to detect. The best way to find candidates for motion masks is by watching the debug stream with motion boxes enabled. Good use cases for motion masks are timestamps or tree limbs and large bushes that regularly move due to wind. When possible, avoid creating motion masks that would block motion detection for objects you want to track **even if they are in locations where you don't want events**. Motion masks should not be used to avoid detecting objects in specific areas. More details can be found [in the masks docs.](/configuration/masks.md).
+First, mask areas with regular motion not caused by the objects you want to detect. The best way to find candidates for motion masks is by watching the debug stream with motion boxes enabled. Good use cases for motion masks are timestamps or tree limbs and large bushes that regularly move due to wind. When possible, avoid creating motion masks that would block motion detection for objects you want to track **even if they are in locations where you don't want alerts or detections**. Motion masks should not be used to avoid detecting objects in specific areas. More details can be found [in the masks docs](/configuration/masks.md).

 ## Prepare For Testing

@@ -29,7 +29,7 @@ Now that things are set up, find a time to tune that represents normal circumstances

 :::note

-Remember that motion detection is just used to determine when object detection should be used. You should aim to have motion detection sensitive enough that you won't miss events from objects you want to detect with object detection. The goal is to prevent object detection from running constantly for every small pixel change in the image. Windy days are still going to result in lots of motion being detected.
+Remember that motion detection is just used to determine when object detection should be used. You should aim to have motion detection sensitive enough that you won't miss objects you want to detect with object detection. The goal is to prevent object detection from running constantly for every small pixel change in the image. Windy days are still going to result in lots of motion being detected.

 :::

@@ -94,7 +94,7 @@ motion:

 :::tip

-Some cameras like doorbell cameras may have missed detections when someone walks directly in front of the camera and the lightning_threshold causes motion detection to be re-calibrated. In this case, it may be desirable to increase the `lightning_threshold` to ensure these events are not missed.
+Some cameras like doorbell cameras may have missed detections when someone walks directly in front of the camera and the `lightning_threshold` causes motion detection to be re-calibrated. In this case, it may be desirable to increase the `lightning_threshold` to ensure these objects are not missed.

 :::

diff --git a/docs/docs/configuration/object_filters.md b/docs/docs/configuration/object_filters.md
index 3339df904..ca7260094 100644
--- a/docs/docs/configuration/object_filters.md
+++ b/docs/docs/configuration/object_filters.md
@@ -20,15 +20,13 @@ For object filters in your configuration, any single detection below `min_score`

 In frame 2, the score is below the `min_score` value, so Frigate ignores it and it becomes a 0.0. The computed score is the median of the score history (padding to at least 3 values), and only when that computed score crosses the `threshold` is the object marked as a true positive. That happens in frame 4 in the example.

-show image of snapshot vs event with differing scores
-
 ### Minimum Score

-Any detection below `min_score` will be immediately thrown out and never tracked because it is considered a false positive. If `min_score` is too low then false positives may be detected and tracked which can confuse the object tracker and may lead to wasted resources. If `min_score` is too high then lower scoring true positives like objects that are further away or partially occluded may be thrown out which can also confuse the tracker and cause valid events to be lost or disjointed.
+Any detection below `min_score` will be immediately thrown out and never tracked because it is considered a false positive. If `min_score` is too low then false positives may be detected and tracked which can confuse the object tracker and may lead to wasted resources. If `min_score` is too high then lower scoring true positives like objects that are further away or partially occluded may be thrown out which can also confuse the tracker and cause valid tracked objects to be lost or disjointed.

 ### Threshold

-`threshold` is used to determine that the object is a true positive. Once an object is detected with a score >= `threshold` object is considered a true positive. If `threshold` is too low then some higher scoring false positives may create an event. If `threshold` is too high then true positive events may be missed due to the object never scoring high enough.
+`threshold` is used to determine that the object is a true positive. Once an object is detected with a score >= `threshold`, the object is considered a true positive. If `threshold` is too low then some higher scoring false positives may create a tracked object. If `threshold` is too high then true positive tracked objects may be missed due to the object never scoring high enough.

 ## Object Shape

@@ -52,7 +50,16 @@ Conceptually, a ratio of 1 is a square, 0.5 is a "tall skinny" box, and 2 is a "

 ### Zones

-[Required zones](/configuration/zones.md) can be a great tool to reduce false positives that may be detected in the sky or other areas that are not of interest. The required zones will only create events for objects that enter the zone.
+[Required zones](/configuration/zones.md) can be a great tool to reduce false positives that may be detected in the sky or other areas that are not of interest. Required zones will only create tracked objects for objects that enter the zone, as shown in the sketch below.
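+
+As a sketch, alerts could be limited to objects that enter a specific zone (assuming a zone named `front_yard_street` has been defined on the camera):
+
+```yaml
+review:
+  alerts:
+    required_zones:
+      - front_yard_street
+```
+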
 ### Object Masks

diff --git a/docs/docs/configuration/record.md b/docs/docs/configuration/record.md
index 8ba12a30d..e0e42f22f 100644
--- a/docs/docs/configuration/record.md
+++ b/docs/docs/configuration/record.md
@@ -3,7 +3,7 @@ id: record
 title: Recording
 ---

-Recordings can be enabled and are stored at `/media/frigate/recordings`. The folder structure for the recordings is `YYYY-MM-DD/HH//MM.SS.mp4` in **UTC time**. These recordings are written directly from your camera stream without re-encoding. Each camera supports a configurable retention policy in the config. Frigate chooses the largest matching retention value between the recording retention and the event retention when determining if a recording should be removed.
+Recordings can be enabled and are stored at `/media/frigate/recordings`. The folder structure for the recordings is `YYYY-MM-DD/HH//MM.SS.mp4` in **UTC time**. These recordings are written directly from your camera stream without re-encoding. Each camera supports a configurable retention policy in the config. Frigate chooses the largest matching retention value between the recording retention and the tracked object retention when determining if a recording should be removed.

 New recording segments are written from the camera stream to cache, they are only moved to disk if they match the setup recording retention policy.

@@ -53,7 +53,7 @@ record:

 ### Minimum: Alerts only

-If you only want to retain video that occurs during an event, this config will discard video unless an alert is ongoing.
+If you only want to retain video while a tracked object is active, this config will discard video unless an alert is ongoing.

 ```yaml
 record:
@@ -72,7 +72,7 @@ As of Frigate 0.12 if there is less than an hour left of storage, the oldest 2 h

 ## Configuring Recording Retention

-Frigate supports both continuous and event based recordings with separate retention modes and retention periods.
+Frigate supports both continuous and tracked object-based recordings with separate retention modes and retention periods.

 :::tip

@@ -95,7 +95,7 @@ Continuous recording supports different retention modes [which are described bel

 ### Object Recording

-The number of days to record review items can be specified for review items classified as alerts as well as events.
+The number of days to retain recordings can be specified separately for review items classified as alerts and as detections.

 ```yaml
 record:
@@ -108,13 +108,13 @@ record:
       days: 10 # <- number of days to keep detections recordings
 ```

-This configuration will retain recording segments that overlap with alerts and detections for 10 days. Because multiple events can reference the same recording segments, this avoids storing duplicate footage for overlapping events and reduces overall storage needs.
+This configuration will retain recording segments that overlap with alerts and detections for 10 days. Because multiple tracked objects can reference the same recording segments, this avoids storing duplicate footage for overlapping tracked objects and reduces overall storage needs.

 **WARNING**: Recordings still must be enabled in the config. If a camera has recordings disabled in the config, enabling via the methods listed above will have no effect.

 ## What do the different retain modes mean?

-Frigate saves from the stream with the `record` role in 10 second segments. These options determine which recording segments are kept for continuous recording (but can also affect events).
+Frigate saves from the stream with the `record` role in 10 second segments. These options determine which recording segments are kept for continuous recording (but can also affect tracked objects).

 Let's say you have Frigate configured so that your doorbell camera would retain the last **2** days of continuous recording.

diff --git a/docs/docs/configuration/reference.md b/docs/docs/configuration/reference.md
index fa5471f97..2eae6ef90 100644
--- a/docs/docs/configuration/reference.md
+++ b/docs/docs/configuration/reference.md
@@ -271,13 +271,13 @@ detect:
   # especially when using separate streams for detect and record.
   # Use this setting to make the timeline bounding boxes more closely align
   # with the recording. The value can be positive or negative.
-  # TIP: Imagine there is an event clip with a person walking from left to right.
-  # If the event timeline bounding box is consistently to the left of the person
+  # TIP: Imagine there is a tracked object clip with a person walking from left to right.
+  # If the tracked object lifecycle bounding box is consistently to the left of the person
   # then the value should be decreased. Similarly, if a person is walking from
   # left to right and the bounding box is consistently ahead of the person
   # then the value should be increased.
   # TIP: This offset is dynamic so you can change the value and it will update existing
-  # events, this makes it easy to tune.
+  # tracked objects, which makes it easy to tune.
   # WARNING: Fast moving objects will likely not have the bounding box align.
   annotation_offset: 0

@@ -394,9 +394,9 @@ record:
   sync_recordings: False
   # Optional: Retention settings for recording
   retain:
-    # Optional: Number of days to retain recordings regardless of events (default: shown below)
-    # NOTE: This should be set to 0 and retention should be defined in events section below
-    # if you only want to retain recordings of events.
+    # Optional: Number of days to retain recordings regardless of tracked objects (default: shown below)
+    # NOTE: This should be set to 0 and retention should be defined in the alerts and detections sections below
+    # if you only want to retain recordings of alerts and detections.
     days: 0
     # Optional: Mode for retention. Available options are: all, motion, and active_objects
     #   all - save all recording segments regardless of activity
@@ -460,7 +460,7 @@ record:
       #   never stored, so setting the mode to "all" here won't bring them back.
       mode: motion

-# Optional: Configuration for the jpg snapshots written to the clips directory for each event
+# Optional: Configuration for the jpg snapshots written to the clips directory for each tracked object
 # NOTE: Can be overridden at the camera level
 snapshots:
   # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
@@ -491,10 +491,10 @@ snapshots:
 semantic_search:
   # Optional: Enable semantic search (default: shown below)
   enabled: False
-  # Optional: Re-index embeddings database from historical events (default: shown below)
+  # Optional: Re-index embeddings database from historical tracked objects (default: shown below)
   reindex: False

-# Optional: Configuration for AI generated event descriptions
+# Optional: Configuration for AI generated tracked object descriptions
 # NOTE: Semantic Search must be enabled for this to do anything.
 # WARNING: Depending on the provider, this will send thumbnails over the internet
 # to Google or OpenAI's LLMs to generate descriptions. It can be overridden at
diff --git a/docs/docs/configuration/restream.md b/docs/docs/configuration/restream.md
index f9d4e4094..211050972 100644
--- a/docs/docs/configuration/restream.md
+++ b/docs/docs/configuration/restream.md
@@ -21,7 +21,7 @@ Birdseye RTSP restream can be accessed at `rtsp://<frigate_host>:8554/birdseye`.

 ```yaml
 birdseye:
-  restream: true
+  restream: True
 ```

 ### Securing Restream With Authentication
diff --git a/docs/docs/configuration/review.md b/docs/docs/configuration/review.md
index 4a39924eb..f8e6dfff5 100644
--- a/docs/docs/configuration/review.md
+++ b/docs/docs/configuration/review.md
@@ -7,13 +7,13 @@ The Review page of the Frigate UI is for quickly reviewing historical footage of

 Review items are filterable by date, object type, and camera.

-### Review items vs. events
+### Review items vs. tracked objects (formerly "events")

 In Frigate 0.13 and earlier versions, the UI presented "events". An event was synonymous with a tracked or detected object. In Frigate 0.14 and later, a review item is a time period where any number of tracked objects were active.

 For example, consider a situation where two people walked past your house. One was walking a dog. At the same time, a car drove by on the street behind them.

-In this scenario, Frigate 0.13 and earlier would show 4 events in the UI - one for each person, another for the dog, and yet another for the car. You would have had 4 separate videos to watch even though they would have all overlapped.
+In this scenario, Frigate 0.13 and earlier would show 4 "events" in the UI - one for each person, another for the dog, and yet another for the car. You would have had 4 separate videos to watch even though they would have all overlapped.

 In 0.14 and later, all of that is bundled into a single review item which starts and ends to capture all of that activity. Reviews for a single camera cannot overlap. Once you have watched that time period on that camera, it is marked as reviewed.
diff --git a/docs/docs/configuration/semantic_search.md b/docs/docs/configuration/semantic_search.md
index bce31f706..a82b9ccca 100644
--- a/docs/docs/configuration/semantic_search.md
+++ b/docs/docs/configuration/semantic_search.md
@@ -3,13 +3,15 @@ id: semantic_search
 title: Using Semantic Search
 ---

-The Search feature in Frigate allows you to find tracked objects within your review items using either the image itself, a user-defined text description, or an automatically generated one. This semantic search functionality works by creating _embeddings_ — numerical vector representations — for both the images and text descriptions of your tracked objects. By comparing these embeddings, Frigate assesses their similarities to deliver relevant search results.
+Semantic Search in Frigate allows you to find tracked objects within your review items using either the image itself, a user-defined text description, or an automatically generated one. This feature works by creating _embeddings_ — numerical vector representations — for both the images and text descriptions of your tracked objects. By comparing these embeddings, Frigate assesses their similarities to deliver relevant search results.

 Frigate has support for two models to create embeddings, both of which run locally: [OpenAI CLIP](https://openai.com/research/clip) and [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). Embeddings are then saved to a local instance of [ChromaDB](https://trychroma.com).

+Semantic Search is accessed via the _Explore_ view in the Frigate UI.
+
 ## Configuration

-Semantic Search is a global configuration setting.
+Semantic Search is disabled by default, and must be enabled in your config file before it can be used. It is a global configuration setting.

 ```yaml
 semantic_search:
@@ -31,7 +33,7 @@ This model is able to embed both images and text into the same vector space, whi

 ### all-MiniLM-L6-v2

-This is a sentence embedding model that has been fine tuned on over 1 billion sentence pairs. This model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate event descriptions.
+This is a sentence embedding model that has been fine-tuned on over 1 billion sentence pairs. This model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.

 ## Usage

diff --git a/docs/docs/configuration/zones.md b/docs/docs/configuration/zones.md
index e8a9a49fd..aef6b0a5b 100644
--- a/docs/docs/configuration/zones.md
+++ b/docs/docs/configuration/zones.md
@@ -64,7 +64,7 @@ cameras:

 ### Restricting zones to specific objects

-Sometimes you want to limit a zone to specific object types to have more granular control of when events/snapshots are saved. The following example will limit one zone to person objects and the other to cars.
+Sometimes you want to limit a zone to specific object types to have more granular control of when alerts, detections, and snapshots are saved. The following example will limit one zone to person objects and the other to cars.

 ```yaml
 cameras:
@@ -80,7 +80,7 @@ cameras:
         - car
 ```

-Only car objects can trigger the `front_yard_street` zone and only person can trigger the `entire_yard`. You will get events for person objects that enter anywhere in the yard, and events for cars only if they enter the street.
+Only car objects can trigger the `front_yard_street` zone and only person objects can trigger the `entire_yard` zone. A tracked object will be created for any `person` that enters anywhere in the yard, and for a `car` only if it enters the street.

 ### Zone Loitering

diff --git a/docs/docs/frigate/glossary.md b/docs/docs/frigate/glossary.md
index 5be40f249..bd039554c 100644
--- a/docs/docs/frigate/glossary.md
+++ b/docs/docs/frigate/glossary.md
@@ -16,10 +16,6 @@ A box returned from the object detection model that outlines an object in the frame.
 - A gray thin line indicates that object is detected as being stationary
 - A thick line indicates that object is the subject of autotracking (when enabled).

-## Event
-
-The time period starting when a tracked object entered the frame and ending when it left the frame, including any time that the object remained still. Events are saved when it is considered a [true positive](#threshold) and meets the requirements for a snapshot or recording to be saved.
-
 ## False Positive

 An incorrect detection of an object type. For example a dog being detected as a person, a chair being detected as a dog, etc. A person being detected in an area you want to ignore is not a false positive.
@@ -64,6 +60,10 @@ The threshold is the median score that an object must reach in order to be considered a true positive.

 The top score for an object is the highest median score for an object.

+## Tracked Object ("event" in previous versions)
+
+An object that is tracked from the time it enters the frame until the time it leaves, including any time that the object remained still. Tracked objects are saved when they are considered a [true positive](#threshold) and meet the requirements for a snapshot or recording to be saved.
+
 ## Zone

-Zones are areas of interest, zones can be used for notifications and for limiting the areas where Frigate will create an [event](#event). [See the zone docs for more info](/configuration/zones)
+Zones are areas of interest. Zones can be used for notifications and for limiting the areas where Frigate will create a [tracked object](#tracked-object-event-in-previous-versions). [See the zone docs for more info](/configuration/zones)
diff --git a/docs/docs/guides/getting_started.md b/docs/docs/guides/getting_started.md
index f18740d85..908e0ce1b 100644
--- a/docs/docs/guides/getting_started.md
+++ b/docs/docs/guides/getting_started.md
@@ -238,7 +238,7 @@ Now that you know where you need to mask, use the "Mask & Zone creator" in the o

 :::warning

-Note that motion masks should not be used to mark out areas where you do not want objects to be detected or to reduce false positives. They do not alter the image sent to object detection, so you can still get events and detections in areas with motion masks. These only prevent motion in these areas from initiating object detection.
+Note that motion masks should not be used to mark out areas where you do not want objects to be detected or to reduce false positives. They do not alter the image sent to object detection, so you can still get tracked objects, alerts, and detections in areas with motion masks. These only prevent motion in these areas from initiating object detection.

 :::

@@ -302,7 +302,7 @@ If you only plan to use Frigate for recording, it is still recommended to define

 :::

-By default, Frigate will retain video of all events for 10 days. The full set of options for recording can be found [here](../configuration/reference.md).
+By default, Frigate will retain video of all tracked objects for 10 days. The full set of options for recording can be found [here](../configuration/reference.md).

 ### Step 7: Complete config

diff --git a/docs/docs/guides/ha_notifications.md b/docs/docs/guides/ha_notifications.md
index cc729af04..a92dab10f 100644
--- a/docs/docs/guides/ha_notifications.md
+++ b/docs/docs/guides/ha_notifications.md
@@ -7,11 +7,11 @@ The best way to get started with notifications for Frigate is to use the [Bluepr

 It is generally recommended to trigger notifications based on the `frigate/reviews` mqtt topic. This provides the event_id(s) needed to fetch [thumbnails/snapshots/clips](../integrations/home-assistant.md#notification-api) and other useful information to customize when and where you want to receive alerts. The data is published in the form of a change feed, which means you can reference the "previous state" of the object in the `before` section and the "current state" of the object in the `after` section. You can see an example [here](../integrations/mqtt.md#frigateevents).

-Here is a simple example of a notification automation of events which will update the existing notification for each change. This means the image you see in the notification will update as Frigate finds a "better" image.
+Here is a simple example of a notification automation for tracked objects, which will update the existing notification for each change. This means the image you see in the notification will update as Frigate finds a "better" image.
 ```yaml
 automation:
-  - alias: Notify of events
+  - alias: Notify of tracked objects
     trigger:
       platform: mqtt
       topic: frigate/events
diff --git a/docs/docs/integrations/api.md b/docs/docs/integrations/api.md
index a0208bc01..edbb3d881 100644
--- a/docs/docs/integrations/api.md
+++ b/docs/docs/integrations/api.md
@@ -189,15 +189,15 @@ Example parameters:

### `GET /api//