diff --git a/docs/docs/configuration/birdseye.md b/docs/docs/configuration/birdseye.md
index 8edf50583..2c9fbbdf4 100644
--- a/docs/docs/configuration/birdseye.md
+++ b/docs/docs/configuration/birdseye.md
@@ -1,15 +1,20 @@
 # Birdseye

-Birdseye allows a heads-up view of your cameras to see what is going on around your property / space without having to watch all cameras that may have nothing happening. Birdseye allows specific modes that intelligently show and disappear based on what you care about.
+In addition to Frigate's Live camera dashboard, Birdseye allows a portable heads-up view of your cameras so you can see what is going on around your property / space without having to watch all cameras that may have nothing happening. Birdseye offers specific modes that intelligently show and hide cameras based on what you care about.
+
+Birdseye can be viewed by adding the "Birdseye" camera to a Camera Group in the Web UI. Add a Camera Group by pressing the "+" icon on the Live page, and choose "Birdseye" as one of the cameras.
+
+Birdseye can also be used in Home Assistant dashboards, cast to media devices, etc.

 ## Birdseye Behavior

 ### Birdseye Modes

 Birdseye offers different modes to customize which cameras show under which circumstances.
- - **continuous:** All cameras are always included
- - **motion:** Cameras that have detected motion within the last 30 seconds are included
- - **objects:** Cameras that have tracked an active object within the last 30 seconds are included
+
+- **continuous:** All cameras are always included
+- **motion:** Cameras that have detected motion within the last 30 seconds are included
+- **objects:** Cameras that have tracked an active object within the last 30 seconds are included

 ### Custom Birdseye Icon

@@ -79,7 +84,7 @@ cameras:
       order: 2
 ```

-*Note*: Cameras are sorted by default using their name to ensure a constant view inside Birdseye.
+_Note_: Cameras are sorted by default using their name to ensure a constant view inside Birdseye.

 ### Birdseye Cameras

diff --git a/docs/docs/configuration/cameras.md b/docs/docs/configuration/cameras.md
index 394cf935a..a941dd895 100644
--- a/docs/docs/configuration/cameras.md
+++ b/docs/docs/configuration/cameras.md
@@ -80,7 +80,6 @@ This list of working and non-working PTZ cameras is based on user feedback.
 | Brand or specific camera | PTZ Controls | Autotracking | Notes |
 | ------------------------ | :----------: | :----------: | ------------------------------------------------------------------------------------------------------------------------------------------------ |
 | Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
-| Amcrest ASH21 | ❌ | ❌ | No ONVIF support |
 | Ctronics PTZ | ✅ | ❌ | |
 | Dahua | ✅ | ✅ | |
 | Foscam R5 | ✅ | ❌ | |
@@ -90,8 +89,7 @@ This list of working and non-working PTZ cameras is based on user feedback.
 | Reolink E1 Zoom | ✅ | ❌ | |
 | Reolink RLC-823A 16x | ✅ | ❌ | |
 | Sunba 405-D20X | ✅ | ❌ | |
-| Tapo C200 | ✅ | ❌ | Incomplete ONVIF support |
-| Tapo C210 | ❌ | ❌ | Incomplete ONVIF support |
+| Tapo | ✅ | ❌ | Most Tapo PTZ cameras support PTZ controls (ONVIF Service Port: 2020) |
 | Uniview IPC672LR-AX4DUPK | ✅ | ❌ | Firmware says FOV relative movement is supported, but camera doesn't actually move when sending ONVIF commands |
 | Vikylin PTZ-2804X-I2 | ❌ | ❌ | Incomplete ONVIF support |
@@ -113,4 +111,4 @@ camera_groups:
       - garage_cam
     icon: car
     order: 0
-```
\ No newline at end of file
+```
diff --git a/docs/docs/configuration/index.md b/docs/docs/configuration/index.md
index d2b186cbc..d1e382e40 100644
--- a/docs/docs/configuration/index.md
+++ b/docs/docs/configuration/index.md
@@ -113,7 +113,7 @@ cameras:
           - detect
     motion:
       mask:
-        - 0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432
+        - 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400
 ```

 ### Standalone Intel Mini PC with USB Coral
@@ -167,7 +167,7 @@ cameras:
           - detect
     motion:
       mask:
-        - 0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432
+        - 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400
 ```

 ### Home Assistant integrated Intel Mini PC with OpenVino
@@ -232,5 +232,5 @@ cameras:
           - detect
     motion:
       mask:
-        - 0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432
+        - 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400
 ```
diff --git a/docs/docs/configuration/live.md b/docs/docs/configuration/live.md
index 163b16179..42b7be83b 100644
--- a/docs/docs/configuration/live.md
+++ b/docs/docs/configuration/live.md
@@ -3,11 +3,11 @@ id: live
 title: Live View
 ---

-Frigate has different live view options, some of which require the bundled `go2rtc` to be configured as shown in the [step by step guide](/guides/configuring_go2rtc).
+Frigate intelligently displays your camera streams on the Live view dashboard. To conserve bandwidth and resources, camera images update once per minute when no detectable activity is occurring. As soon as any motion is detected, cameras seamlessly switch to a live stream.

-## Live View Options
+## Live View technologies

-Live view options can be selected while viewing the live stream. The options are:
+Frigate uses three different streaming technologies to display your camera streams. The highest quality and smoothest live view requires the bundled `go2rtc` to be configured as shown in the [step by step guide](/guides/configuring_go2rtc).

 | Source | Latency | Frame Rate | Resolution | Audio | Requires go2rtc | Other Limitations |
 | ------ | ------- | ------------------------------------- | -------------- | ---------------------------- | --------------- | ------------------------------------------------ |
diff --git a/docs/docs/configuration/masks.md b/docs/docs/configuration/masks.md
index ae64e7e5f..838765f49 100644
--- a/docs/docs/configuration/masks.md
+++ b/docs/docs/configuration/masks.md
@@ -5,7 +5,9 @@ title: Masks

 ## Motion masks

-Motion masks are used to prevent unwanted types of motion from triggering detection. Try watching the debug feed with `Motion Boxes` enabled to see what may be regularly detected as motion. For example, you want to mask out your timestamp, the sky, rooftops, etc. Keep in mind that this mask only prevents motion from being detected and does not prevent objects from being detected if object detection was started due to motion in unmasked areas. Motion is also used during object tracking to refine the object detection area in the next frame. Over masking will make it more difficult for objects to be tracked. To see this effect, create a mask, and then watch the video feed with `Motion Boxes` enabled again.
+Motion masks are used to prevent unwanted types of motion from triggering detection. Try watching the Debug feed (Settings --> Debug) with `Motion Boxes` enabled to see what may be regularly detected as motion. For example, you may want to mask out your timestamp, the sky, rooftops, etc. Keep in mind that this mask only prevents motion from being detected and does not prevent objects from being detected if object detection was started due to motion in unmasked areas. Motion is also used during object tracking to refine the object detection area in the next frame. _Over-masking will make it more difficult for objects to be tracked._
+
+See [further clarification](#further-clarification) below on why you may not want to use a motion mask.

 ## Object filter masks

@@ -20,32 +22,30 @@ Object filter masks can be used to filter out stubborn false positives in fixed

 To create a poly mask:

 1. Visit the Web UI
-1. Click the camera you wish to create a mask for
-1. Select "Debug" at the top
-1. Expand the "Options" below the video feed
-1. Click "Mask & Zone creator"
-1. Click "Add" on the type of mask or zone you would like to create
-1. Click on the camera's latest image to create a masked area. The yaml representation will be updated in real-time
-1. When you've finished creating your mask, click "Copy" and paste the contents into your config file and restart Frigate
+2. Click/tap the gear icon and open "Settings"
+3. Select "Mask / zone editor"
+4. At the top right, select the camera you wish to create a mask or zone for
+5. Click the plus icon under the type of mask or zone you would like to create
+6. Click on the camera's latest image to create the points for a masked area. Click the first point again to close the polygon.
+7. When you've finished creating your mask, press Save.
+8. Restart Frigate to apply your changes.

-Example of a finished row corresponding to the below example image:
+Your config file will be updated with the relative coordinates of the mask/zone:

 ```yaml
 motion:
-  mask: "0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432"
+  mask: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400"
 ```

-Multiple masks can be listed.
+Multiple masks can be listed in your config.

 ```yaml
 motion:
   mask:
-    - 458,1346,336,973,317,869,375,866,432
-    - 0,461,3,0,1919,0,1919,843,1699,492,1344
+    - 0.239,1.246,0.175,0.901,0.165,0.805,0.195,0.802
+    - 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456
 ```

-![poly](/img/example-mask-poly-min.png)
-
 ### Further Clarification

 This is a response to a [question posed on reddit](https://www.reddit.com/r/homeautomation/comments/ppxdve/replacing_my_doorbell_with_a_security_camera_a_6/hd876w4?utm_source=share&utm_medium=web2x&context=3):
diff --git a/docs/docs/configuration/reference.md b/docs/docs/configuration/reference.md
index 658e6488d..f1929aa42 100644
--- a/docs/docs/configuration/reference.md
+++ b/docs/docs/configuration/reference.md
@@ -279,7 +279,7 @@ objects:
   # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
   # Checks based on the bottom center of the bounding box of the object.
   # NOTE: This mask is COMBINED with the object type specific mask below
-  mask: 0,0,1000,0,1000,200,0,200
+  mask: 0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278
   # Optional: filters to reduce false positives for specific object types
   filters:
     person:
@@ -297,7 +297,7 @@ objects:
       threshold: 0.7
       # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
      # Checks based on the bottom center of the bounding box of the object
-      mask: 0,0,1000,0,1000,200,0,200
+      mask: 0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278

 # Optional: Review configuration
 # NOTE: Can be overridden at the camera level
@@ -353,7 +353,7 @@ motion:
   frame_height: 100
   # Optional: motion mask
   # NOTE: see docs for more detailed info on creating masks
-  mask: 0,900,1080,900,1080,1920,0,1920
+  mask: 0.000,0.469,1.000,0.469,1.000,1.000,0.000,1.000
   # Optional: improve contrast (default: shown below)
   # Enables dynamic contrast improvement. This should help improve night detections at the cost of making motion detection more sensitive
   # for daytime.
@@ -547,7 +547,7 @@ cameras:
       front_steps:
         # Required: List of x,y coordinates to define the polygon of the zone.
         # NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
-        coordinates: 545,1077,747,939,788,805
+        coordinates: 0.284,0.997,0.389,0.869,0.410,0.745
         # Optional: Number of consecutive frames required for object to be considered present in the zone (default: shown below).
         inertia: 3
         # Optional: Number of seconds that an object must loiter to be considered in the zone (default: shown below)
diff --git a/docs/docs/configuration/review.md b/docs/docs/configuration/review.md
index 667401d0c..fdf8c5259 100644
--- a/docs/docs/configuration/review.md
+++ b/docs/docs/configuration/review.md
@@ -3,7 +3,13 @@ id: review
 title: Review
 ---

-Review items are saved as periods of time where frigate detected events. After watching the preview of a review item it is marked as reviewed.
+The Review page of the Frigate UI is for quickly reviewing historical footage of interest from your cameras. _Review items_ are indicated on a vertical timeline and displayed as a grid of previews: bandwidth-optimized, low frame rate, low resolution videos. Hovering over or swiping a preview plays the video and marks it as reviewed. If more in-depth analysis is required, the preview can be clicked/tapped to open the full frame rate, full resolution recording.
+
+Review items are filterable by date, object type, and camera.
+
+## Alerts and Detections
+
+Not every segment of video captured by Frigate will be of the same level of interest to you. Video of people who enter your property may be of a different priority than video of people walking by on the sidewalk. For this reason, Frigate 0.14 categorizes review items as _alerts_ and _detections_. By default, all person and car objects are considered alerts. You can refine the categorization of your review items by configuring required zones for them.

 ## Restricting alerts to specific labels

diff --git a/docs/docs/configuration/snapshots.md b/docs/docs/configuration/snapshots.md
index dce689a67..50a6c5652 100644
--- a/docs/docs/configuration/snapshots.md
+++ b/docs/docs/configuration/snapshots.md
@@ -5,6 +5,8 @@ title: Snapshots

 Frigate can save a snapshot image to `/media/frigate/clips` for each object that is detected named as `-.jpg`. They are also accessible [via the api](../integrations/api.md#get-apieventsidsnapshotjpg)

+For users with Frigate+ enabled, snapshots are accessible in the UI in the Frigate+ pane to allow for quick submission to the Frigate+ service.
+
 To only save snapshots for objects that enter a specific zone, [see the zone docs](./zones.md#restricting-snapshots-to-specific-zones)

-Snapshots sent via MQTT are configured in the [config file](https://docs.frigate.video/configuration/) under `cameras -> your_camera -> mqtt`
+Snapshots sent via MQTT are configured in the [config file](https://docs.frigate.video/configuration/) under `cameras -> your_camera -> mqtt`
diff --git a/docs/docs/configuration/user_interface.md b/docs/docs/configuration/user_interface.md
deleted file mode 100644
index 72ce5a5d6..000000000
--- a/docs/docs/configuration/user_interface.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-id: user_interface
-title: User Interface Configurations
----
-
-### Experimental UI
-
-While developing and testing new components, users may decide to opt-in to test potential new features on the front-end.
-
-```yaml
-ui:
-  use_experimental: true
-```
-
-Note that experimental changes may contain bugs or may be removed at any time in future releases of the software. Use of these features are presented as-is and with no functional guarantee.
diff --git a/docs/docs/configuration/zones.md b/docs/docs/configuration/zones.md
index 23fcea986..de60fcd67 100644
--- a/docs/docs/configuration/zones.md
+++ b/docs/docs/configuration/zones.md
@@ -10,7 +10,7 @@ For example, the cat in this image is currently in Zone 1, but **not** Zone 2.

 Zones cannot have the same name as a camera. If desired, a single zone can include multiple cameras if you have multiple cameras covering the same area by configuring zones with the same name for each camera.

-During testing, enable the Zones option for the debug feed so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.
+During testing, enable the Zones option for the Debug view of your camera (Settings --> Debug) so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.

 To create a zone, follow [the steps for a "Motion mask"](masks.md), but use the section of the web UI for creating a zone instead.

@@ -47,7 +47,6 @@ cameras:
       front_steps:
         coordinates: ...
       front_yard:
         coordinates: ...
       inner_yard:
         coordinates: ...
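+        # (illustrative) each zone's coordinates value is a comma-separated list of relative x,y points,
+        # e.g. coordinates: 0.284,0.997,0.389,0.869,0.410,0.745 as shown in the full configuration reference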
-
 ```

 ### Restricting snapshots to specific zones
@@ -131,3 +130,17 @@ cameras:
         objects:
           - car
 ```
+
+### Loitering Time
+
+Zones support a `loitering_time` configuration which can be used to only consider an object as part of a zone if it loiters in the zone for the specified number of seconds. This can be used, for example, to create alerts for cars that stop on the street but not cars that just drive past your camera.
+
+```yaml
+cameras:
+  name_of_your_camera:
+    zones:
+      front_yard:
+        loitering_time: 5 # unit is in seconds
+        objects:
+          - person
+```
diff --git a/docs/docs/frigate/camera_setup.md b/docs/docs/frigate/camera_setup.md
index 0e53b4809..33ae24cab 100644
--- a/docs/docs/frigate/camera_setup.md
+++ b/docs/docs/frigate/camera_setup.md
@@ -5,9 +5,9 @@ title: Camera setup
 Cameras configured to output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. H.265 has better compression, but less compatibility. Chrome 108+, Safari and Edge are the only browsers able to play H.265 and only support a limited number of H.265 profiles.

 Ideally, cameras should be configured directly for the desired resolutions and frame rates you want to use in Frigate. Reducing frame rates within Frigate will waste CPU resources decoding extra frames that are discarded. There are three different goals that you want to tune your stream configurations around.

-- **Detection**: This is the only stream that Frigate will decode for processing. Also, this is the stream where snapshots will be generated from. The resolution for detection should be tuned for the size of the objects you want to detect. See [Choosing a detect resolution](#choosing-a-detect-resolution) for more details. The recommended frame rate is 5fps, but may need to be higher for very fast moving objects. Higher resolutions and frame rates will drive higher CPU usage on your server.
+- **Detection**: This is the only stream that Frigate will decode for processing. Also, this is the stream where snapshots will be generated from. The resolution for detection should be tuned for the size of the objects you want to detect. See [Choosing a detect resolution](#choosing-a-detect-resolution) for more details. The recommended frame rate is 5fps, but may need to be higher (10fps is the recommended maximum for most users) for very fast-moving objects. Higher resolutions and frame rates will drive higher CPU usage on your server.

-- **Recording**: This stream should be the resolution you wish to store for reference. Typically, this will be the highest resolution your camera supports. I recommend setting this feed to 15 fps.
+- **Recording**: This stream should be the resolution you wish to store for reference. Typically, this will be the highest resolution your camera supports. I recommend setting this feed in your camera's firmware to 15 fps. See the example configuration below for how detection and recording streams are typically combined.

 - **Stream Viewing**: This stream will be rebroadcast as is to Home Assistant for viewing with the stream component. Setting this resolution too high will use significant bandwidth when viewing streams in Home Assistant, and they may not load reliably over slower connections.
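+
+For example, a camera is commonly configured with a lower resolution sub stream for detection and a full resolution stream for recording. The snippet below is only an illustrative sketch: the camera name, RTSP URLs, and detect resolution are placeholders for your own values.
+
+```yaml
+cameras:
+  example_cam: # placeholder camera name
+    ffmpeg:
+      inputs:
+        # lower resolution sub stream, decoded for detection
+        - path: rtsp://user:password@10.0.0.10:554/sub_stream
+          roles:
+            - detect
+        # full resolution main stream, stored for recordings
+        - path: rtsp://user:password@10.0.0.10:554/main_stream
+          roles:
+            - record
+    detect:
+      width: 1280
+      height: 720
+      fps: 5
+```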
diff --git a/docs/docs/frigate/index.md b/docs/docs/frigate/index.md
index 08d8f1de6..73b3305e7 100644
--- a/docs/docs/frigate/index.md
+++ b/docs/docs/frigate/index.md
@@ -20,6 +20,10 @@ Use of a [Google Coral Accelerator](https://coral.ai/products/) is optional, but

 ## Screenshots

+![Live View](/img/live-view.png)
+
+![Review Items](/img/review-items.png)
+
 ![Media Browser](/img/media_browser-min.png)

 ![Notification](/img/notification-min.png)
diff --git a/docs/static/img/live-view.png b/docs/static/img/live-view.png
new file mode 100644
index 000000000..6bc5eed76
Binary files /dev/null and b/docs/static/img/live-view.png differ
diff --git a/docs/static/img/review-items.png b/docs/static/img/review-items.png
new file mode 100644
index 000000000..641813b96
Binary files /dev/null and b/docs/static/img/review-items.png differ