Mirror of https://github.com/blakeblackshear/frigate.git, synced 2025-08-13 13:47:36 +02:00

Compare commits
19 Commits

334b6670e1
b5067c07f8
21e9b2f2ce
4a94b43e52
3bda638678
687e118b58
95daf0ba05
213dc97c17
f29cf43f52
aabd5b0077
460e291bf1
ee51326d35
948b087d3c
77589c18f4
6a62467998
6857cc2b97
37618b0f57
e7f6e069f6
ee4767b1ce
Makefile (2 changes)
@@ -1,7 +1,7 @@
 default_target: local
 
 COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
-VERSION = 0.15.0
+VERSION = 0.15.2
 IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
 GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
 BOARDS= #Initialized empty
@@ -105,6 +105,12 @@ genai:
   model: gemini-1.5-flash
 ```
 
+:::note
+
+To use a different Gemini-compatible API endpoint, set the `GEMINI_BASE_URL` environment variable to your provider's API URL.
+
+:::
+
 ## OpenAI
 
 OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
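A minimal sketch of passing that variable through to the container in Docker Compose (the `GEMINI_BASE_URL` name comes from the note above; the service layout and endpoint URL are illustrative):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:0.15.2
    environment:
      # Hypothetical Gemini-compatible endpoint; substitute your provider's URL
      GEMINI_BASE_URL: "https://gemini-proxy.example.com/v1beta"
```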
@@ -20,5 +20,5 @@ In order to install Frigate as a PWA, the following requirements must be met:
 Installation varies slightly based on the device that is being used:
 
 - Desktop: Use the install button typically found in the right edge of the address bar
-- Android: Use the `Install as App` button in the more options menu
+- Android: Use the `Install as App` button in the more options menu for Chrome, and the `Add app to Home screen` button for Firefox
 - iOS: Use the `Add to Homescreen` button in the share menu
@@ -36,8 +36,8 @@ Note that certbot uses symlinks, and those can't be followed by the container un
   frigate:
     ...
     volumes:
-      - /etc/letsencrypt/live/frigate:/etc/letsencrypt/live/frigate:ro
-      - /etc/letsencrypt/archive/frigate:/etc/letsencrypt/archive/frigate:ro
+      - /etc/letsencrypt/live/your.fqdn.net:/etc/letsencrypt/live/frigate:ro
+      - /etc/letsencrypt/archive/your.fqdn.net:/etc/letsencrypt/archive/your.fqdn.net:ro
     ...
 
 ```
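(Both the `live` and `archive` directories are mounted because, as the hunk context notes, certbot's `live/` files are symlinks into `archive/`, and a symlink only resolves inside the container if its target is mounted too.)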
@@ -3,7 +3,7 @@ id: camera_setup
 title: Camera setup
 ---
 
-Cameras configured to output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. H.265 has better compression, but less compatibility. Chrome 108+, Safari and Edge are the only browsers able to play H.265 and only support a limited number of H.265 profiles. Ideally, cameras should be configured directly for the desired resolutions and frame rates you want to use in Frigate. Reducing frame rates within Frigate will waste CPU resources decoding extra frames that are discarded. There are three different goals that you want to tune your stream configurations around.
+Cameras configured to output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. H.265 has better compression, but less compatibility. Firefox 134+/136+/137+ (Windows/Mac/Linux & Android), Chrome 108+, Safari and Edge are the only browsers able to play H.265, and they only support a limited number of H.265 profiles. Ideally, cameras should be configured directly for the desired resolutions and frame rates you want to use in Frigate. Reducing frame rates within Frigate will waste CPU resources decoding extra frames that are discarded. There are three different goals that you want to tune your stream configurations around.
 
 - **Detection**: This is the only stream that Frigate will decode for processing. Also, this is the stream where snapshots will be generated from. The resolution for detection should be tuned for the size of the objects you want to detect. See [Choosing a detect resolution](#choosing-a-detect-resolution) for more details. The recommended frame rate is 5fps, but may need to be higher (10fps is the recommended maximum for most users) for very fast moving objects. Higher resolutions and frame rates will drive higher CPU usage on your server.
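A sketch of how that detect-stream tuning looks in Frigate's config (the `detect` fields follow Frigate's documented options; the camera name and values are illustrative):

```yaml
cameras:
  front_door:
    detect:
      width: 1280  # match the camera's detect-stream resolution
      height: 720
      fps: 5       # recommended starting point; up to 10 for fast-moving objects
```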
@@ -66,4 +66,4 @@ The time period starting when a tracked object entered the frame and ending when
 
 ## Zone
 
-Zones are areas of interest, zones can be used for notifications and for limiting the areas where Frigate will create an [event](#event). [See the zone docs for more info](/configuration/zones)
+Zones are areas of interest; they can be used for notifications and for limiting the areas where Frigate will create a [review item](#review-item). [See the zone docs for more info](/configuration/zones)
@@ -9,9 +9,11 @@ Cameras that output H.264 video and AAC audio will offer the most compatibility
 
 I recommend Dahua, Hikvision, and Amcrest in that order. Dahua edges out Hikvision because they are easier to find and order, not because they are better cameras. I personally use Dahua cameras because they are easier to purchase directly. In my experience Dahua and Hikvision both have multiple streams with configurable resolutions and frame rates and rock solid streams. They also both have models with large sensors well known for excellent image quality at night. Not all the models are equal. Larger sensors are better than higher resolutions; especially at night. Amcrest is the fallback recommendation because they are rebranded Dahuas. They are rebranding the lower end models with smaller sensors or less configuration options.
 
-Many users have reported various issues with Reolink cameras, so I do not recommend them. If you are using Reolink, I suggest the [Reolink specific configuration](../configuration/camera_specific.md#reolink-cameras). Wifi cameras are also not recommended. Their streams are less reliable and cause connection loss and/or lost video data.
+WiFi cameras are not recommended as [their streams are less reliable and cause connection loss and/or lost video data](https://ipcamtalk.com/threads/camera-conflicts.68142/#post-738821), especially when more than a few WiFi cameras will be used at the same time.
 
-Here are some of the camera's I recommend:
+Many users have reported various issues with 4K-plus Reolink cameras; it is best to stick with 5MP and lower for Reolink cameras. If you are using Reolink, I suggest the [Reolink specific configuration](../configuration/camera_specific.md#reolink-cameras).
+
+Here are some of the cameras I recommend:
 
 - <a href="https://amzn.to/4fwoNWA" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T549M-ALED-S3</a> (affiliate link)
 - <a href="https://amzn.to/3YXpcMw" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T54IR-AS</a> (affiliate link)
@@ -150,4 +152,4 @@ Basically - When you increase the resolution and/or the frame rate of the stream
 
 YES! The Coral does not help with decoding video streams.
 
-Decompressing video streams takes a significant amount of CPU power. Video compression uses key frames (also known as I-frames) to send a full frame in the video stream. The following frames only include the difference from the key frame, and the CPU has to compile each frame by merging the differences with the key frame. [More detailed explanation](https://blog.video.ibm.com/streaming-video-tips/keyframes-interframe-video-compression/). Higher resolutions and frame rates mean more processing power is needed to decode the video stream, so try and set them on the camera to avoid unnecessary decoding work.
+Decompressing video streams takes a significant amount of CPU power. Video compression uses key frames (also known as I-frames) to send a full frame in the video stream. The following frames only include the difference from the key frame, and the CPU has to compile each frame by merging the differences with the key frame. [More detailed explanation](https://support.video.ibm.com/hc/en-us/articles/18106203580316-Keyframes-InterFrame-Video-Compression). Higher resolutions and frame rates mean more processing power is needed to decode the video stream, so try to set them on the camera to avoid unnecessary decoding work.
@@ -187,7 +187,6 @@ Next, you should configure [hardware object detection](/configuration/object_det
 Running in Docker with compose is the recommended install method.
 
 ```yaml
-version: "3.9"
 services:
   frigate:
     container_name: frigate
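(Dropping `version: "3.9"` reflects the Compose specification, where the top-level `version` key is obsolete and current Docker Compose releases ignore it.)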
docs/docs/frigate/planning_setup.md (new file, 74 lines)
@@ -0,0 +1,74 @@
---
id: planning_setup
title: Planning a New Installation
---

Choosing the right hardware for your Frigate NVR setup is important for optimal performance and a smooth experience. This guide will walk you through the key considerations, focusing on the number of cameras and the hardware required for efficient object detection.

## Key Considerations

### Number of Cameras and Simultaneous Activity

The most fundamental factor in your hardware decision is the number of cameras you plan to use. However, it's not just about the raw count; it's also about how many of those cameras are likely to see activity and require object detection simultaneously.

When motion is detected in a camera's feed, regions of that frame are sent to your chosen [object detection hardware](/configuration/object_detectors).

- **Low Simultaneous Activity (1-6 cameras with occasional motion)**: If you have a few cameras in areas with infrequent activity (e.g., a seldom-used backyard, a quiet interior), the demand on your object detection hardware will be lower. A single, entry-level AI accelerator will suffice.
- **Moderate Simultaneous Activity (6-12 cameras with some overlapping motion)**: For setups with more cameras, especially in areas like a busy street or a property with multiple access points, it's more likely that several cameras will capture activity at the same time. This increases the load on your object detection hardware, requiring more processing power.
- **High Simultaneous Activity (12+ cameras or highly active zones)**: Large installations or scenarios where many cameras frequently capture activity (e.g., a busy street with overview, identification, or dedicated LPR cameras) will necessitate robust object detection capabilities. You'll likely need multiple entry-level AI accelerators or a more powerful single unit such as a discrete GPU.
- **Commercial Installations (40+ cameras)**: Commercial installations or scenarios where a substantial number of cameras capture activity (e.g., a commercial property, an active public space) will necessitate robust object detection capabilities. You'll likely need a modern discrete GPU.

### Video Decoding

Modern CPUs with integrated GPUs (Intel Quick Sync, AMD VCN) or dedicated GPUs can significantly offload video decoding from the main CPU, freeing up resources. This is highly recommended, especially for multiple cameras.

:::tip

For commercial installations it is important to verify the number of supported concurrent streams on your GPU; many consumer GPUs max out at ~20 concurrent camera streams.

:::

## Hardware Considerations

### Object Detection

There are many different hardware options for object detection depending on priorities and available hardware. See [the recommended hardware page](./hardware.md#detectors) for more specifics on what hardware is recommended for object detection.

### Storage

Storage is an important consideration when planning a new installation. To get a more precise estimate of your storage requirements, you can use an IP camera storage calculator. Websites like [IPConfigure Storage Calculator](https://calculator.ipconfigure.com/) can help you determine the necessary disk space based on your camera settings.

#### SSDs (Solid State Drives)

SSDs are an excellent choice for Frigate, offering high speed and responsiveness. The older concern that SSDs would quickly "wear out" from constant video recording is largely no longer valid for modern consumer and enterprise-grade SSDs.

- Longevity: Modern SSDs are designed with advanced wear-leveling algorithms and significantly higher "Terabytes Written" (TBW) ratings than earlier models. For typical home NVR use, a good quality SSD will likely outlast the useful life of your NVR hardware itself.
- Performance: SSDs excel at handling the numerous small write operations that occur during continuous video recording and can significantly improve the responsiveness of the Frigate UI and clip retrieval.
- Silence and Efficiency: SSDs produce no noise and consume less power than traditional HDDs.

#### HDDs (Hard Disk Drives)

Traditional Hard Disk Drives (HDDs) remain a great and often more cost-effective option for long-term video storage, especially for larger setups where raw capacity is prioritized.

- Cost-Effectiveness: HDDs offer the best cost per gigabyte, making them ideal for storing many days, weeks, or months of continuous footage.
- Capacity: HDDs are available in much larger capacities than most consumer SSDs, which is beneficial for extensive video archives.
- NVR-Rated Drives: If choosing an HDD, consider drives specifically designed for surveillance (NVR) use, such as Western Digital Purple or Seagate SkyHawk. These drives are engineered for 24/7 operation and continuous write workloads, offering improved reliability compared to standard desktop drives.

#### Determining Your Storage Needs

The amount of storage you need will depend on several factors:

- Number of Cameras: More cameras naturally require more space.
- Resolution and Framerate: Higher resolution (e.g., 4K) and higher framerate (e.g., 30fps) streams consume significantly more storage.
- Recording Method: Continuous recording uses the most space. Motion-only recording or object-triggered recording can save space, but may miss some footage.
- Retention Period: How many days, weeks, or months of footage do you want to keep?

#### Network Storage (NFS/SMB)

While supported, using network-attached storage (NAS) for recordings can introduce latency and network dependency considerations. For optimal performance and reliability, it is generally recommended to have local storage for your Frigate recordings. If using a NAS, ensure your network connection to it is robust and fast (Gigabit Ethernet at minimum) and that the NAS itself can handle the continuous write load.
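As a rough back-of-the-envelope check of the factors above (assuming a constant 4 Mbps per stream and continuous recording; substitute your cameras' actual bitrates):

$$
\frac{4\ \text{Mbps} \times 86{,}400\ \text{s/day}}{8\ \text{bits/byte} \times 1000} \approx 43\ \text{GB/day per camera}
$$

Eight such cameras retained for ten days would therefore need roughly 3.5 TB before overhead.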
### RAM (Memory)

- **Basic Minimum: 4GB RAM**: This is generally sufficient for a very basic Frigate setup with a few cameras and a dedicated object detection accelerator, without running any enrichments. Performance might be tight, especially with higher resolution streams or numerous detections.
- **Minimum for Enrichments: 8GB RAM**: If you plan to utilize Frigate's enrichment features (e.g., facial recognition, license plate recognition, or other AI models that run alongside standard object detection), 8GB of RAM should be considered the minimum. Enrichments require additional memory to load and process their respective models and data.
- **Recommended: 16GB RAM**: For most users, especially those with many cameras (8+) or who plan to heavily leverage enrichments, 16GB of RAM is highly recommended. This provides ample headroom for smooth operation, reduces the likelihood of swapping to disk (which can impact performance), and allows for future expansion.
@@ -56,7 +56,7 @@ If you’re running Frigate via Docker (recommended method), follow these steps:
   ```bash
   docker compose up -d
   ```
-  - If using `docker run`, re-run your original command (e.g., from the [Installation](#docker) section) with the updated image tag.
+  - If using `docker run`, re-run your original command (e.g., from the [Installation](./installation.md#docker) section) with the updated image tag.
 
 4. **Verify the Update**:
    - Check the container logs to ensure Frigate starts successfully:
@@ -35,6 +35,7 @@ There are many solutions available to implement reverse proxies and the communit
 * [Apache2](#apache2-reverse-proxy)
 * [Nginx](#nginx-reverse-proxy)
 * [Traefik](#traefik-reverse-proxy)
+* [Caddy](#caddy-reverse-proxy)
 
 ## Apache2 Reverse Proxy
@@ -117,7 +118,8 @@ server {
     set $port 8971;
 
     listen 80;
-    listen 443 ssl http2;
+    listen 443 ssl;
+    http2 on;
 
     server_name frigate.domain.com;
 }
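(The split into `listen 443 ssl;` plus a separate `http2 on;` directive follows nginx 1.25.1+, where `http2` as a `listen` parameter is deprecated.)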
@@ -177,3 +179,33 @@ The above configuration will create a "service" in Traefik, automatically adding
 It will also add a router, routing requests to "traefik.example.com" to your local container.
 
 Note that with this approach, you don't need to expose any ports for the Frigate instance since all traffic will be routed over the internal Docker network.
+
+## Caddy Reverse Proxy
+
+This example shows Frigate running under a subdomain, with logging and a TLS cert (in this case a wildcard domain cert obtained independently of Caddy) handled via imports.
+
+```caddy
+(logging) {
+  log {
+    output file /var/log/caddy/{args[0]}.log {
+      roll_size 10MiB
+      roll_keep 5
+      roll_keep_for 10d
+    }
+    format json
+    level INFO
+  }
+}
+
+(tls) {
+  tls /var/lib/caddy/wildcard.YOUR_DOMAIN.TLD.fullchain.pem /var/lib/caddy/wildcard.YOUR_DOMAIN.TLD.privkey.pem
+}
+
+frigate.YOUR_DOMAIN.TLD {
+  reverse_proxy http://localhost:8971
+  import tls
+  import logging frigate.YOUR_DOMAIN.TLD
+}
+```
@@ -43,7 +43,7 @@ Snapshots must be enabled to be able to submit examples to Frigate+
 
 ### Annotate and verify
 
-You can view all of your submitted images at [https://plus.frigate.video](https://plus.frigate.video). Annotations can be added by clicking an image. For more detailed information about labeling, see the documentation on [improving your model](../plus/improving_model.md).
+You can view all of your submitted images at [https://plus.frigate.video](https://plus.frigate.video). Annotations can be added by clicking an image. For more detailed information about labeling, see the documentation on [annotating](../plus/annotating.md).
 
 
@@ -1,17 +1,9 @@
 ---
-id: improving_model
-title: Improving your model
+id: annotating
+title: Annotating your images
 ---
 
-You may find that Frigate+ models result in more false positives initially, but by submitting true and false positives, the model will improve. With all the new images now being submitted by subscribers, future base models will improve as more and more examples are incorporated. Note that only images with at least one verified label will be used when training your model. Submitting an image from Frigate as a true or false positive will not verify the image. You still must verify the image in Frigate+ in order for it to be used in training.
-
-- **Submit both true positives and false positives**. This will help the model differentiate between what is and isn't correct. You should aim for a target of 80% true positive submissions and 20% false positives across all of your images. If you are experiencing false positives in a specific area, submitting true positives for any object type near that area in similar lighting conditions will help teach the model what that area looks like when no objects are present.
-- **Lower your thresholds a little in order to generate more false/true positives near the threshold value**. For example, if you have some false positives that are scoring at 68% and some true positives scoring at 72%, you can try lowering your threshold to 65% and submitting both true and false positives within that range. This will help the model learn and widen the gap between true and false positive scores.
-- **Submit diverse images**. For the best results, you should provide at least 100 verified images per camera. Keep in mind that varying conditions should be included. You will want images from cloudy days, sunny days, dawn, dusk, and night. As circumstances change, you may need to submit new examples to address new types of false positives. For example, the change from summer days to snowy winter days or other changes such as a new grill or patio furniture may require additional examples and training.
-
 ## Properly labeling images
 
-For the best results, follow the following guidelines.
+For the best results, follow these guidelines. You may also want to review the documentation on [improving your model](./index.md#improving-your-model).
 
 **Label every object in the image**: It is important that you label all objects in each image before verifying. If you don't label a car for example, the model will be taught that part of the image is _not_ a car and it will start to get confused. You can exclude labels that you don't want detected on any of your cameras.
 
@@ -25,9 +17,17 @@ For the best results, follow the following guidelines.
 
 
 
+## AI suggested labels
+
+If you have an active Frigate+ subscription, new uploads will be scanned for the objects configured for your camera and you will see suggested labels as light blue boxes when annotating in Frigate+. These suggestions are processed via a queue and typically complete within a minute after uploading, but processing times can be longer.
+
+
+
+Suggestions are converted to labels when saving, so you should remove any errant suggestions. There is already some logic designed to avoid duplicate labels, but you may still occasionally see some duplicate suggestions. You should keep the most accurate bounding box and delete any duplicates so that you have just one label per object remaining.
+
 ## False positive labels
 
-False positives will be shown with a read box and the label will have a strike through.
+False positives will be shown with a red box and the label will have a strikethrough. These can't be adjusted, but they can be deleted if you accidentally submit a true positive as a false positive from Frigate.
 
 
 
 Misidentified objects should have a correct label added. For example, if a person was mistakenly detected as a cat, you should submit it as a false positive in Frigate and add a label for the person. The boxes will overlap.
@@ -9,11 +9,11 @@ Before requesting your first model, you will need to upload and verify at least
 
 It is recommended to submit **both** true positives and false positives. This will help the model differentiate between what is and isn't correct. You should aim for a target of 80% true positive submissions and 20% false positives across all of your images. If you are experiencing false positives in a specific area, submitting true positives for any object type near that area in similar lighting conditions will help teach the model what that area looks like when no objects are present.
 
-For more detailed recommendations, you can refer to the docs on [improving your model](./improving_model.md).
+For more detailed recommendations, you can refer to the docs on [annotating](./annotating.md).
 
 ## Step 2: Submit a model request
 
-Once you have an initial set of verified images, you can request a model on the Models page. For guidance on choosing a model type, refer to [this part of the documentation](./index.md#available-model-types). Each model request requires 1 of the 12 trainings that you receive with your annual subscription. This model will support all [label types available](./index.md#available-label-types) even if you do not submit any examples for those labels. Model creation can take up to 36 hours.
+Once you have an initial set of verified images, you can request a model on the Models page. For guidance on choosing a model type, refer to [this part of the documentation](./index.md#available-model-types). If you are unsure which type to request, you can test the base model for each version from the "Base Models" tab. Each model request requires 1 of the 12 trainings that you receive with your annual subscription. This model will support all [label types available](./index.md#available-label-types) even if you do not submit any examples for those labels. Model creation can take up to 36 hours.
 
 
 ## Step 3: Set your model id in the config
@@ -3,15 +3,9 @@ id: index
 title: Models
 ---
 
-<a href="https://frigate.video/plus" target="_blank" rel="nofollow">Frigate+</a> offers models trained on images submitted by Frigate+ users from their security cameras and is specifically designed for the way Frigate NVR analyzes video footage. These models offer higher accuracy with less resources. The images you upload are used to fine tune a baseline model trained from images uploaded by all Frigate+ users. This fine tuning process results in a model that is optimized for accuracy in your specific conditions.
+<a href="https://frigate.video/plus" target="_blank" rel="nofollow">Frigate+</a> offers models trained on images submitted by Frigate+ users from their security cameras and is specifically designed for the way Frigate NVR analyzes video footage. These models offer higher accuracy with fewer resources. The images you upload are used to fine tune a base model trained from images uploaded by all Frigate+ users. This fine tuning process results in a model that is optimized for accuracy in your specific conditions.
 
-:::info
-
-The baseline model isn't directly available after subscribing. This may change in the future, but for now you will need to submit a model request with the minimum number of images.
-
-:::
-
-With a subscription, 12 model trainings per year are included. If you cancel your subscription, you will retain access to any trained models. An active subscription is required to submit model requests or purchase additional trainings.
+With a subscription, 12 model trainings per year are included to fine tune your model. In addition, you will have access to any base models published while your subscription is active. If you cancel your subscription, you will retain access to any trained and base models in your account. An active subscription is required to submit model requests or purchase additional trainings. New base models are published quarterly, with target dates of January 15th, April 15th, July 15th, and October 15th.
 
 Information on how to integrate Frigate+ with Frigate can be found in the [integration docs](../integrations/plus.md).
@@ -19,7 +13,7 @@ Information on how to integrate Frigate+ with Frigate can be found in the [integ
 
 There are two model types offered in Frigate+, `mobiledet` and `yolonas`. Both of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).
 
-Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types).
+Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.
 
 | Model Type | Description |
 | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -36,28 +30,46 @@ Using Frigate+ models with `onnx` and `rocm` is only available with Frigate 0.15
 
 :::
 
-| Hardware | Recommended Detector Type | Recommended Model Type |
-| ---------------------------------------------------------------------------------------------------------------------------- | ------------------------- | ---------------------- |
-| [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended) | `cpu` | `mobiledet` |
-| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `mobiledet` |
-| [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolonas` |
-| [NVidia GPU](https://deploy-preview-13787--frigate-docs.netlify.app/configuration/object_detectors#onnx)\* | `onnx` | `yolonas` |
-| [AMD ROCm GPU](https://deploy-preview-13787--frigate-docs.netlify.app/configuration/object_detectors#amdrocm-gpu-detector)\* | `rocm` | `yolonas` |
+| Hardware | Recommended Detector Type | Recommended Model Type |
+| -------------------------------------------------------------------------------- | ------------------------- | ---------------------- |
+| [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended) | `cpu` | `mobiledet` |
+| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `mobiledet` |
+| [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolonas` |
+| [NVidia GPU](/configuration/object_detectors#onnx)\* | `onnx` | `yolonas` |
+| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector)\* | `rocm` | `yolonas` |
 
 _\* Requires Frigate 0.15_
 
+## Improving your model
+
+Some users may find that Frigate+ models result in more false positives initially, but by submitting true and false positives, the model will improve. With all the new images now being submitted by subscribers, future base models will improve as more and more examples are incorporated. Note that only images with at least one verified label will be used when training your model. Submitting an image from Frigate as a true or false positive will not verify the image. You still must verify the image in Frigate+ in order for it to be used in training.
+
+- **Submit both true positives and false positives**. This will help the model differentiate between what is and isn't correct. You should aim for a target of 80% true positive submissions and 20% false positives across all of your images. If you are experiencing false positives in a specific area, submitting true positives for any object type near that area in similar lighting conditions will help teach the model what that area looks like when no objects are present.
+- **Lower your thresholds a little in order to generate more false/true positives near the threshold value**. For example, if you have some false positives that are scoring at 68% and some true positives scoring at 72%, you can try lowering your threshold to 65% and submitting both true and false positives within that range. This will help the model learn and widen the gap between true and false positive scores.
+- **Submit diverse images**. For the best results, you should provide at least 100 verified images per camera. Keep in mind that varying conditions should be included. You will want images from cloudy days, sunny days, dawn, dusk, and night. As circumstances change, you may need to submit new examples to address new types of false positives. For example, the change from summer days to snowy winter days or other changes such as a new grill or patio furniture may require additional examples and training.
+
 ## Available label types
 
-Frigate+ models support a more relevant set of objects for security cameras. Currently, the following objects are supported:
+Frigate+ models support a more relevant set of objects for security cameras. The labels available for annotation in Frigate+ are configurable by editing the camera in the Cameras section of Frigate+. Currently, the following objects are supported:
 
 - **People**: `person`, `face`
-- **Vehicles**: `car`, `motorcycle`, `bicycle`, `boat`, `license_plate`
-- **Delivery Logos**: `amazon`, `usps`, `ups`, `fedex`, `dhl`, `an_post`, `purolator`, `postnl`, `nzpost`, `postnord`, `gls`, `dpd`
-- **Animals**: `dog`, `cat`, `deer`, `horse`, `bird`, `raccoon`, `fox`, `bear`, `cow`, `squirrel`, `goat`, `rabbit`
+- **Vehicles**: `car`, `motorcycle`, `bicycle`, `boat`, `school_bus`, `license_plate`
+- **Delivery Logos**: `amazon`, `usps`, `ups`, `fedex`, `dhl`, `an_post`, `purolator`, `postnl`, `nzpost`, `postnord`, `gls`, `dpd`, `canada_post`, `royal_mail`
+- **Animals**: `dog`, `cat`, `deer`, `horse`, `bird`, `raccoon`, `fox`, `bear`, `cow`, `squirrel`, `goat`, `rabbit`, `skunk`, `kangaroo`
 - **Other**: `package`, `waste_bin`, `bbq_grill`, `robot_lawnmower`, `umbrella`
 
 Other object types available in the default Frigate model are not available. Additional object types will be added in future releases.
 
+### Candidate labels
+
+Candidate labels are also available for annotation. These labels don't have enough data to be included in the model yet, but using them will help add support sooner. You can enable these labels by editing the camera settings.
+
+Where possible, these labels are mapped to existing labels during training. For example, any `baby` labels are mapped to `person` until support for new labels is added.
+
+The candidate labels are: `baby`, `bpost`, `badger`, `possum`, `rodent`, `chicken`, `groundhog`, `boar`, `hedgehog`, `tractor`, `golf cart`, `garbage truck`, `bus`, `sports ball`
+
+Candidate labels are not available for automatic suggestions.
+
 ### Label attributes
 
 Frigate has special handling for some labels when using Frigate+ models. `face`, `license_plate`, and delivery logos such as `amazon`, `ups`, and `fedex` are considered attribute labels which are not tracked like regular objects and do not generate review items directly. In addition, the `threshold` filter will have no effect on these labels. You should adjust the `min_score` and other filter values as needed.
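Since attribute labels ignore `threshold`, tuning happens on `min_score`; a sketch of what that looks like in Frigate's object filter config (field names follow Frigate's documented `objects.filters` options; the label and value are illustrative):

```yaml
objects:
  filters:
    license_plate:
      min_score: 0.6  # `threshold` has no effect on attribute labels
```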
@@ -40,6 +40,17 @@ Some users have reported that this older device runs an older kernel causing iss
 6. Open the control panel - info screen. The Coral TPU will now be recognised as a USB Device - Google Inc
 7. Start the Frigate container. Everything should work now!
 
+### QNAP NAS
+
+QNAP NAS devices, such as the TS-253A, may use connected Coral TPU devices if [QuMagie](https://www.qnap.com/en/software/qumagie) is installed along with its QNAP AI Core extension. If any of the features—`facial recognition`, `object recognition`, or `similar photo recognition`—are enabled, Container Station applications such as `Frigate` or `CodeProject.AI Server` will be unable to initialize the TPU device in use.
+To allow the Coral TPU device to be discovered, you must either:
+
+1. [Disable the AI recognition features in QuMagie](https://docs.qnap.com/application/qumagie/2.x/en-us/configuring-qnap-ai-core-settings-FB13CE03.html),
+2. Remove the QNAP AI Core extension, or
+3. Manually start the QNAP AI Core extension after Frigate has fully started (not recommended).
+
+It is also recommended to restart the NAS once the changes have been made.
+
 ## USB Coral Detection Appears to be Stuck
 
 The USB Coral can become stuck and need to be restarted; this can happen for a number of reasons depending on hardware and software setup. Some common reasons are:
@@ -23,10 +23,22 @@ const config: Config = {
     mermaid: true,
   },
   themeConfig: {
     algolia: {
       appId: "WIURGBNBPY",
       apiKey: "d02cc0a6a61178b25da550212925226b",
       indexName: "frigate",
     },
+    announcementBar: {
+      id: 'frigate_plus',
+      content: `
+        <span style="margin-right: 8px; display: inline-block; animation: pulse 2s infinite;">🚀</span>
+        Get more relevant and accurate detections with Frigate+ models.
+        <a style="margin-left: 12px; padding: 3px 10px; background: #94d2bd; color: #001219; text-decoration: none; border-radius: 4px; font-weight: 500; " target="_blank" rel="noopener noreferrer" href="https://frigate.video/plus/">Learn more</a>
+        <span style="margin-left: 8px; display: inline-block; animation: pulse 2s infinite;">✨</span>
+        <style>
+          @keyframes pulse {
+            0%, 100% { transform: scale(1); }
+            50% { transform: scale(1.1); }
+          }
+        </style>`,
+      backgroundColor: '#005f73',
+      textColor: '#e0fbfc',
+      isCloseable: false,
+    },
     docs: {
       sidebar: {
@@ -7,6 +7,7 @@ const sidebars: SidebarsConfig = {
   Frigate: [
     'frigate/index',
     'frigate/hardware',
+    'frigate/planning_setup',
     'frigate/installation',
     'frigate/updating',
     'frigate/camera_setup',
@@ -87,8 +88,8 @@ const sidebars: SidebarsConfig = {
   ],
   'Frigate+': [
     'plus/index',
+    'plus/annotating',
     'plus/first_model',
-    'plus/improving_model',
     'plus/faq',
   ],
   Troubleshooting: [
docs/static/img/plus/suggestions.webp (new binary file, 71 KiB; not shown)
@@ -71,6 +71,7 @@ from frigate.timeline import TimelineProcessor
 from frigate.util.builtin import empty_and_close_queue
 from frigate.util.image import SharedMemoryFrameManager, UntrackedSharedMemory
 from frigate.util.object import get_camera_regions_grid
+from frigate.util.services import set_file_limit
 from frigate.version import VERSION
 from frigate.video import capture_camera, track_camera
 from frigate.watchdog import FrigateWatchdog
@@ -587,6 +588,9 @@ class FrigateApp:
         # Ensure global state.
         self.ensure_dirs()
 
+        # Set soft file limits.
+        set_file_limit()
+
         # Start frigate services.
         self.init_camera_metrics()
         self.init_queues()
@@ -5,6 +5,7 @@ import json
 import logging
 import os
 import re
+import resource
 import signal
 import subprocess as sp
 import traceback
@@ -632,3 +633,19 @@ async def get_video_properties(
     result["fourcc"] = fourcc
 
     return result
 
 
+def set_file_limit() -> None:
+    # Newer versions of containerd 2.X+ impose a very low soft file limit of 1024
+    # This applies to OSs like HA OS (see https://github.com/home-assistant/operating-system/issues/4110)
+    # Attempt to increase this limit
+    soft_limit = int(os.getenv("SOFT_FILE_LIMIT", "65536") or "65536")
+
+    current_soft, current_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
+    logger.info(f"Current file limits - Soft: {current_soft}, Hard: {current_hard}")
+
+    new_soft = min(soft_limit, current_hard)
+    resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, current_hard))
+    logger.info(
+        f"File limit set. New soft limit: {new_soft}, Hard limit remains: {current_hard}"
+    )
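Given `set_file_limit()` above, the soft limit can be tuned per deployment through the environment; a compose sketch (the `SOFT_FILE_LIMIT` name comes from the function above; the value is illustrative):

```yaml
services:
  frigate:
    environment:
      # Requested soft limit; clamped to the container's hard limit by set_file_limit()
      SOFT_FILE_LIMIT: "131072"
```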