# Cameras

## Setting Up Camera Inputs

Up to 4 inputs can be configured for each camera, and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection but create clips from a higher resolution stream, or vice versa.

Each role can only be assigned to one input per camera. The options for roles are as follows:
| Role     | Description                                                |
| -------- | ---------------------------------------------------------- |
| `detect` | Main feed for object detection                             |
| `clips`  | Clips of events from objects detected in the `detect` feed |
| `record` | Saves 60 second segments of the video feed                 |
| `rtmp`   | Broadcast as an RTMP feed for other services to consume    |
## Example
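As a sketch, mixing and matching the roles above might look like the following, assuming Frigate's `ffmpeg: inputs:` camera schema (the camera name, stream URLs, and resolution are placeholders):

```yaml
cameras:
  back:
    ffmpeg:
      inputs:
        # Lower resolution sub-stream used for detection and RTMP re-streaming
        - path: rtsp://viewer:password@10.0.0.10:554/sub
          roles:
            - detect
            - rtmp
        # Higher resolution main stream used for clips and 24/7 recordings
        - path: rtsp://viewer:password@10.0.0.10:554/main
          roles:
            - clips
            - record
    width: 1280
    height: 720
    fps: 5
```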
## Masks & Zones

### Motion masks

Masks are used to ignore initial detection in areas of your camera's field of view.
To create a poly mask:

- Visit the web UI
- Click the camera you wish to create a mask for
- Click "Mask & Zone creator"
- Click "Add" on the type of mask or zone you would like to create
- Click on the camera's latest image to create a masked area; the yaml representation will be updated in real-time
- When you've finished creating your mask, click "Copy", paste the contents into your `config.yaml` file, and restart Frigate
Example of a finished row:
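A finished mask row pasted into `config.yaml` might look like this (the coordinates below are hypothetical; yours will come from the creator tool):

```yaml
cameras:
  back:
    motion:
      # Polygon of x,y pairs generated by the Mask & Zone creator
      mask:
        - 0,461,3,0,1919,0,1919,843,1699,492
```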
### Zones

Zones allow you to define a specific area of the frame and apply additional filters for object types, so you can determine whether or not an object is within a particular area. Zones cannot have the same name as a camera. If desired, a single zone can span multiple cameras covering the same area: configure a zone with the same name for each camera.
During testing, `draw_zones` should be set in the config to draw the zone on the frames so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.
To create a zone, follow the same steps above for a "Motion mask", but use the section of the web UI for creating a zone instead.
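As a sketch, a zone with a per-object filter might look like the following (the zone name, coordinates, and filter values are placeholders):

```yaml
cameras:
  back:
    zones:
      front_steps:
        # Polygon of x,y pairs from the Mask & Zone creator
        coordinates: 545,1077,747,939,788,805
        filters:
          person:
            # Ignore person detections smaller than this pixel area
            min_area: 5000
```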
## Objects
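A minimal sketch of per-camera object configuration, assuming an `objects:` section with `track` and `filters` (all values below are illustrative):

```yaml
cameras:
  back:
    objects:
      # Object types to track for this camera
      track:
        - person
        - car
      filters:
        person:
          # Discard detections outside this pixel-area range
          min_area: 5000
          max_area: 100000
          # Minimum detection confidence
          threshold: 0.7
```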
## Clips

Frigate can save video clips without any CPU overhead for encoding by simply copying the stream directly with FFmpeg. It leverages FFmpeg's segment functionality to maintain a cache of video for each camera. The cache files are written to disk at `/tmp/cache` and do not introduce memory overhead. When an object is being tracked, the cache is extended to ensure a clip can be assembled when the event ends. Once the event ends, FFmpeg is used again to assemble a clip by combining the cached segments without any encoding by the CPU. Assembled clips are saved to `/media/frigate/clips`. Clips are retained according to the retention settings defined in the config for each object type.
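Enabling clips and setting retention per camera might look like the following (a sketch assuming a `clips:` camera section; the capture and retention values are examples):

```yaml
cameras:
  back:
    clips:
      enabled: true
      # Seconds of video to keep before and after the event
      pre_capture: 5
      post_capture: 5
      objects:
        - person
      retain:
        # Days to keep clips by default, with a per-object override
        default: 10
        objects:
          person: 15
```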
:::caution
Previous versions of Frigate included `-vsync drop` in input parameters. This is not compatible with FFmpeg's segment feature and must be removed from your input parameters if you have overrides set.
:::
## Snapshots

Frigate can save a snapshot image to `/media/frigate/clips` for each event, named as `<camera>-<id>.jpg`.
## 24/7 Recordings

24/7 recordings can be enabled and are stored at `/media/frigate/recordings`. The folder structure for the recordings is `YYYY-MM/DD/HH/<camera_name>/MM.SS.mp4`. These recordings are written directly from your camera stream without re-encoding and are available in Home Assistant's media browser. Each camera supports a configurable retention policy in the config.
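A sketch of enabling 24/7 recordings with a retention policy (assuming a `record:` camera section with a `retain_days` setting):

```yaml
cameras:
  back:
    record:
      enabled: true
      # Days of 24/7 recordings to keep for this camera
      retain_days: 30
```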
:::caution
Previous versions of Frigate included `-vsync drop` in input parameters. This is not compatible with FFmpeg's segment feature and must be removed from your input parameters if you have overrides set.
:::
## RTMP streams

Frigate can re-stream your video feed as an RTMP feed for other applications, such as Home Assistant, to consume at `rtmp://<frigate_host>/live/<camera_name>`. Port 1935 must be open. This allows you to use a video feed for detection in Frigate and the Home Assistant live view at the same time without having to make two separate connections to the camera. The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.

Some video feeds are not compatible with RTMP. If you are experiencing issues, check to make sure your camera feed is h264 with AAC audio. If your camera doesn't support a compatible format for RTMP, you can use the ffmpeg args to re-encode it on the fly at the expense of increased CPU utilization.
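Turning the re-stream on for a camera is a single switch (a sketch assuming an `rtmp:` camera section):

```yaml
cameras:
  back:
    rtmp:
      # Re-stream this camera at rtmp://<frigate_host>/live/back
      enabled: true
```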
## Full example

The following is a full example of all of the options together for a camera configuration:
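A sketch combining the options covered above into one camera configuration (all names, URLs, coordinates, and values are placeholders, and exact keys may vary by Frigate version):

```yaml
cameras:
  back:
    ffmpeg:
      inputs:
        - path: rtsp://viewer:password@10.0.0.10:554/sub
          roles:
            - detect
            - rtmp
        - path: rtsp://viewer:password@10.0.0.10:554/main
          roles:
            - clips
            - record
    # Resolution and frame rate of the detect stream
    width: 1280
    height: 720
    fps: 5
    motion:
      mask:
        - 0,461,3,0,1919,0,1919,843
    zones:
      front_steps:
        coordinates: 545,1077,747,939,788,805
        filters:
          person:
            min_area: 5000
    objects:
      track:
        - person
    clips:
      enabled: true
      retain:
        default: 10
    record:
      enabled: true
      retain_days: 30
    rtmp:
      enabled: true
```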
## Camera specific configuration
### RTMP Cameras

The input parameters need to be adjusted for RTMP cameras:
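As a sketch, an override that drops the RTSP-specific options (`-rtsp_transport`, `-stimeout`) from the default input parameters; verify against the defaults shipped with your Frigate version:

```yaml
ffmpeg:
  input_args:
    - -avoid_negative_ts
    - make_zero
    - -fflags
    - nobuffer
    - -flags
    - low_delay
    - -strict
    - experimental
    - -fflags
    - +genpts+discardcorrupt
    - -use_wallclock_as_timestamps
    - '1'
```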
### Blue Iris RTSP Cameras

You will need to remove the `nobuffer` flag for Blue Iris RTSP cameras:
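A sketch of the default RTSP input parameters with `nobuffer` removed (the surrounding flags are the assumed defaults for this Frigate version; verify against your own):

```yaml
ffmpeg:
  input_args:
    - -avoid_negative_ts
    - make_zero
    - -flags
    - low_delay
    - -strict
    - experimental
    - -fflags
    - +genpts+discardcorrupt
    - -rtsp_transport
    - tcp
    - -stimeout
    - '5000000'
    - -use_wallclock_as_timestamps
    - '1'
```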