# Cameras

## Setting Up Camera Inputs

Up to 4 inputs can be configured for each camera, and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create clips from a higher resolution stream, or vice versa.

Each role can only be assigned to one input per camera. The options for roles are as follows:
| Role | Description |
| --- | --- |
| `detect` | Main feed for object detection |
| `clips` | Clips of events from objects detected in the `detect` feed. [docs](#clips) |
| `record` | Saves 60 second segments of the video feed. [docs](#247-recordings) |
| `rtmp` | Broadcast as an RTMP feed for other services to consume. [docs](#rtmp-streams) |
## Example

`width`, `height`, and `fps` are only used for the `detect` role. Other streams are passed through, so there is no need to specify the resolution.
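As a sketch of this setup (camera name, IP address, credentials, and stream paths are placeholders; verify the exact option names against the reference config for your version), a low resolution substream can carry the `detect` role while the high resolution main stream handles clips and recording:

```yaml
cameras:
  back:
    ffmpeg:
      inputs:
        # low resolution substream used for detection and the RTMP re-stream
        - path: rtsp://user:password@192.168.1.10:554/sub_stream
          roles:
            - detect
            - rtmp
        # high resolution main stream used for clips and 24/7 recording
        - path: rtsp://user:password@192.168.1.10:554/main_stream
          roles:
            - clips
            - record
    # only applies to the detect role; other streams are passed through
    width: 1280
    height: 720
    fps: 5
```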
## Masks & Zones

### Masks

Masks are used to ignore initial detection in areas of your camera's field of view.

There are two types of masks available:
- Motion masks: Motion masks are used to prevent unwanted types of motion from triggering detection. Try watching the video feed with `Motion Boxes` enabled to see what may be regularly detected as motion. For example, you may want to mask out your timestamp, the sky, rooftops, etc. Keep in mind that this mask only prevents motion from being detected and does not prevent objects from being detected if object detection was started due to motion in unmasked areas. Motion is also used during object tracking to refine the object detection area in the next frame. Over-masking will make it more difficult for objects to be tracked. To see this effect, create a mask, and then watch the video feed with `Motion Boxes` enabled again.
- Object filter masks: Object filter masks are used to filter out false positives for a given object type. These should be used to filter any areas where it is not possible for an object of that type to be. The bottom center of the detected object's bounding box is evaluated against the mask. If it is in a masked area, it is assumed to be a false positive. For example, you may want to mask out rooftops, walls, the sky, and treetops for people. For cars, masking locations other than the street or your driveway will tell Frigate that anything in your yard is a false positive.
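As an illustration of an object filter mask (the camera name and polygon coordinates below are hypothetical; check the reference config for your version for the exact structure), a `person` filter mask lives under the camera's `objects` section:

```yaml
cameras:
  back:
    objects:
      filters:
        person:
          # hypothetical polygon covering a rooftop where a person cannot be;
          # detections whose bounding box bottom center lands here are discarded
          mask:
            - 0,0,1920,0,1920,200,0,200
```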
To create a poly mask:

1. Visit the web UI
2. Click the camera you wish to create a mask for
3. Click "Mask & Zone creator"
4. Click "Add" on the type of mask or zone you would like to create
5. Click on the camera's latest image to create a masked area. The yaml representation will be updated in real-time
6. When you've finished creating your mask, click "Copy" and paste the contents into your `config.yaml` file and restart Frigate
Example of a finished row corresponding to the below example image:
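A finished motion mask row is a comma-separated polygon of x,y coordinates under the camera's `motion` section. The coordinates below are hypothetical, standing in for what the creator would generate for your frame:

```yaml
cameras:
  back:
    motion:
      mask:
        # hypothetical polygon masking a timestamp overlay in the top-left corner
        - 0,0,400,0,400,50,0,50
```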
## Zones

Zones allow you to define a specific area of the frame and apply additional filters for object types so you can determine whether or not an object is within a particular area. Zones cannot have the same name as a camera. If desired, a single zone can include multiple cameras if you have multiple cameras covering the same area by configuring zones with the same name for each camera.

During testing, `draw_zones` should be set in the config to draw the zone on the frames so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.
To create a zone, follow the same steps above for a "Motion mask", but use the section of the web UI for creating a zone instead.
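A zone is defined by a name and a polygon of coordinates, optionally with additional per-object filters. The zone name, coordinates, and filter values below are hypothetical; verify the structure against the reference config for your version:

```yaml
cameras:
  back:
    zones:
      front_yard:
        # hypothetical polygon outlining the yard area of the frame
        coordinates: 545,1077,747,939,788,805
        filters:
          person:
            # only consider people larger than this pixel area inside the zone
            min_area: 5000
```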
## Objects

For a list of available objects, see the objects documentation.
## Clips

Frigate can save video clips without any CPU overhead for encoding by simply copying the stream directly with FFmpeg. It leverages FFmpeg's segment functionality to maintain a cache of video for each camera. The cache files are written to disk at `/tmp/cache` and do not introduce memory overhead. When an object is being tracked, it will extend the cache to ensure it can assemble a clip when the event ends. Once the event ends, it again uses FFmpeg to assemble a clip by combining the video clips without any encoding by the CPU. Assembled clips are saved to `/media/frigate/clips`. Clips are retained according to the retention settings defined on the config for each object type.

These clips will not be playable in the web UI or in Home Assistant's media browser unless your camera sends video as h264.
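A minimal sketch of a camera-level clips configuration, assuming option names from the reference config of this era (`pre_capture`/`post_capture` in seconds, `retain` in days); adjust to your version:

```yaml
cameras:
  back:
    clips:
      enabled: True
      # seconds of cached video to include before and after the event
      pre_capture: 5
      post_capture: 5
      retain:
        # days to retain clips by default, with a per-object override
        default: 10
        objects:
          person: 15
```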
:::caution
Previous versions of Frigate included `-vsync drop` in input parameters. This is not compatible with FFmpeg's segment feature and must be removed from your input parameters if you have overrides set.
:::
## Snapshots

Frigate can save a snapshot image to `/media/frigate/clips` for each event named as `<camera>-<id>.jpg`.
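A sketch of a camera-level snapshots configuration, assuming option names such as `timestamp`, `bounding_box`, and `retain` from the reference config of this era:

```yaml
cameras:
  back:
    snapshots:
      enabled: True
      # optionally draw a timestamp and the detection bounding box on the image
      timestamp: False
      bounding_box: True
      retain:
        # days to keep snapshots
        default: 10
```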
## 24/7 Recordings

24/7 recordings can be enabled and are stored at `/media/frigate/recordings`. The folder structure for the recordings is `YYYY-MM/DD/HH/<camera_name>/MM.SS.mp4`. These recordings are written directly from your camera stream without re-encoding and are available in Home Assistant's media browser. Each camera supports a configurable retention policy in the config.
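A sketch of enabling 24/7 recordings for a camera, assuming a `retain_days` option as in the reference config of this era:

```yaml
cameras:
  back:
    record:
      enabled: True
      # days of continuous recordings to keep before deletion
      retain_days: 30
```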
:::caution
Previous versions of Frigate included `-vsync drop` in input parameters. This is not compatible with FFmpeg's segment feature and must be removed from your input parameters if you have overrides set.
:::
## RTMP streams

Frigate can re-stream your video feed as an RTMP feed for other applications such as Home Assistant to utilize it at `rtmp://<frigate_host>/live/<camera_name>`. Port 1935 must be open. This allows you to use a video feed for detection in Frigate and Home Assistant live view at the same time without having to make two separate connections to the camera. The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.

Some video feeds are not compatible with RTMP. If you are experiencing issues, check to make sure your camera feed is h264 with AAC audio. If your camera doesn't support a compatible format for RTMP, you can use the ffmpeg args to re-encode it on the fly at the expense of increased CPU utilization.
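The re-stream is a simple per-camera toggle; a sketch (option name assumed from the reference config of this era):

```yaml
cameras:
  back:
    rtmp:
      # set to False to disable the RTMP re-stream for this camera
      enabled: True
```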
## Full example

The following is a full example of all of the options together for a camera configuration.
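As a sketch combining the options discussed above (all names, paths, coordinates, and values are illustrative placeholders; check the reference config for your version before copying):

```yaml
cameras:
  back:
    ffmpeg:
      inputs:
        # substream for detection and re-streaming
        - path: rtsp://user:password@192.168.1.10:554/sub_stream
          roles:
            - detect
            - rtmp
        # main stream for clips and 24/7 recording
        - path: rtsp://user:password@192.168.1.10:554/main_stream
          roles:
            - clips
            - record
    # detect role resolution and frame rate
    width: 1280
    height: 720
    fps: 5
    motion:
      mask:
        # hypothetical polygon masking a timestamp overlay
        - 0,0,400,0,400,50,0,50
    zones:
      front_yard:
        coordinates: 545,1077,747,939,788,805
    objects:
      track:
        - person
        - car
      filters:
        person:
          min_area: 5000
    clips:
      enabled: True
      retain:
        default: 10
    snapshots:
      enabled: True
    record:
      enabled: True
      retain_days: 30
    rtmp:
      enabled: True
```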
## Camera specific configuration

### MJPEG Cameras

The input and output parameters need to be adjusted for MJPEG cameras.
Note that MJPEG cameras require encoding the video into h264 for the `clips`, `record`, and `rtmp` roles. This will use significantly more CPU than if the cameras supported h264 feeds directly.
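A sketch of the idea: override the camera's `input_args` and `output_args` so FFmpeg encodes h264 for the roles that need it. The exact argument strings below are assumptions based on the default args of this era; consult the reference config for your version:

```yaml
cameras:
  mjpeg_cam:
    ffmpeg:
      input_args: -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts+discardcorrupt -use_wallclock_as_timestamps 1
      output_args:
        # encode to h264 with libx264 because the source is MJPEG
        record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v libx264 -an
        clips: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -c:v libx264 -an
        rtmp: -c:v libx264 -an -f flv
```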
### RTMP Cameras

The input parameters need to be adjusted for RTMP cameras.
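RTSP-specific flags (such as `-rtsp_transport` and `-stimeout`) do not apply to an RTMP input, so the default input args need replacing. A sketch, with the argument string assumed from the defaults of this era:

```yaml
cameras:
  rtmp_cam:
    ffmpeg:
      # RTSP transport/timeout flags removed for an RTMP source
      input_args: -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts+discardcorrupt -rw_timeout 5000000 -use_wallclock_as_timestamps 1 -f live_flv
```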
### Reolink 410/520 (possibly others)

Several users have reported success with the RTMP video feed from Reolink cameras.
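A sketch using the commonly reported Reolink RTMP URL scheme (IP address and credentials are placeholders):

```yaml
cameras:
  reolink:
    ffmpeg:
      inputs:
        # Reolink's RTMP main stream endpoint
        - path: rtmp://192.168.1.20/bcs/channel0_main.bcs?channel=0&stream=0&user=admin&password=password
          roles:
            - detect
            - rtmp
```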
### Blue Iris RTSP Cameras

You will need to remove the `nobuffer` flag for Blue Iris RTSP cameras.
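A sketch of the default RTSP input args with `nobuffer` removed (the remaining flags are assumed from the defaults of this era; verify against your version):

```yaml
cameras:
  blueiris_cam:
    ffmpeg:
      # same as the default input args, minus "-fflags nobuffer"
      input_args: -avoid_negative_ts make_zero -flags low_delay -strict experimental -fflags +genpts+discardcorrupt -rtsp_transport tcp -stimeout 5000000 -use_wallclock_as_timestamps 1
```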