Mirror of https://github.com/blakeblackshear/frigate.git (synced 2024-11-21 19:07:46 +01:00)
Autotracking improvements and bugfixes (#7984)
* add zoom factor and catch motion exception
* reword error message
* check euclidean distance of estimate points
* use numpy for euclidean distance
* config entry
* use zoom factor and zoom based on velocity
* move debug inside try
* change log type to info
* logger level warning
* docs
* exception handling
This commit is contained in:
parent 0048cd5edc
commit ead03c381b
@@ -64,6 +64,11 @@ cameras:
         # absolute - use absolute zooming (supported by most PTZ capable cameras)
         # relative - use relative zooming (not supported on all PTZs, but makes concurrent pan/tilt/zoom movements)
         zooming: disabled
+        # Optional: A value to change the behavior of zooming on autotracked objects. (default: shown below)
+        # A lower value will keep more of the scene in view around a tracked object.
+        # A higher value will zoom in more on a tracked object, but Frigate may lose tracking more quickly.
+        # The value should be between 0.1 and 0.75
+        zoom_factor: 0.3
     # Optional: list of objects to track from labelmap.txt (default: shown below)
     track:
       - person
@@ -104,12 +109,16 @@ A full-frame zone in `required_zones` is not recommended, especially if you've c
 
 ## Zooming
 
-Zooming is an experimental feature and may use significantly more CPU when tracking objects than panning/tilting only. It may be helpful to tweak your camera's autofocus settings if you are noticing focus problems when using zooming.
+Zooming is still a very experimental feature and may use significantly more CPU when tracking objects than panning/tilting only. It may be helpful to tweak your camera's autofocus settings if you are noticing focus problems when using zooming.
 
 Absolute zooming makes zoom movements separate from pan/tilt movements. Most PTZ cameras will support absolute zooming.
 
 Relative zooming attempts to make a zoom movement concurrently with any pan/tilt movements. It was tested to work with some Dahua and Amcrest PTZs. But the ONVIF specification indicates that there is no assumption about how the generic zoom range is mapped to magnification, field of view, or other physical zoom dimension when using relative zooming. So if relative zooming behavior is erratic or just doesn't work, use absolute zooming.
 
+You can optionally adjust the `zoom_factor` for your camera in your configuration file. Lower values will leave more space from the scene around the tracked object, while higher values will cause your camera to zoom in more on the object. However, keep in mind that Frigate needs a fair amount of pixels and scene details outside of the bounding box of the tracked object to estimate the motion of your camera. If the object is taking up too much of the frame, Frigate will not be able to track the motion of the camera and your object will be lost.
+
+The range of this option is from 0.1 to 0.75. The default value of 0.3 should be sufficient for most users. If you have a powerful zoom lens on your PTZ or you find your autotracked objects are often lost, you may want to lower this value. Because every PTZ and scene is different, you should experiment to determine what works best for you.
+
 ## Usage applications
 
 In security and surveillance, it's common to use "spotter" cameras in combination with your PTZ. When your fixed spotter camera detects an object, you could use an automation platform like Home Assistant to move the PTZ to a specific preset so that Frigate can begin automatically tracking the object. For example: a residence may have fixed cameras on the east and west side of the property, capturing views up and down a street. When the spotter camera on the west side detects a person, a Home Assistant automation could move the PTZ to a camera preset aimed toward the west. When the object enters the specified zone, Frigate's autotracker could then continue to track the person as it moves out of view of any of the fixed cameras.
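To make the zoom_factor tradeoff concrete, here is a minimal Python sketch of how the factor acts as an area threshold, matching the `area < area_threshold * camera_area` check added later in this diff. The helper name and numbers are hypothetical, not Frigate's actual code:

def can_zoom_in(object_area, frame_width, frame_height, zoom_factor):
    # hypothetical helper: zooming in is only allowed while the tracked
    # object's box covers less than zoom_factor of the whole frame
    return object_area < zoom_factor * frame_width * frame_height

# a 1280x720 frame is 921,600 px; a 300x500 person box is 150,000 px (~16%)
print(can_zoom_in(150_000, 1280, 720, 0.3))  # True: 16% < 30%, keep zooming in
print(can_zoom_in(150_000, 1280, 720, 0.1))  # False: a lower factor keeps more scene in view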
@@ -596,6 +596,11 @@ cameras:
         # absolute - use absolute zooming (supported by most PTZ capable cameras)
         # relative - use relative zooming (not supported on all PTZs, but makes concurrent pan/tilt/zoom movements)
         zooming: disabled
+        # Optional: A value to change the behavior of zooming on autotracked objects. (default: shown below)
+        # A lower value will keep more of the scene in view around a tracked object.
+        # A higher value will zoom in more on a tracked object, but Frigate may lose tracking more quickly.
+        # The value should be between 0.1 and 0.75
+        zoom_factor: 0.3
     # Optional: list of objects to track from labelmap.txt (default: shown below)
     track:
       - person
@@ -152,6 +152,12 @@ class PtzAutotrackConfig(FrigateBaseModel):
     zooming: ZoomingModeEnum = Field(
         default=ZoomingModeEnum.disabled, title="Autotracker zooming mode."
     )
+    zoom_factor: float = Field(
+        default=0.3,
+        title="Zooming factor (0.1-0.75).",
+        ge=0.1,
+        le=0.75,
+    )
     track: List[str] = Field(default=DEFAULT_TRACKED_OBJECTS, title="Objects to track.")
     required_zones: List[str] = Field(
         default_factory=list,
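The `ge`/`le` bounds make pydantic reject out-of-range values when the config is loaded. A minimal standalone sketch of that behavior, where `ZoomConfig` is a hypothetical stand-in for the real `PtzAutotrackConfig`:

from pydantic import BaseModel, Field, ValidationError

class ZoomConfig(BaseModel):
    # hypothetical stand-in for PtzAutotrackConfig's new field
    zoom_factor: float = Field(default=0.3, ge=0.1, le=0.75)

print(ZoomConfig().zoom_factor)                 # 0.3 (default)
print(ZoomConfig(zoom_factor=0.5).zoom_factor)  # 0.5 (in range)
try:
    ZoomConfig(zoom_factor=0.9)                 # out of range
except ValidationError as err:
    print("rejected:", err.errors()[0]["msg"])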
@@ -2,7 +2,6 @@
 
 import copy
 import logging
-import math
 import os
 import queue
 import threading
@@ -105,16 +104,22 @@ class PtzMotionEstimator:
             # Norfair estimator function needs color so it can convert it right back to gray
             frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGRA)
 
-            self.coord_transformations = self.norfair_motion_estimator.update(
-                frame, mask
-            )
+            try:
+                self.coord_transformations = self.norfair_motion_estimator.update(
+                    frame, mask
+                )
+                logger.debug(
+                    f"Motion estimator transformation: {self.coord_transformations.rel_to_abs([[0,0]])}"
+                )
+            except Exception:
+                # sometimes opencv can't find enough features in the image to find homography, so catch this error
+                logger.warning(
+                    f"Autotracker: motion estimator couldn't get transformations for {camera_name} at frame time {frame_time}"
+                )
+                self.coord_transformations = None
 
             self.frame_manager.close(frame_id)
 
-            logger.debug(
-                f"Motion estimator transformation: {self.coord_transformations.rel_to_abs([[0,0]])}"
-            )
-
             return self.coord_transformations
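The broad `except` exists because OpenCV's homography estimation fails outright when too few features can be matched between frames. A standalone illustration of that failure mode and the same fall-back-to-None pattern (not Frigate's actual code path):

import cv2
import numpy as np

# findHomography needs at least 4 point correspondences; a low-texture
# frame can yield fewer matched features than that
src = np.array([[0, 0], [10, 0], [0, 10]], dtype=np.float32)  # only 3 points
dst = src + 5.0

try:
    matrix, inliers = cv2.findHomography(src, dst)
except cv2.error:
    matrix = None  # mirror the diff: give up on this frame's transformation
print(matrix)  # None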
@@ -172,6 +177,7 @@ class PtzAutoTracker:
         self.calibrating: dict[str, object] = {}
         self.intercept: dict[str, object] = {}
         self.move_coefficients: dict[str, object] = {}
+        self.zoom_factor: dict[str, object] = {}
 
         # if cam is set to autotrack, onvif should be set up
         for camera_name, cam in self.config.cameras.items():
@@ -187,6 +193,7 @@ class PtzAutoTracker:
 
                 self.object_types[camera_name] = cam.onvif.autotracking.track
                 self.required_zones[camera_name] = cam.onvif.autotracking.required_zones
+                self.zoom_factor[camera_name] = cam.onvif.autotracking.zoom_factor
 
                 self.tracked_object[camera_name] = None
                 self.tracked_object_previous[camera_name] = None
@@ -470,7 +477,7 @@ class PtzAutoTracker:
                 tilt = tilt_excess
                 zoom = zoom_excess
 
-    def _should_zoom_in(self, camera, box, area):
+    def _should_zoom_in(self, camera, box, area, average_velocity):
         camera_config = self.config.cameras[camera]
         camera_width = camera_config.frame_shape[1]
         camera_height = camera_config.frame_shape[0]
@@ -485,7 +492,15 @@ class PtzAutoTracker:
         #
         # TODO: Take into account the area changing when an object is moving out of frame
         edge_threshold = 0.15
-        area_threshold = 0.7
+        area_threshold = self.zoom_factor[camera]
+        velocity_threshold = 0.1
+
+        # if we have a fast moving object, let's zoom out
+        # fast moving is defined as a velocity of more than 10% of the camera's width or height
+        # so an object with an x velocity of 15 pixels on a 1280x720 camera would trigger a zoom out
+        velocity_threshold = average_velocity[0] > (
+            camera_width * velocity_threshold
+        ) or average_velocity[1] > (camera_height * velocity_threshold)
 
         # returns True to zoom in, False to zoom out
         return (
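Worked numbers make the 10% rule concrete: on a 1280x720 frame the x cutoff is 1280 * 0.1 = 128 px per frame and the y cutoff is 720 * 0.1 = 72 px. A hypothetical helper (not the method itself) restating the check above:

def is_fast_moving(average_velocity, camera_width, camera_height):
    # hypothetical helper restating the diff's check: "fast" means the
    # average x or y velocity exceeds 10% of the frame width or height
    threshold = 0.1
    return (average_velocity[0] > camera_width * threshold
            or average_velocity[1] > camera_height * threshold)

# cutoffs on a 1280x720 frame: 128 px (x) and 72 px (y)
print(is_fast_moving((150, 10, 0, 0), 1280, 720))  # True: 150 > 128, zoom out
print(is_fast_moving((50, 10, 0, 0), 1280, 720))   # False: slow enough to zoom in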
@@ -494,10 +509,12 @@ class PtzAutoTracker:
             and bb_top > edge_threshold * camera_height
             and bb_bottom < (1 - edge_threshold) * camera_height
             and area < area_threshold * camera_area
+            and not velocity_threshold
         )
 
     def _autotrack_move_ptz(self, camera, obj):
         camera_config = self.config.cameras[camera]
+        average_velocity = (0,) * 4
 
         # # frame width and height
         camera_width = camera_config.frame_shape[1]
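Putting the whole predicate together may help. The left/right edge checks are not visible in this hunk but are implied by the surrounding code; the helper below is a hypothetical condensation under that assumption, not the method itself:

def should_zoom_in(box, frame_w, frame_h, area, zoom_factor, fast_moving,
                   edge_threshold=0.15):
    # hypothetical condensation: zoom in only when the box stays away from
    # every frame edge, covers less than zoom_factor of the frame, and the
    # object is not a fast mover
    bb_left, bb_top, bb_right, bb_bottom = box
    return (bb_left > edge_threshold * frame_w
            and bb_right < (1 - edge_threshold) * frame_w
            and bb_top > edge_threshold * frame_h
            and bb_bottom < (1 - edge_threshold) * frame_h
            and area < zoom_factor * frame_w * frame_h
            and not fast_moving)

print(should_zoom_in((400, 200, 700, 600), 1280, 720, 120_000, 0.3, False))  # True
print(should_zoom_in((400, 200, 700, 600), 1280, 720, 120_000, 0.3, True))   # False: fast mover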
@@ -525,12 +542,19 @@ class PtzAutoTracker:
                 (y1 + y2) / 2,
             )
 
-            # this box could exceed the frame boundaries if velocity is high
-            # but we'll handle that in _enqueue_move() as two separate moves
-            predicted_box = [
-                round(x + camera_fps * predicted_movement_time * v)
-                for x, v in zip(obj.obj_data["box"], average_velocity)
-            ]
+            # get euclidean distance of the two points, sometimes the estimate is way off
+            distance = np.linalg.norm([x2 - x1, y2 - y1])
+
+            if distance <= 5:
+                # this box could exceed the frame boundaries if velocity is high
+                # but we'll handle that in _enqueue_move() as two separate moves
+                predicted_box = [
+                    round(x + camera_fps * predicted_movement_time * v)
+                    for x, v in zip(obj.obj_data["box"], average_velocity)
+                ]
+            else:
+                # estimate was bad
+                predicted_box = obj.obj_data["box"]
 
             centroid_x = round((predicted_box[0] + predicted_box[2]) / 2)
             centroid_y = round((predicted_box[1] + predicted_box[3]) / 2)
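This is the "check euclidean distance of estimate points" item from the commit message: the box is only extrapolated by velocity when the estimator's two points roughly agree. A hypothetical restatement with illustrative numbers:

import numpy as np

def predict_box(box, average_velocity, camera_fps, movement_time,
                estimate_a, estimate_b):
    # hypothetical restatement: the velocity estimator produced two points;
    # trust the extrapolation only when they roughly agree (within 5 px)
    (x1, y1), (x2, y2) = estimate_a, estimate_b
    distance = np.linalg.norm([x2 - x1, y2 - y1])
    if distance <= 5:
        # extrapolate every box coordinate by its velocity component
        return [round(x + camera_fps * movement_time * v)
                for x, v in zip(box, average_velocity)]
    # estimate was bad: keep the detected box unchanged
    return list(box)

print(predict_box([100, 100, 200, 300], (2, 1, 2, 1), 5, 0.5,
                  (10, 10), (12, 11)))   # points agree -> extrapolated box
print(predict_box([100, 100, 200, 300], (2, 1, 2, 1), 5, 0.5,
                  (10, 10), (80, 90)))   # points disagree -> original box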
@@ -545,7 +569,15 @@ class PtzAutoTracker:
 
         if camera_config.onvif.autotracking.zooming == ZoomingModeEnum.relative:
             # relative zooming concurrently with pan/tilt
-            zoom = obj.obj_data["area"] / (camera_width * camera_height)
+            zoom = min(
+                obj.obj_data["area"]
+                / (camera_width * camera_height)
+                * 100
+                * self.zoom_factor[camera],
+                1,
+            )
+
+            logger.debug(f"Zoom value: {zoom}")
 
             # test if we need to zoom out
             if not self._should_zoom_in(
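The new formula scales the object's fractional frame area by 100 * zoom_factor and clamps it to the ONVIF zoom range [0, 1]. A small sketch restating it, with assumed numbers for illustration only:

def relative_zoom(area, camera_width, camera_height, zoom_factor):
    # restates the diff's formula: fractional frame area, scaled by
    # 100 * zoom_factor, clamped to the generic ONVIF range [0, 1]
    return min(area / (camera_width * camera_height) * 100 * zoom_factor, 1)

print(relative_zoom(9_216, 1280, 720, 0.3))   # 1% of frame -> 0.3
print(relative_zoom(92_160, 1280, 720, 0.3))  # 10% of frame -> clamped to 1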
@@ -554,15 +586,18 @@ class PtzAutoTracker:
                 if camera_config.onvif.autotracking.movement_weights
                 else obj.obj_data["box"],
                 obj.obj_data["area"],
+                average_velocity,
             ):
                 zoom = -(1 - zoom)
 
-            # don't make small movements if area hasn't changed significantly
+            # don't make small movements to zoom in if area hasn't changed significantly
+            # but always zoom out if necessary
            if (
                 "area" in obj.previous
                 and abs(obj.obj_data["area"] - obj.previous["area"])
                 / obj.obj_data["area"]
-                < 0.1
+                < 0.2
+                and zoom > 0
             ):
                 zoom = 0
             else:
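The `and zoom > 0` condition is what makes the hold rule one-sided: small area changes no longer trigger a zoom in, but a pending zoom out always goes through. A hypothetical helper restating that hysteresis:

def suppress_small_zoom(zoom, area, previous_area):
    # hypothetical restatement: skip zooming in when the area changed by
    # less than 20%, but always let a zoom out (zoom <= 0) pass through
    if previous_area and abs(area - previous_area) / area < 0.2 and zoom > 0:
        return 0
    return zoom

print(suppress_small_zoom(0.3, 10_500, 10_000))   # ~5% change -> 0 (hold)
print(suppress_small_zoom(-0.7, 10_500, 10_000))  # zoom out passes through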
@@ -579,7 +614,7 @@ class PtzAutoTracker:
 
             if 0 < zoom_level <= 1:
                 if self._should_zoom_in(
-                    camera, obj.obj_data["box"], obj.obj_data["area"]
+                    camera, obj.obj_data["box"], obj.obj_data["area"], (0, 0, 0, 0)
                 ):
                     zoom = min(1.0, zoom_level + 0.1)
                 else:
@@ -637,10 +672,11 @@ class PtzAutoTracker:
         # more often to keep the object in the center. Raising the percentage will cause less
         # movement and will be more flexible with objects not quite being centered.
         # TODO: there's probably a better way to approach this
-        distance = math.sqrt(
-            (obj.obj_data["centroid"][0] - camera_config.detect.width / 2) ** 2
-            + (obj.obj_data["centroid"][1] - camera_config.detect.height / 2)
-            ** 2
+        distance = np.linalg.norm(
+            [
+                obj.obj_data["centroid"][0] - camera_config.detect.width / 2,
+                obj.obj_data["centroid"][1] - camera_config.detect.height / 2,
+            ]
         )
 
         obj_width = obj.obj_data["box"][2] - obj.obj_data["box"][0]
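This is the "use numpy for euclidean distance" item: the two forms compute the same centroid-to-frame-center distance. A quick check with assumed numbers:

import math
import numpy as np

centroid = (700, 400)       # assumed object centroid
width, height = 1280, 720   # assumed detect resolution

# the removed math.sqrt form and the added np.linalg.norm form agree
old = math.sqrt((centroid[0] - width / 2) ** 2 + (centroid[1] - height / 2) ** 2)
new = np.linalg.norm([centroid[0] - width / 2, centroid[1] - height / 2])
assert math.isclose(old, new)
print(new)  # distance of the centroid from the frame center, in pixels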