diff --git a/docs/docs/configuration/authentication.md b/docs/docs/configuration/authentication.md
index f83659c52..84204cd46 100644
--- a/docs/docs/configuration/authentication.md
+++ b/docs/docs/configuration/authentication.md
@@ -97,7 +97,7 @@ python3 -c 'import secrets; print(secrets.token_hex(64))'
 
 ### Header mapping
 
-If you have disabled Frigate's authentication and your proxy supports passing a header with authenticated usernames and/or roles, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` and `X-Forwarded-Role` values. Header names are not case sensitive.
+If you have disabled Frigate's authentication and your proxy supports passing a header with authenticated usernames and/or roles, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` and `X-Forwarded-Role` values. Header names are not case sensitive. Multiple values can be included in the role header, but they must be comma-separated.
 
 ```yaml
 proxy:
diff --git a/docs/docs/configuration/face_recognition.md b/docs/docs/configuration/face_recognition.md
index 7a4cde558..d3170ec75 100644
--- a/docs/docs/configuration/face_recognition.md
+++ b/docs/docs/configuration/face_recognition.md
@@ -105,19 +105,25 @@ When choosing images to include in the face training set it is recommended to al
 
 :::
 
+### Understanding the Train Tab
+
+The Train tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
+
+Each face image is labeled with a name (or `Unknown`) along with the confidence score of the recognition attempt. While each image can be used to train the system for a specific person, not all images are suitable for training.
+
+Refer to the guidelines below for best practices on selecting images for training.
+
 ### Step 1 - Building a Strong Foundation
 
 When first enabling face recognition it is important to build a foundation of strong images. It is recommended to start by uploading 1-5 photos containing just this person's face. It is important that the person's face in the photo is front-facing and not turned, this will ensure a good starting point.
 
-Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are front-facing. Ignore images from cameras that recognize faces from an angle.
+Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are front-facing. Ignore images from cameras that recognize faces from an angle. Aim to strike a balance between the quality of images while also having a range of conditions (day / night, different weather conditions, different times of day, etc.) in order to have diversity in the images used for each person and not have over-fitting.
 
-Aim to strike a balance between the quality of images while also having a range of conditions (day / night, different weather conditions, different times of day, etc.) in order to have diversity in the images used for each person and not have over-fitting. 
-
-Once a person starts to be consistently recognized correctly on images that are front-facing, it is time to move on to the next step.
+You do not want to train on images that score 90%+ as these are already being confidently recognized. In this step the goal is to train on clear, lower-scoring front-facing images until the majority of front-facing images for a given person are consistently recognized correctly. Then it is time to move on to step 2.
 
 ### Step 2 - Expanding The Dataset
 
-Once front-facing images are performing well, start choosing slightly off-angle images to include for training. It is important to still choose images where enough face detail is visible to recognize someone.
+Once front-facing images are performing well, start choosing slightly off-angle images to include for training. It is important to still choose images where enough face detail is visible to recognize someone, and you still only want to train on images that score lower.
 
 ## FAQ
 
diff --git a/docs/docs/configuration/zones.md b/docs/docs/configuration/zones.md
index a23a3a617..717dd9df7 100644
--- a/docs/docs/configuration/zones.md
+++ b/docs/docs/configuration/zones.md
@@ -165,7 +165,7 @@ These speed values are output as a number in miles per hour (mph) or kilometers
 
 #### Best practices and caveats
 
-- Speed estimation works best with a straight road or path when your object travels in a straight line across that path. Avoid creating your zone near intersections or anywhere that objects would make a turn. If the bounding box changes shape (either because the object made a turn or became partially obscured, for example), speed estimation will not be accurate.
+- Speed estimation works best with a straight road or path when your object travels in a straight line across that path. Avoid creating your zone near intersections or anywhere that objects would make a turn. A large zone can be used, but it's not required and may even cause issues: if the object's bounding box changes shape (such as when it turns or becomes partially hidden), the speed estimate will be inaccurate. Generally it's best to make your zone large enough to capture a few frames, but small enough so that the bounding box doesn't change size as it travels through the zone.
 - Create a zone where the bottom center of your object's bounding box travels directly through it and does not become obscured at any time. See the photo example above.
 - Depending on the size and location of your zone, you may want to decrease the zone's `inertia` value from the default of 3.
 - The more accurate your real-world dimensions can be measured, the more accurate speed estimation will be. However, due to the way Frigate's tracking algorithm works, you may need to tweak the real-world distance values so that estimated speeds better match real-world speeds.
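The zone-sizing guidance above hinges on the bottom center of the object's bounding box staying inside the zone for a handful of frames. As a rough, standalone illustration of that check (not Frigate's actual implementation — the polygon, box values, and function name below are hypothetical), a point-in-polygon test looks like this:

```python
from shapely.geometry import Point, Polygon

# Hypothetical zone polygon in pixel coordinates (x, y).
zone = Polygon([(100, 400), (500, 380), (520, 520), (80, 540)])


def bottom_center_in_zone(box: tuple[int, int, int, int], zone: Polygon) -> bool:
    """Return True if the bounding box's bottom-center point lies inside the zone.

    `box` is (x1, y1, x2, y2) in pixels; the bottom center is the point the
    best-practices list above refers to.
    """
    x1, _, x2, y2 = box
    return zone.contains(Point((x1 + x2) / 2, y2))


# A vehicle driving straight across the zone: its bottom center stays inside,
# so the zone keeps collecting position samples for the speed estimate.
print(bottom_center_in_zone((220, 260, 360, 470), zone))  # True
```

If the bounding box changes shape mid-pass or its bottom edge leaves the zone, frames stop qualifying and the estimate degrades, which is why the guidance favors a compact zone on a straight section of road.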
diff --git a/frigate/api/media.py b/frigate/api/media.py
index 9c56e363d..717441f36 100644
--- a/frigate/api/media.py
+++ b/frigate/api/media.py
@@ -593,10 +593,12 @@ def recording_clip(
         clip: Recordings
         for clip in recordings:
             file.write(f"file '{clip.path}'\n")
+            # if this is the starting clip, add an inpoint
            if clip.start_time < start_ts:
                 file.write(f"inpoint {int(start_ts - clip.start_time)}\n")
-            # if this is the ending clip, add an outpoint
+
+            # if this is the ending clip and end trim is enabled, add an outpoint
             if clip.end_time > end_ts:
                 file.write(f"outpoint {int(end_ts - clip.start_time)}\n")
 
@@ -641,7 +643,12 @@
 @router.get("/vod/{camera_name}/start/{start_ts}/end/{end_ts}")
 def vod_ts(camera_name: str, start_ts: float, end_ts: float):
     recordings = (
-        Recordings.select(Recordings.path, Recordings.duration, Recordings.end_time)
+        Recordings.select(
+            Recordings.path,
+            Recordings.duration,
+            Recordings.end_time,
+            Recordings.start_time,
+        )
         .where(
             Recordings.start_time.between(start_ts, end_ts)
             | Recordings.end_time.between(start_ts, end_ts)
@@ -661,14 +668,19 @@ def vod_ts(camera_name: str, start_ts: float, end_ts: float):
         clip = {"type": "source", "path": recording.path}
         duration = int(recording.duration * 1000)
 
-        # Determine if we need to end the last clip early
+        # adjust start offset if start_ts is after recording.start_time
+        if start_ts > recording.start_time:
+            inpoint = int((start_ts - recording.start_time) * 1000)
+            clip["clipFrom"] = inpoint
+            duration -= inpoint
+
+        # adjust end if recording.end_time is after end_ts
         if recording.end_time > end_ts:
             duration -= int((recording.end_time - end_ts) * 1000)
 
-            if duration == 0:
-                # this means the segment starts right at the end of the requested time range
-                # and it does not need to be included
-                continue
+        if duration <= 0:
+            # skip if the clip has no valid duration
+            continue
 
         if 0 < duration < max_duration_ms:
             clip["keyFrameDurations"] = [duration]
diff --git a/frigate/camera/state.py b/frigate/camera/state.py
index f7b60ed68..e5a9ada9d 100644
--- a/frigate/camera/state.py
+++ b/frigate/camera/state.py
@@ -282,9 +282,13 @@ class CameraState:
                     }
                     new_obj.thumbnail_data = thumbnail_data
                     tracked_objects[id].thumbnail_data = thumbnail_data
-                    self.best_objects[new_obj.obj_data["label"]] = new_obj
+                    object_type = new_obj.obj_data["label"]
+                    self.best_objects[object_type] = new_obj
 
                     # call event handlers
+                    for c in self.callbacks["snapshot"]:
+                        c(self.name, self.best_objects[object_type], frame_name)
+
                     for c in self.callbacks["start"]:
                         c(self.name, new_obj, frame_name)
 
diff --git a/frigate/config/classification.py b/frigate/config/classification.py
index 7f4f39bbd..6733ade86 100644
--- a/frigate/config/classification.py
+++ b/frigate/config/classification.py
@@ -78,7 +78,7 @@ class FaceRecognitionConfig(FrigateBaseModel):
         le=1.0,
     )
     min_area: int = Field(
-        default=500, title="Min area of face box to consider running face recognition."
+        default=750, title="Min area of face box to consider running face recognition."
     )
     save_attempts: int = Field(
         default=100, ge=0, title="Number of face attempts to save in the train tab."
     )
@@ -91,7 +91,7 @@ class CameraFaceRecognitionConfig(FrigateBaseModel):
     enabled: bool = Field(default=False, title="Enable face recognition.")
     min_area: int = Field(
-        default=500, title="Min area of face box to consider running face recognition."
+        default=750, title="Min area of face box to consider running face recognition."
     )
 
     model_config = ConfigDict(extra="forbid", protected_namespaces=())
diff --git a/frigate/ptz/autotrack.py b/frigate/ptz/autotrack.py
index 92bfe0ec0..f9fb70d4f 100644
--- a/frigate/ptz/autotrack.py
+++ b/frigate/ptz/autotrack.py
@@ -1171,7 +1171,20 @@ class PtzAutoTracker:
                 zoom_predicted_movement_time = 0
 
                 if np.any(average_velocity):
-                    zoom_predicted_movement_time = abs(zoom) * self.zoom_time[camera]
+                    # Calculate the intended change in zoom level
+                    zoom_change = (1 - abs(zoom)) * (1 if zoom >= 0 else -1)
+
+                    # Calculate new zoom level and clamp to [0, 1]
+                    new_zoom = max(
+                        0, min(1, self.ptz_metrics[camera].zoom_level.value + zoom_change)
+                    )
+
+                    # Calculate the actual zoom distance
+                    zoom_distance = abs(
+                        new_zoom - self.ptz_metrics[camera].zoom_level.value
+                    )
+
+                    zoom_predicted_movement_time = zoom_distance * self.zoom_time[camera]
 
                     zoom_predicted_box = (
                         predicted_box
@@ -1188,7 +1201,7 @@
                     tilt = (0.5 - (centroid_y / camera_height)) * 2
 
                 logger.debug(
-                    f"{camera}: Zoom amount: {zoom}, zoom predicted time: {zoom_predicted_movement_time}, zoom predicted box: {tuple(zoom_predicted_box)}"
+                    f"{camera}: Zoom amount: {zoom}, zoom distance: {zoom_distance}, zoom predicted time: {zoom_predicted_movement_time}, zoom predicted box: {tuple(zoom_predicted_box)}"
                 )
 
                 self._enqueue_move(camera, obj.obj_data["frame_time"], pan, tilt, zoom)
diff --git a/web/public/locales/en/components/filter.json b/web/public/locales/en/components/filter.json
index 7ec5c752e..08a0ee2b2 100644
--- a/web/public/locales/en/components/filter.json
+++ b/web/public/locales/en/components/filter.json
@@ -17,6 +17,7 @@
     }
   },
   "dates": {
+    "selectPreset": "Select a Preset…",
     "all": {
       "title": "All Dates",
       "short": "Dates"
diff --git a/web/public/locales/en/views/explore.json b/web/public/locales/en/views/explore.json
index e8a5153ee..13cc494ea 100644
--- a/web/public/locales/en/views/explore.json
+++ b/web/public/locales/en/views/explore.json
@@ -43,6 +43,8 @@
     "adjustAnnotationSettings": "Adjust annotation settings",
     "scrollViewTips": "Scroll to view the significant moments of this object's lifecycle.",
     "autoTrackingTips": "Bounding box positions will be inaccurate for autotracking cameras.",
+    "count": "{{first}} of {{second}}",
+    "trackedPoint": "Tracked Point",
     "lifecycleItemDesc": {
       "visible": "{{label}} detected",
       "entered_zone": "{{label}} entered {{zones}}",
diff --git a/web/src/components/overlay/detail/ObjectLifecycle.tsx b/web/src/components/overlay/detail/ObjectLifecycle.tsx
index b7c43b2a3..3fc702854 100644
--- a/web/src/components/overlay/detail/ObjectLifecycle.tsx
+++ b/web/src/components/overlay/detail/ObjectLifecycle.tsx
@@ -525,7 +525,10 @@ export default function ObjectLifecycle({
               {t("objectLifecycle.scrollViewTips")}
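For reference, the revised zoom-time computation in `autotrack.py` above can be read as a small pure function. The sketch below reproduces the same arithmetic with hypothetical inputs; the function name, parameter descriptions, and example values are illustrative and not part of the change.

```python
def predicted_zoom_time(current_zoom: float, zoom: float, zoom_time: float) -> float:
    """Sketch of the clamped zoom-distance logic from the autotrack change above.

    current_zoom is the camera's zoom level in [0, 1], zoom is the relative
    zoom command (sign gives direction), and zoom_time is the seconds needed
    for a full-range zoom. All values and ranges here are illustrative.
    """
    # Intended change in zoom level (same expression as in the change above).
    zoom_change = (1 - abs(zoom)) * (1 if zoom >= 0 else -1)

    # Clamp the resulting zoom level to the valid [0, 1] range.
    new_zoom = max(0, min(1, current_zoom + zoom_change))

    # The movement time now scales with the distance actually traveled,
    # not with the raw zoom command.
    zoom_distance = abs(new_zoom - current_zoom)
    return zoom_distance * zoom_time


# Near the upper zoom limit the clamp shrinks the distance, so the predicted
# movement time drops compared to the old abs(zoom) * zoom_time estimate.
print(round(predicted_zoom_time(current_zoom=0.9, zoom=0.6, zoom_time=4.0), 3))  # 0.4
```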