* face library i18n fixes

* face library i18n fixes

* add ability to use ctrl/cmd S to save in the config editor

* Use datetime as ID

* Update metrics inference speed to start with 0 ms

* Fix Android formatted thumbnail

* Ensure comma-separated roles are split and stripped correctly

* improve face library deletion

- add a confirmation dialog
- add ability to select all / delete faces in collections

* Implement lazy loading for video previews

* Force GPU for large embedding model

* Note in the docs that a GPU is required for the large model

* settings i18n fixes

* Don't delete train tab

* Add webpush debugging logs

* Fix incorrectly copying zones

* copy path data

* Ensure that cache dir exists for Frigate+

* face docs update

* Add description to upload image step to clarify the expected image

* Clean up

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Josh Hawkins 2025-05-09 08:36:44 -05:00 committed by GitHub
parent 52d94231c7
commit 8094dd4075
27 changed files with 402 additions and 195 deletions

View File

@@ -34,7 +34,7 @@ All of these features run locally on your system.
 The `small` model is optimized for efficiency and runs on the CPU, most CPUs should run the model efficiently.
-The `large` model is optimized for accuracy, an integrated or discrete GPU is highly recommended. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
+The `large` model is optimized for accuracy, an integrated or discrete GPU is required. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
 ## Configuration
@@ -107,17 +107,17 @@ When choosing images to include in the face training set it is recommended to al
 ### Step 1 - Building a Strong Foundation
-When first enabling face recognition it is important to build a foundation of strong images. It is recommended to start by uploading 1-5 "portrait" photos for each person. It is important that the person's face in the photo is straight-on and not turned which will ensure a good starting point.
+When first enabling face recognition it is important to build a foundation of strong images. It is recommended to start by uploading 1-5 photos containing just this person's face. It is important that the person's face in the photo is front-facing and not turned, this will ensure a good starting point.
-Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are straight-on. Ignore images from cameras that recognize faces from an angle.
+Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are front-facing. Ignore images from cameras that recognize faces from an angle.
 Aim to strike a balance between the quality of images while also having a range of conditions (day / night, different weather conditions, different times of day, etc.) in order to have diversity in the images used for each person and not have over-fitting.
-Once a person starts to be consistently recognized correctly on images that are straight-on, it is time to move on to the next step.
+Once a person starts to be consistently recognized correctly on images that are front-facing, it is time to move on to the next step.
 ### Step 2 - Expanding The Dataset
-Once straight-on images are performing well, start choosing slightly off-angle images to include for training. It is important to still choose images where enough face detail is visible to recognize someone.
+Once front-facing images are performing well, start choosing slightly off-angle images to include for training. It is important to still choose images where enough face detail is visible to recognize someone.
 ## FAQ
@@ -156,3 +156,7 @@ Face recognition does not run on the recording stream, this would be suboptimal
 ### I get an unknown error when taking a photo directly with my iPhone
 By default iOS devices will use HEIC (High Efficiency Image Container) for images, but this format is not supported for uploads. Choosing `large` as the format instead of `original` will use JPG which will work correctly.
+## How can I delete the face database and start over?
+Frigate does not store anything in its database related to face recognition. You can simply delete all of your faces through the Frigate UI or remove the contents of the `/media/frigate/clips/faces` directory.
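As a concrete illustration of that reset, a short script that clears the faces directory; a minimal sketch assuming the default media path inside the container, not part of the diff:

```python
import shutil
from pathlib import Path

# Default location of stored face images; adjust if your media path differs.
FACE_DIR = Path("/media/frigate/clips/faces")

# Remove every per-person folder (and any loose files) to start over.
for entry in FACE_DIR.iterdir():
    if entry.is_dir():
        shutil.rmtree(entry)
    else:
        entry.unlink()
```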

View File

@@ -268,7 +268,9 @@ def auth(request: Request):
     # if comma-separated with "admin", use "admin", else use default role
     success_response.headers["remote-role"] = (
-        "admin" if role and "admin" in role else proxy_config.default_role
+        "admin"
+        if role and "admin" in [r.strip() for r in role.split(",")]
+        else proxy_config.default_role
     )
     return success_response
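The stricter check matters because the old substring test would also match header values like `superadmin`; splitting the comma-separated header and stripping whitespace only accepts an exact `admin` entry. A small self-contained sketch of the difference (the header values and the `resolve_role` helper are made up for illustration):

```python
def resolve_role(role: str | None, default_role: str = "viewer") -> str:
    # Old behavior: plain substring test, so "superadmin" wrongly matched.
    # New behavior: split the comma-separated header and strip each entry.
    if role and "admin" in [r.strip() for r in role.split(",")]:
        return "admin"
    return default_role


print(resolve_role("viewer, admin"))  # admin (stripped entry matches)
print(resolve_role("superadmin"))     # viewer (no exact "admin" entry)
print(resolve_role(None))             # viewer (falls back to the default)
```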

View File

@@ -1,10 +1,9 @@
 """Object classification APIs."""
+import datetime
 import logging
 import os
-import random
 import shutil
-import string
 import cv2
 from fastapi import APIRouter, Depends, Request, UploadFile
@@ -120,8 +119,7 @@ def train_face(request: Request, name: str, body: dict = None):
     )
     sanitized_name = sanitize_filename(name)
-    rand_id = "".join(random.choices(string.ascii_lowercase + string.digits, k=6))
-    new_name = f"{sanitized_name}-{rand_id}.webp"
+    new_name = f"{sanitized_name}-{datetime.datetime.now().timestamp()}.webp"
     new_file_folder = os.path.join(FACE_DIR, f"{sanitized_name}")
     if not os.path.exists(new_file_folder):
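The "Use datetime as ID" item swaps the 6-character random suffix for the current Unix timestamp, so face filenames for a person also sort in rough chronological order. A small sketch of the naming scheme (the helper name is made up):

```python
import datetime

def face_filename(sanitized_name: str) -> str:
    # e.g. "josh-1746798123.456789.webp"; lexicographic order of these names
    # roughly matches creation order, unlike the old random 6-character IDs.
    return f"{sanitized_name}-{datetime.datetime.now().timestamp()}.webp"

print(face_filename("josh"))
```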

View File

@@ -909,7 +909,7 @@ def event_thumbnail(
     elif extension == "webp":
         quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), 60]
-    _, img = cv2.imencode(f".{img}", thumbnail, quality_params)
+    _, img = cv2.imencode(f".{extension}", thumbnail, quality_params)
     thumbnail_bytes = img.tobytes()
     return Response(

View File

@@ -303,6 +303,9 @@ class WebPushClient(Communicator):  # type: ignore[misc]
                 and len(payload["before"]["data"]["zones"])
                 == len(payload["after"]["data"]["zones"])
             ):
+                logger.debug(
+                    f"Skipping notification for {camera} - message is an update and important fields don't have an update"
+                )
                 return
         self.last_camera_notification_time[camera] = current_time
@@ -325,6 +328,8 @@ class WebPushClient(Communicator):  # type: ignore[misc]
         direct_url = f"/review?id={reviewId}" if state == "end" else f"/#{camera}"
         ttl = 3600 if state == "end" else 0
+        logger.debug(f"Sending push notification for {camera}, review ID {reviewId}")
         for user in self.web_pushers:
             self.send_push_notification(
                 user=user,

View File

@@ -25,7 +25,7 @@ from frigate.comms.event_metadata_updater import (
 from frigate.const import CLIPS_DIR
 from frigate.embeddings.onnx.lpr_embedding import LPR_EMBEDDING_SIZE
 from frigate.types import TrackedObjectUpdateTypesEnum
-from frigate.util.builtin import EventsPerSecond
+from frigate.util.builtin import EventsPerSecond, InferenceSpeed
 from frigate.util.image import area
 logger = logging.getLogger(__name__)
@@ -36,8 +36,10 @@ WRITE_DEBUG_IMAGES = False
 class LicensePlateProcessingMixin:
     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)
+        self.plate_rec_speed = InferenceSpeed(self.metrics.alpr_speed)
         self.plates_rec_second = EventsPerSecond()
         self.plates_rec_second.start()
+        self.plate_det_speed = InferenceSpeed(self.metrics.yolov9_lpr_speed)
         self.plates_det_second = EventsPerSecond()
         self.plates_det_second.start()
         self.event_metadata_publisher = EventMetadataPublisher()
@@ -1157,22 +1159,6 @@
         # 5. Return True if previous plate scores higher
         return prev_score > curr_score
-    def __update_yolov9_metrics(self, duration: float) -> None:
-        """
-        Update inference metrics.
-        """
-        self.metrics.yolov9_lpr_speed.value = (
-            self.metrics.yolov9_lpr_speed.value * 9 + duration
-        ) / 10
-    def __update_lpr_metrics(self, duration: float) -> None:
-        """
-        Update inference metrics.
-        """
-        self.metrics.alpr_speed.value = (
-            self.metrics.alpr_speed.value * 9 + duration
-        ) / 10
     def _generate_plate_event(self, camera: str, plate: str, plate_score: float) -> str:
         """Generate a unique ID for a plate event based on camera and text."""
         now = datetime.datetime.now().timestamp()
@@ -1228,7 +1214,7 @@
                 f"{camera}: YOLOv9 LPD inference time: {(datetime.datetime.now().timestamp() - yolov9_start) * 1000:.2f} ms"
             )
             self.plates_det_second.update()
-            self.__update_yolov9_metrics(
+            self.plate_det_speed.update(
                 datetime.datetime.now().timestamp() - yolov9_start
             )
@@ -1319,7 +1305,7 @@
                 f"{camera}: YOLOv9 LPD inference time: {(datetime.datetime.now().timestamp() - yolov9_start) * 1000:.2f} ms"
             )
             self.plates_det_second.update()
-            self.__update_yolov9_metrics(
+            self.plate_det_speed.update(
                 datetime.datetime.now().timestamp() - yolov9_start
             )
@@ -1433,7 +1419,7 @@
             camera, id, license_plate_frame
         )
         self.plates_rec_second.update()
-        self.__update_lpr_metrics(datetime.datetime.now().timestamp() - start)
+        self.plate_rec_speed.update(datetime.datetime.now().timestamp() - start)
         if license_plates:
             for plate, confidence, text_area in zip(license_plates, confidences, areas):

View File

@@ -5,9 +5,7 @@ import datetime
 import json
 import logging
 import os
-import random
 import shutil
-import string
 from typing import Optional
 import cv2
@@ -27,7 +25,7 @@ from frigate.data_processing.common.face.model import (
     FaceRecognizer,
 )
 from frigate.types import TrackedObjectUpdateTypesEnum
-from frigate.util.builtin import EventsPerSecond
+from frigate.util.builtin import EventsPerSecond, InferenceSpeed
 from frigate.util.image import area
 from ..types import DataProcessorMetrics
@@ -58,6 +56,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
         self.person_face_history: dict[str, list[tuple[str, float, int]]] = {}
         self.recognizer: FaceRecognizer | None = None
         self.faces_per_second = EventsPerSecond()
+        self.inference_speed = InferenceSpeed(self.metrics.face_rec_speed)
         download_path = os.path.join(MODEL_CACHE_DIR, "facedet")
         self.model_files = {
@@ -155,9 +154,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
     def __update_metrics(self, duration: float) -> None:
         self.faces_per_second.update()
-        self.metrics.face_rec_speed.value = (
-            self.metrics.face_rec_speed.value * 9 + duration
-        ) / 10
+        self.inference_speed.update(duration)
     def process_frame(self, obj_data: dict[str, any], frame: np.ndarray):
         """Look for faces in image."""
@@ -343,11 +340,7 @@
             return {"success": True, "score": score, "face_name": sub_label}
         elif topic == EmbeddingsRequestEnum.register_face.value:
-            rand_id = "".join(
-                random.choices(string.ascii_lowercase + string.digits, k=6)
-            )
             label = request_data["face_name"]
-            id = f"{label}-{rand_id}"
             if request_data.get("cropped"):
                 thumbnail = request_data["image"]
@@ -376,7 +369,9 @@
             # write face to library
             folder = os.path.join(FACE_DIR, label)
-            file = os.path.join(folder, f"{id}.webp")
+            file = os.path.join(
+                folder, f"{label}_{datetime.datetime.now().timestamp()}.webp"
+            )
             os.makedirs(folder, exist_ok=True)
             # save face image

View File

@@ -7,7 +7,9 @@ from multiprocessing.sharedctypes import Synchronized
 class DataProcessorMetrics:
     image_embeddings_speed: Synchronized
+    image_embeddings_eps: Synchronized
     text_embeddings_speed: Synchronized
+    text_embeddings_eps: Synchronized
     face_rec_speed: Synchronized
     face_rec_fps: Synchronized
     alpr_speed: Synchronized
@@ -16,15 +18,15 @@
     yolov9_lpr_pps: Synchronized
     def __init__(self):
-        self.image_embeddings_speed = mp.Value("d", 0.01)
+        self.image_embeddings_speed = mp.Value("d", 0.0)
         self.image_embeddings_eps = mp.Value("d", 0.0)
-        self.text_embeddings_speed = mp.Value("d", 0.01)
+        self.text_embeddings_speed = mp.Value("d", 0.0)
         self.text_embeddings_eps = mp.Value("d", 0.0)
-        self.face_rec_speed = mp.Value("d", 0.01)
+        self.face_rec_speed = mp.Value("d", 0.0)
         self.face_rec_fps = mp.Value("d", 0.0)
-        self.alpr_speed = mp.Value("d", 0.01)
+        self.alpr_speed = mp.Value("d", 0.0)
         self.alpr_pps = mp.Value("d", 0.0)
-        self.yolov9_lpr_speed = mp.Value("d", 0.01)
+        self.yolov9_lpr_speed = mp.Value("d", 0.0)
         self.yolov9_lpr_pps = mp.Value("d", 0.0)

View File

@@ -126,6 +126,9 @@ class ModelConfig(BaseModel):
         if not self.path or not self.path.startswith("plus://"):
             return
+        # ensure that model cache dir exists
+        os.makedirs(MODEL_CACHE_DIR, exist_ok=True)
         model_id = self.path[7:]
         self.path = os.path.join(MODEL_CACHE_DIR, model_id)
         model_info_path = f"{self.path}.json"

View File

@@ -235,7 +235,7 @@ class EmbeddingsContext:
             if os.path.isfile(file_path):
                 os.unlink(file_path)
-        if len(os.listdir(folder)) == 0:
+        if face != "train" and len(os.listdir(folder)) == 0:
             os.rmdir(folder)
         self.requestor.send_data(

View File

@@ -21,7 +21,7 @@ from frigate.data_processing.types import DataProcessorMetrics
 from frigate.db.sqlitevecq import SqliteVecQueueDatabase
 from frigate.models import Event
 from frigate.types import ModelStatusTypesEnum
-from frigate.util.builtin import EventsPerSecond, serialize
+from frigate.util.builtin import EventsPerSecond, InferenceSpeed, serialize
 from frigate.util.path import get_event_thumbnail_bytes
 from .onnx.jina_v1_embedding import JinaV1ImageEmbedding, JinaV1TextEmbedding
@@ -75,8 +75,10 @@ class Embeddings:
         self.metrics = metrics
         self.requestor = InterProcessRequestor()
+        self.image_inference_speed = InferenceSpeed(self.metrics.image_embeddings_speed)
         self.image_eps = EventsPerSecond()
         self.image_eps.start()
+        self.text_inference_speed = InferenceSpeed(self.metrics.text_embeddings_speed)
         self.text_eps = EventsPerSecond()
         self.text_eps.start()
@@ -183,10 +185,7 @@
             (event_id, serialize(embedding)),
         )
-        duration = datetime.datetime.now().timestamp() - start
-        self.metrics.image_embeddings_speed.value = (
-            self.metrics.image_embeddings_speed.value * 9 + duration
-        ) / 10
+        self.image_inference_speed.update(datetime.datetime.now().timestamp() - start)
         self.image_eps.update()
         return embedding
@@ -220,9 +219,7 @@
         )
         duration = datetime.datetime.now().timestamp() - start
-        self.metrics.text_embeddings_speed.value = (
-            self.metrics.text_embeddings_speed.value * 9 + (duration / len(ids))
-        ) / 10
+        self.text_inference_speed.update(duration / len(ids))
         return embeddings
@@ -241,10 +238,7 @@
             (event_id, serialize(embedding)),
         )
-        duration = datetime.datetime.now().timestamp() - start
-        self.metrics.text_embeddings_speed.value = (
-            self.metrics.text_embeddings_speed.value * 9 + duration
-        ) / 10
+        self.text_inference_speed.update(datetime.datetime.now().timestamp() - start)
         self.text_eps.update()
         return embedding
@@ -276,10 +270,7 @@
             items,
         )
-        duration = datetime.datetime.now().timestamp() - start
-        self.metrics.text_embeddings_speed.value = (
-            self.metrics.text_embeddings_speed.value * 9 + (duration / len(ids))
-        ) / 10
+        self.text_inference_speed.update(datetime.datetime.now().timestamp() - start)
         return embeddings

View File

@@ -23,10 +23,7 @@ FACENET_INPUT_SIZE = 160
 class FaceNetEmbedding(BaseEmbedding):
-    def __init__(
-        self,
-        device: str = "AUTO",
-    ):
+    def __init__(self):
         super().__init__(
             model_name="facedet",
             model_file="facenet.tflite",
@@ -34,7 +31,6 @@ class FaceNetEmbedding(BaseEmbedding):
                 "facenet.tflite": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/facenet.tflite",
             },
         )
-        self.device = device
         self.download_path = os.path.join(MODEL_CACHE_DIR, self.model_name)
         self.tokenizer = None
         self.feature_extractor = None
@@ -113,10 +109,7 @@
 class ArcfaceEmbedding(BaseEmbedding):
-    def __init__(
-        self,
-        device: str = "AUTO",
-    ):
+    def __init__(self):
         super().__init__(
             model_name="facedet",
             model_file="arcface.onnx",
@@ -124,7 +117,6 @@ class ArcfaceEmbedding(BaseEmbedding):
                 "arcface.onnx": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/arcface.onnx",
             },
         )
-        self.device = device
         self.download_path = os.path.join(MODEL_CACHE_DIR, self.model_name)
         self.tokenizer = None
         self.feature_extractor = None
@@ -154,7 +146,7 @@ class ArcfaceEmbedding(BaseEmbedding):
         self.runner = ONNXModelRunner(
             os.path.join(self.download_path, self.model_file),
-            self.device,
+            "GPU",
         )
     def _preprocess_inputs(self, raw_inputs):

View File

@@ -1,5 +1,6 @@
 """Maintain review segments in db."""
+import copy
 import json
 import logging
 import os
@@ -119,21 +120,23 @@ class PendingReviewSegment:
         )
     def get_data(self, ended: bool) -> dict:
-        return {
-            ReviewSegment.id.name: self.id,
-            ReviewSegment.camera.name: self.camera,
-            ReviewSegment.start_time.name: self.start_time,
-            ReviewSegment.end_time.name: self.last_update if ended else None,
-            ReviewSegment.severity.name: self.severity.value,
-            ReviewSegment.thumb_path.name: self.frame_path,
-            ReviewSegment.data.name: {
-                "detections": list(set(self.detections.keys())),
-                "objects": list(set(self.detections.values())),
-                "sub_labels": list(self.sub_labels.values()),
-                "zones": self.zones,
-                "audio": list(self.audio),
-            },
-        }.copy()
+        return copy.deepcopy(
+            {
+                ReviewSegment.id.name: self.id,
+                ReviewSegment.camera.name: self.camera,
+                ReviewSegment.start_time.name: self.start_time,
+                ReviewSegment.end_time.name: self.last_update if ended else None,
+                ReviewSegment.severity.name: self.severity.value,
+                ReviewSegment.thumb_path.name: self.frame_path,
+                ReviewSegment.data.name: {
+                    "detections": list(set(self.detections.keys())),
+                    "objects": list(set(self.detections.values())),
+                    "sub_labels": list(self.sub_labels.values()),
+                    "zones": self.zones,
+                    "audio": list(self.audio),
+                },
+            }
+        )
 class ReviewSegmentMaintainer(threading.Thread):
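This is the core of the "Fix incorrectly copying zones" and "copy path data" items: `dict.copy()` is shallow, so nested lists like `zones` were still shared with the live segment and kept changing after the snapshot was handed off. A minimal illustration (the values are made up):

```python
import copy

segment = {"zones": ["driveway"], "audio": []}

shallow = segment.copy()          # dict.copy() is shallow: inner lists are shared
deep = copy.deepcopy(segment)     # deepcopy duplicates the nested lists too

segment["zones"].append("porch")  # later mutation by the maintainer

print(shallow["zones"])  # ['driveway', 'porch'] - the "copy" changed as well
print(deep["zones"])     # ['driveway'] - the snapshot is unaffected
```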

View File

@@ -154,7 +154,7 @@ class TrackedObject:
             "attributes": obj_data["attributes"],
             "current_estimated_speed": self.current_estimated_speed,
             "velocity_angle": self.velocity_angle,
-            "path_data": self.path_data,
+            "path_data": self.path_data.copy(),
             "recognized_license_plate": obj_data.get(
                 "recognized_license_plate"
             ),
@@ -378,7 +378,7 @@ class TrackedObject:
             "current_estimated_speed": self.current_estimated_speed,
             "average_estimated_speed": self.average_estimated_speed,
             "velocity_angle": self.velocity_angle,
-            "path_data": self.path_data,
+            "path_data": self.path_data.copy(),
             "recognized_license_plate": self.obj_data.get("recognized_license_plate"),
         }

View File

@@ -11,6 +11,7 @@ import shlex
 import struct
 import urllib.parse
 from collections.abc import Mapping
+from multiprocessing.sharedctypes import Synchronized
 from pathlib import Path
 from typing import Any, Optional, Tuple, Union
 from zoneinfo import ZoneInfoNotFoundError
@@ -26,16 +27,16 @@ logger = logging.getLogger(__name__)
 class EventsPerSecond:
-    def __init__(self, max_events=1000, last_n_seconds=10):
+    def __init__(self, max_events=1000, last_n_seconds=10) -> None:
         self._start = None
         self._max_events = max_events
         self._last_n_seconds = last_n_seconds
         self._timestamps = []
-    def start(self):
+    def start(self) -> None:
         self._start = datetime.datetime.now().timestamp()
-    def update(self):
+    def update(self) -> None:
         now = datetime.datetime.now().timestamp()
         if self._start is None:
             self._start = now
@@ -45,7 +46,7 @@ class EventsPerSecond:
             self._timestamps = self._timestamps[(1 - self._max_events) :]
         self.expire_timestamps(now)
-    def eps(self):
+    def eps(self) -> float:
         now = datetime.datetime.now().timestamp()
         if self._start is None:
             self._start = now
@@ -58,12 +59,29 @@ class EventsPerSecond:
         return len(self._timestamps) / seconds
     # remove aged out timestamps
-    def expire_timestamps(self, now):
+    def expire_timestamps(self, now: float) -> None:
         threshold = now - self._last_n_seconds
         while self._timestamps and self._timestamps[0] < threshold:
             del self._timestamps[0]
+class InferenceSpeed:
+    def __init__(self, metric: Synchronized) -> None:
+        self.__metric = metric
+        self.__initialized = False
+    def update(self, inference_time: float) -> None:
+        if not self.__initialized:
+            self.__metric.value = inference_time
+            self.__initialized = True
+            return
+        self.__metric.value = (self.__metric.value * 9 + inference_time) / 10
+    def current(self) -> float:
+        return self.__metric.value
 def deep_merge(dct1: dict, dct2: dict, override=False, merge_lists=False) -> dict:
     """
     :param dct1: First dict to merge
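`InferenceSpeed` centralizes the exponential moving average that each processor previously re-implemented, and explains the "start with 0 ms" item: the first sample seeds the metric directly instead of being averaged against the old 0.01 s placeholder. A rough usage sketch, assuming an environment where `frigate.util.builtin` is importable:

```python
import multiprocessing as mp

from frigate.util.builtin import InferenceSpeed

# First sample seeds the shared metric; later samples are blended 90/10
# with the running average.
speed = InferenceSpeed(mp.Value("d", 0.0))
speed.update(0.120)                        # -> 0.120
speed.update(0.080)                        # -> (0.120 * 9 + 0.080) / 10 = 0.116
print(f"{speed.current() * 1000:.0f} ms")  # 116 ms
```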

View File

@@ -8,14 +8,16 @@
     "subLabelScore": "Sub Label Score",
     "scoreInfo": "The sub label score is the weighted score for all of the recognized face confidences, so this may differ from the score shown on the snapshot.",
     "face": "Face Details",
-    "faceDesc": "Details for the face and associated object",
-    "timestamp": "Timestamp"
+    "faceDesc": "Details of the tracked object that generated this face",
+    "timestamp": "Timestamp",
+    "unknown": "Unknown"
   },
   "documentTitle": "Face Library - Frigate",
   "uploadFaceImage": {
     "title": "Upload Face Image",
     "desc": "Upload an image to scan for faces and include for {{pageToggle}}"
   },
+  "collections": "Collections",
   "createFaceLibrary": {
     "title": "Create Collection",
     "desc": "Create a new collection",
@@ -25,7 +27,10 @@
   "steps": {
     "faceName": "Enter Face Name",
     "uploadFace": "Upload Face Image",
-    "nextSteps": "Next Steps"
+    "nextSteps": "Next Steps",
+    "description": {
+      "uploadFace": "Upload an image of {{name}} that shows their face from a front-facing angle. The image does not need to be cropped to just their face."
+    }
   },
   "train": {
     "title": "Train",
@@ -38,12 +43,17 @@
     "title": "Delete Name",
     "desc": "Are you sure you want to delete the collection {{name}}? This will permanently delete all associated faces."
   },
+  "deleteFaceAttempts": {
+    "title": "Delete Faces",
+    "desc_one": "Are you sure you want to delete {{count}} face? This action cannot be undone.",
+    "desc_other": "Are you sure you want to delete {{count}} faces? This action cannot be undone."
+  },
   "renameFace": {
     "title": "Rename Face",
     "desc": "Enter a new name for {{name}}"
   },
   "button": {
-    "deleteFaceAttempts": "Delete Face Attempts",
+    "deleteFaceAttempts": "Delete Faces",
     "addFace": "Add Face",
     "renameFace": "Rename Face",
     "deleteFace": "Delete Face",

View File

@@ -84,6 +84,7 @@
   },
   "classification": {
     "title": "Classification Settings",
+    "unsavedChanges": "Unsaved Classification settings changes",
    "birdClassification": {
       "title": "Bird Classification",
       "desc": "Bird classification identifies known birds using a quantized Tensorflow model. When a known bird is recognized, its common name will be added as a sub_label. This information is included in the UI, filters, as well as in notifications."
@@ -168,11 +169,12 @@
       "notSelectDetections": "All {{detectionsLabels}} objects detected in {{zone}} on {{cameraName}} not categorized as Alerts will be shown as Detections regardless of which zone they are in.",
       "regardlessOfZoneObjectDetectionsTips": "All {{detectionsLabels}} objects not categorized on {{cameraName}} will be shown as Detections regardless of which zone they are in."
     },
+    "unsavedChanges": "Unsaved Review Classification settings for {{camera}}",
     "selectAlertsZones": "Select zones for Alerts",
     "selectDetectionsZones": "Select zones for Detections",
     "limitDetections": "Limit detections to specific zones",
     "toast": {
-      "success": "Review classification configuration has been saved. Restart Frigate to apply changes."
+      "success": "Review Classification configuration has been saved. Restart Frigate to apply changes."
     }
   }
 },
@@ -338,6 +340,7 @@
   },
   "motionDetectionTuner": {
     "title": "Motion Detection Tuner",
+    "unsavedChanges": "Unsaved Motion Tuner changes ({{camera}})",
     "desc": {
       "title": "Frigate uses motion detection as a first line check to see if there is anything happening in the frame worth checking with object detection.",
       "documentation": "Read the Motion Tuning Guide"
@@ -527,6 +530,8 @@
     "registerDevice": "Register This Device",
     "unregisterDevice": "Unregister This Device",
     "sendTestNotification": "Send a test notification",
+    "unsavedRegistrations": "Unsaved Notification registrations",
+    "unsavedChanges": "Unsaved Notification changes",
     "active": "Notifications Active",
     "suspended": "Notifications suspended {{time}}",
     "suspendTime": {
@@ -587,6 +592,7 @@
     "loadingAvailableModels": "Loading available models…",
     "modelSelect": "Your available models on Frigate+ can be selected here. Note that only models compatible with your current detector configuration can be selected."
   },
+  "unsavedChanges": "Unsaved Frigate+ settings changes",
   "restart_required": "Restart required (Frigate+ model changed)",
   "toast": {
     "success": "Frigate+ settings have been saved. Restart Frigate to apply changes.",

View File

@@ -128,13 +128,18 @@ export default function CreateFaceWizardDialog({
             </TextEntry>
           )}
           {step == 1 && (
-            <ImageEntry onSave={onUploadImage}>
-              <div className="flex justify-end py-2">
-                <Button variant="select" type="submit">
-                  {t("button.next", { ns: "common" })}
-                </Button>
+            <>
+              <div className="px-8 py-2 text-center text-sm text-secondary-foreground">
+                {t("steps.description.uploadFace", { name })}
               </div>
-            </ImageEntry>
+              <ImageEntry onSave={onUploadImage}>
+                <div className="flex justify-end py-2">
+                  <Button variant="select" type="submit">
+                    {t("button.next", { ns: "common" })}
+                  </Button>
+                </div>
+              </ImageEntry>
+            </>
           )}
           {step == 2 && (
             <div className="mt-2">

View File

@ -23,6 +23,7 @@ import {
import { useTranslation } from "react-i18next"; import { useTranslation } from "react-i18next";
type PreviewPlayerProps = { type PreviewPlayerProps = {
previewRef?: (ref: HTMLDivElement | null) => void;
className?: string; className?: string;
camera: string; camera: string;
timeRange: TimeRange; timeRange: TimeRange;
@ -30,16 +31,19 @@ type PreviewPlayerProps = {
startTime?: number; startTime?: number;
isScrubbing: boolean; isScrubbing: boolean;
forceAspect?: number; forceAspect?: number;
isVisible?: boolean;
onControllerReady: (controller: PreviewController) => void; onControllerReady: (controller: PreviewController) => void;
onClick?: () => void; onClick?: () => void;
}; };
export default function PreviewPlayer({ export default function PreviewPlayer({
previewRef,
className, className,
camera, camera,
timeRange, timeRange,
cameraPreviews, cameraPreviews,
startTime, startTime,
isScrubbing, isScrubbing,
isVisible = true,
onControllerReady, onControllerReady,
onClick, onClick,
}: PreviewPlayerProps) { }: PreviewPlayerProps) {
@ -54,6 +58,7 @@ export default function PreviewPlayer({
if (currentPreview) { if (currentPreview) {
return ( return (
<PreviewVideoPlayer <PreviewVideoPlayer
visibilityRef={previewRef}
className={className} className={className}
camera={camera} camera={camera}
timeRange={timeRange} timeRange={timeRange}
@ -61,6 +66,7 @@ export default function PreviewPlayer({
initialPreview={currentPreview} initialPreview={currentPreview}
startTime={startTime} startTime={startTime}
isScrubbing={isScrubbing} isScrubbing={isScrubbing}
isVisible={isVisible}
currentHourFrame={currentHourFrame} currentHourFrame={currentHourFrame}
onControllerReady={onControllerReady} onControllerReady={onControllerReady}
onClick={onClick} onClick={onClick}
@ -110,6 +116,7 @@ export abstract class PreviewController {
} }
type PreviewVideoPlayerProps = { type PreviewVideoPlayerProps = {
visibilityRef?: (ref: HTMLDivElement | null) => void;
className?: string; className?: string;
camera: string; camera: string;
timeRange: TimeRange; timeRange: TimeRange;
@ -117,12 +124,14 @@ type PreviewVideoPlayerProps = {
initialPreview?: Preview; initialPreview?: Preview;
startTime?: number; startTime?: number;
isScrubbing: boolean; isScrubbing: boolean;
isVisible: boolean;
currentHourFrame?: string; currentHourFrame?: string;
onControllerReady: (controller: PreviewVideoController) => void; onControllerReady: (controller: PreviewVideoController) => void;
onClick?: () => void; onClick?: () => void;
setCurrentHourFrame: (src: string | undefined) => void; setCurrentHourFrame: (src: string | undefined) => void;
}; };
function PreviewVideoPlayer({ function PreviewVideoPlayer({
visibilityRef,
className, className,
camera, camera,
timeRange, timeRange,
@ -130,6 +139,7 @@ function PreviewVideoPlayer({
initialPreview, initialPreview,
startTime, startTime,
isScrubbing, isScrubbing,
isVisible,
currentHourFrame, currentHourFrame,
onControllerReady, onControllerReady,
onClick, onClick,
@ -267,11 +277,13 @@ function PreviewVideoPlayer({
return ( return (
<div <div
ref={visibilityRef}
className={cn( className={cn(
"relative flex w-full justify-center overflow-hidden rounded-lg bg-black md:rounded-2xl", "relative flex w-full justify-center overflow-hidden rounded-lg bg-black md:rounded-2xl",
onClick && "cursor-pointer", onClick && "cursor-pointer",
className, className,
)} )}
data-camera={camera}
onClick={onClick} onClick={onClick}
> >
<img <img
@ -286,45 +298,48 @@ function PreviewVideoPlayer({
previewRef.current?.load(); previewRef.current?.load();
}} }}
/> />
<video {isVisible && (
ref={previewRef} <video
className={`absolute size-full ${currentHourFrame ? "invisible" : "visible"}`} ref={previewRef}
preload="auto" className={`absolute size-full ${currentHourFrame ? "invisible" : "visible"}`}
autoPlay preload="auto"
playsInline autoPlay
muted playsInline
disableRemotePlayback muted
onSeeked={onPreviewSeeked} disableRemotePlayback
onLoadedData={() => { onSeeked={onPreviewSeeked}
if (firstLoad) { onLoadedData={() => {
setFirstLoad(false); if (firstLoad) {
} setFirstLoad(false);
if (controller) {
controller.previewReady();
} else {
previewRef.current?.pause();
}
if (previewRef.current) {
setVideoSize([
previewRef.current.videoWidth,
previewRef.current.videoHeight,
]);
if (startTime && currentPreview) {
previewRef.current.currentTime = startTime - currentPreview.start;
} }
}
}} if (controller) {
> controller.previewReady();
{currentPreview != undefined && ( } else {
<source previewRef.current?.pause();
src={`${baseUrl}${currentPreview.src.substring(1)}`} }
type={currentPreview.type}
/> if (previewRef.current) {
)} setVideoSize([
</video> previewRef.current.videoWidth,
previewRef.current.videoHeight,
]);
if (startTime && currentPreview) {
previewRef.current.currentTime =
startTime - currentPreview.start;
}
}
}}
>
{currentPreview != undefined && (
<source
src={`${baseUrl}${currentPreview.src.substring(1)}`}
type={currentPreview.type}
/>
)}
</video>
)}
{cameraPreviews && !currentPreview && ( {cameraPreviews && !currentPreview && (
<div className="absolute inset-0 flex items-center justify-center rounded-lg bg-background_alt text-primary dark:bg-black md:rounded-2xl"> <div className="absolute inset-0 flex items-center justify-center rounded-lg bg-background_alt text-primary dark:bg-black md:rounded-2xl">
{t("noPreviewFoundFor", { camera: camera.replaceAll("_", " ") })} {t("noPreviewFoundFor", { camera: camera.replaceAll("_", " ") })}

View File

@@ -143,6 +143,12 @@ function ConfigEditor() {
         scrollBeyondLastLine: false,
         theme: (systemTheme || theme) == "dark" ? "vs-dark" : "vs-light",
       });
+      editorRef.current?.addCommand(
+        monaco.KeyMod.CtrlCmd | monaco.KeyCode.KeyS,
+        () => {
+          onHandleSaveConfig("saveonly");
+        },
+      );
     } else if (editorRef.current) {
       editorRef.current.setModel(modelRef.current);
     }
@@ -158,7 +164,7 @@ function ConfigEditor() {
       }
       schemaConfiguredRef.current = false;
     };
-  }, [config, apiHost, systemTheme, theme]);
+  }, [config, apiHost, systemTheme, theme, onHandleSaveConfig]);
   // monitoring state

View File

@ -6,7 +6,17 @@ import CreateFaceWizardDialog from "@/components/overlay/detail/FaceCreateWizard
import TextEntryDialog from "@/components/overlay/dialog/TextEntryDialog"; import TextEntryDialog from "@/components/overlay/dialog/TextEntryDialog";
import UploadImageDialog from "@/components/overlay/dialog/UploadImageDialog"; import UploadImageDialog from "@/components/overlay/dialog/UploadImageDialog";
import FaceSelectionDialog from "@/components/overlay/FaceSelectionDialog"; import FaceSelectionDialog from "@/components/overlay/FaceSelectionDialog";
import { Button } from "@/components/ui/button"; import { Button, buttonVariants } from "@/components/ui/button";
import {
AlertDialog,
AlertDialogAction,
AlertDialogCancel,
AlertDialogContent,
AlertDialogDescription,
AlertDialogFooter,
AlertDialogHeader,
AlertDialogTitle,
} from "@/components/ui/alert-dialog";
import { import {
Dialog, Dialog,
DialogContent, DialogContent,
@ -44,7 +54,7 @@ import { TooltipPortal } from "@radix-ui/react-tooltip";
import axios from "axios"; import axios from "axios";
import { useCallback, useEffect, useMemo, useRef, useState } from "react"; import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { isDesktop, isMobile } from "react-device-detect"; import { isDesktop, isMobile } from "react-device-detect";
import { useTranslation } from "react-i18next"; import { Trans, useTranslation } from "react-i18next";
import { import {
LuFolderCheck, LuFolderCheck,
LuImagePlus, LuImagePlus,
@ -165,6 +175,11 @@ export default function FaceLibrary() {
[selectedFaces, setSelectedFaces], [selectedFaces, setSelectedFaces],
); );
const [deleteDialogOpen, setDeleteDialogOpen] = useState<{
name: string;
ids: string[];
} | null>(null);
const onDelete = useCallback( const onDelete = useCallback(
(name: string, ids: string[], isName: boolean = false) => { (name: string, ids: string[], isName: boolean = false) => {
axios axios
@ -191,7 +206,7 @@ export default function FaceLibrary() {
if (faceImages.length == 1) { if (faceImages.length == 1) {
// face has been deleted // face has been deleted
setPageToggle(""); setPageToggle("train");
} }
refreshFaces(); refreshFaces();
@ -244,29 +259,32 @@ export default function FaceLibrary() {
// keyboard // keyboard
useKeyboardListener( useKeyboardListener(["a", "Escape"], (key, modifiers) => {
page === "train" ? ["a", "Escape"] : [], if (modifiers.repeat || !modifiers.down) {
(key, modifiers) => { return;
if (modifiers.repeat || !modifiers.down) { }
return;
}
switch (key) { switch (key) {
case "a": case "a":
if (modifiers.ctrl) { if (modifiers.ctrl) {
if (selectedFaces.length) { if (selectedFaces.length) {
setSelectedFaces([]); setSelectedFaces([]);
} else { } else {
setSelectedFaces([...trainImages]); setSelectedFaces([
} ...(pageToggle === "train" ? trainImages : faceImages),
]);
} }
break; }
case "Escape": break;
setSelectedFaces([]); case "Escape":
break; setSelectedFaces([]);
} break;
}, }
); });
useEffect(() => {
setSelectedFaces([]);
}, [pageToggle]);
if (!config) { if (!config) {
return <ActivityIndicator />; return <ActivityIndicator />;
@ -276,6 +294,41 @@ export default function FaceLibrary() {
<div className="flex size-full flex-col p-2"> <div className="flex size-full flex-col p-2">
<Toaster /> <Toaster />
<AlertDialog
open={!!deleteDialogOpen}
onOpenChange={() => setDeleteDialogOpen(null)}
>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>{t("deleteFaceAttempts.title")}</AlertDialogTitle>
</AlertDialogHeader>
<AlertDialogDescription>
<Trans
ns="views/faceLibrary"
values={{ count: deleteDialogOpen?.ids.length }}
>
deleteFaceAttempts.desc
</Trans>
</AlertDialogDescription>
<AlertDialogFooter>
<AlertDialogCancel>
{t("button.cancel", { ns: "common" })}
</AlertDialogCancel>
<AlertDialogAction
className={buttonVariants({ variant: "destructive" })}
onClick={() => {
if (deleteDialogOpen) {
onDelete(deleteDialogOpen.name, deleteDialogOpen.ids);
setDeleteDialogOpen(null);
}
}}
>
{t("button.delete", { ns: "common" })}
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
<UploadImageDialog <UploadImageDialog
open={upload} open={upload}
title={t("uploadFaceImage.title")} title={t("uploadFaceImage.title")}
@ -314,7 +367,9 @@ export default function FaceLibrary() {
</div> </div>
<Button <Button
className="flex gap-2" className="flex gap-2"
onClick={() => onDelete("train", selectedFaces)} onClick={() =>
setDeleteDialogOpen({ name: pageToggle, ids: selectedFaces })
}
> >
<LuTrash2 className="size-7 rounded-md p-1 text-secondary-foreground" /> <LuTrash2 className="size-7 rounded-md p-1 text-secondary-foreground" />
{isDesktop && t("button.deleteFaceAttempts")} {isDesktop && t("button.deleteFaceAttempts")}
@ -335,7 +390,13 @@ export default function FaceLibrary() {
</div> </div>
)} )}
</div> </div>
{pageToggle && {pageToggle && faceImages.length === 0 && pageToggle !== "train" ? (
<div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
<LuFolderCheck className="size-16" />
No faces available
</div>
) : (
pageToggle &&
(pageToggle == "train" ? ( (pageToggle == "train" ? (
<TrainingGrid <TrainingGrid
config={config} config={config}
@ -349,9 +410,12 @@ export default function FaceLibrary() {
<FaceGrid <FaceGrid
faceImages={faceImages} faceImages={faceImages}
pageToggle={pageToggle} pageToggle={pageToggle}
selectedFaces={selectedFaces}
onClickFaces={onClickFaces}
onDelete={onDelete} onDelete={onDelete}
/> />
))} ))
)}
</div> </div>
); );
} }
@ -443,7 +507,7 @@ function LibrarySelector({
<DropdownMenu> <DropdownMenu>
<DropdownMenuTrigger asChild> <DropdownMenuTrigger asChild>
<Button className="flex justify-between smart-capitalize"> <Button className="flex justify-between smart-capitalize">
{pageToggle || t("selectFace")} {pageToggle == "train" ? t("train.title") : pageToggle}
<span className="ml-2 text-primary-variant"> <span className="ml-2 text-primary-variant">
({(pageToggle && faceData?.[pageToggle]?.length) || 0}) ({(pageToggle && faceData?.[pageToggle]?.length) || 0})
</span> </span>
@ -467,7 +531,7 @@ function LibrarySelector({
<> <>
<DropdownMenuSeparator /> <DropdownMenuSeparator />
<div className="mb-1 ml-1.5 text-xs text-secondary-foreground"> <div className="mb-1 ml-1.5 text-xs text-secondary-foreground">
Collections {t("collections")}
</div> </div>
</> </>
)} )}
@ -644,7 +708,7 @@ function TrainingGrid({
<div className="flex flex-col gap-1.5"> <div className="flex flex-col gap-1.5">
<div className="text-sm text-primary/40">{t("details.person")}</div> <div className="text-sm text-primary/40">{t("details.person")}</div>
<div className="text-sm smart-capitalize"> <div className="text-sm smart-capitalize">
{selectedEvent?.sub_label ?? "Unknown"} {selectedEvent?.sub_label ?? t("details.unknown")}
</div> </div>
</div> </div>
{selectedEvent?.data.sub_label_score && ( {selectedEvent?.data.sub_label_score && (
@ -793,7 +857,7 @@ function FaceAttemptGroup({
Person Person
{event?.sub_label {event?.sub_label
? `: ${event.sub_label} (${Math.round((event.data.sub_label_score || 0) * 100)}%)` ? `: ${event.sub_label} (${Math.round((event.data.sub_label_score || 0) * 100)}%)`
: ": Unknown"} : ": " + t("details.unknown")}
</div> </div>
{event && ( {event && (
<Tooltip> <Tooltip>
@ -968,7 +1032,9 @@ function FaceAttempt({
<div className="select-none p-2"> <div className="select-none p-2">
<div className="flex w-full flex-row items-center justify-between gap-2"> <div className="flex w-full flex-row items-center justify-between gap-2">
<div className="flex flex-col items-start text-xs text-primary-variant"> <div className="flex flex-col items-start text-xs text-primary-variant">
<div className="smart-capitalize">{data.name}</div> <div className="smart-capitalize">
{data.name == "unknown" ? t("details.unknown") : data.name}
</div>
<div <div
className={cn( className={cn(
"", "",
@ -1007,16 +1073,36 @@ function FaceAttempt({
type FaceGridProps = { type FaceGridProps = {
faceImages: string[]; faceImages: string[];
pageToggle: string; pageToggle: string;
selectedFaces: string[];
onClickFaces: (images: string[], ctrl: boolean) => void;
onDelete: (name: string, ids: string[]) => void; onDelete: (name: string, ids: string[]) => void;
}; };
function FaceGrid({ faceImages, pageToggle, onDelete }: FaceGridProps) { function FaceGrid({
const sortedFaces = useMemo(() => faceImages.sort().reverse(), [faceImages]); faceImages,
pageToggle,
selectedFaces,
onClickFaces,
onDelete,
}: FaceGridProps) {
const sortedFaces = useMemo(
() => (faceImages || []).sort().reverse(),
[faceImages],
);
if (sortedFaces.length === 0) {
return (
<div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
<LuFolderCheck className="size-16" />
No faces available
</div>
);
}
return ( return (
<div <div
className={cn( className={cn(
"scrollbar-container gap-2 overflow-y-scroll", "scrollbar-container gap-2 overflow-y-scroll p-1",
isDesktop ? "flex flex-wrap" : "grid grid-cols-2", isDesktop ? "flex flex-wrap" : "grid grid-cols-2 md:grid-cols-4",
)} )}
> >
{sortedFaces.map((image: string) => ( {sortedFaces.map((image: string) => (
@ -1024,6 +1110,8 @@ function FaceGrid({ faceImages, pageToggle, onDelete }: FaceGridProps) {
key={image} key={image}
name={pageToggle} name={pageToggle}
image={image} image={image}
selected={selectedFaces.includes(image)}
onClickFaces={onClickFaces}
onDelete={onDelete} onDelete={onDelete}
/> />
))} ))}
@ -1034,22 +1122,44 @@ function FaceGrid({ faceImages, pageToggle, onDelete }: FaceGridProps) {
type FaceImageProps = { type FaceImageProps = {
name: string; name: string;
image: string; image: string;
selected: boolean;
onClickFaces: (images: string[], ctrl: boolean) => void;
onDelete: (name: string, ids: string[]) => void; onDelete: (name: string, ids: string[]) => void;
}; };
function FaceImage({ name, image, onDelete }: FaceImageProps) { function FaceImage({
name,
image,
selected,
onClickFaces,
onDelete,
}: FaceImageProps) {
const { t } = useTranslation(["views/faceLibrary"]); const { t } = useTranslation(["views/faceLibrary"]);
return ( return (
<div className="relative flex flex-col rounded-lg"> <div
className={cn(
"flex cursor-pointer flex-col gap-2 rounded-lg bg-card outline outline-[3px]",
selected
? "shadow-selected outline-selected"
: "outline-transparent duration-500",
)}
onClick={(e) => {
e.stopPropagation();
onClickFaces([image], e.ctrlKey || e.metaKey);
}}
>
<div <div
className={cn( className={cn(
"w-full overflow-hidden rounded-t-lg *:text-card-foreground", "w-full overflow-hidden p-2 *:text-card-foreground",
isMobile && "flex justify-center", isMobile && "flex justify-center",
)} )}
> >
<img className="h-40" src={`${baseUrl}clips/faces/${name}/${image}`} /> <img
className="h-40 rounded-lg"
src={`${baseUrl}clips/faces/${name}/${image}`}
/>
</div> </div>
<div className="rounded-b-lg bg-card p-2"> <div className="rounded-b-lg bg-card p-3">
<div className="flex w-full flex-row items-center justify-between gap-2"> <div className="flex w-full flex-row items-center justify-between gap-2">
<div className="flex flex-col items-start text-xs text-primary-variant"> <div className="flex flex-col items-start text-xs text-primary-variant">
<div className="smart-capitalize">{name}</div> <div className="smart-capitalize">{name}</div>
@ -1059,7 +1169,10 @@ function FaceImage({ name, image, onDelete }: FaceImageProps) {
<TooltipTrigger> <TooltipTrigger>
<LuTrash2 <LuTrash2
className="size-5 cursor-pointer text-primary-variant hover:text-primary" className="size-5 cursor-pointer text-primary-variant hover:text-primary"
onClick={() => onDelete(name, [image])} onClick={(e) => {
e.stopPropagation();
onDelete(name, [image]);
}}
/> />
</TooltipTrigger> </TooltipTrigger>
<TooltipContent>{t("button.deleteFaceAttempts")}</TooltipContent> <TooltipContent>{t("button.deleteFaceAttempts")}</TooltipContent>

View File

@ -385,6 +385,55 @@ export function RecordingView({
// eslint-disable-next-line react-hooks/exhaustive-deps // eslint-disable-next-line react-hooks/exhaustive-deps
}, [previewRowRef.current?.scrollWidth, previewRowRef.current?.scrollHeight]); }, [previewRowRef.current?.scrollWidth, previewRowRef.current?.scrollHeight]);
// visibility listener for lazy loading
const [visiblePreviews, setVisiblePreviews] = useState<string[]>([]);
const visiblePreviewObserver = useRef<IntersectionObserver | null>(null);
useEffect(() => {
const visibleCameras = new Set<string>();
visiblePreviewObserver.current = new IntersectionObserver(
(entries) => {
entries.forEach((entry) => {
const camera = (entry.target as HTMLElement).dataset.camera;
if (!camera) {
return;
}
if (entry.isIntersecting) {
visibleCameras.add(camera);
} else {
visibleCameras.delete(camera);
}
setVisiblePreviews([...visibleCameras]);
});
},
{ threshold: 0.1 },
);
return () => {
visiblePreviewObserver.current?.disconnect();
};
}, []);
const previewRef = useCallback(
(node: HTMLElement | null) => {
if (!visiblePreviewObserver.current) {
return;
}
try {
if (node) visiblePreviewObserver.current.observe(node);
} catch (e) {
// no op
}
},
// we need to listen on the value of the ref
// eslint-disable-next-line react-hooks/exhaustive-deps
[visiblePreviewObserver.current],
);
return ( return (
<div ref={contentRef} className="flex size-full flex-col pt-2"> <div ref={contentRef} className="flex size-full flex-col pt-2">
<Toaster closeButton={true} /> <Toaster closeButton={true} />
@ -631,12 +680,14 @@ export function RecordingView({
}} }}
> >
<PreviewPlayer <PreviewPlayer
previewRef={previewRef}
className="size-full" className="size-full"
camera={cam} camera={cam}
timeRange={currentTimeRange} timeRange={currentTimeRange}
cameraPreviews={allPreviews ?? []} cameraPreviews={allPreviews ?? []}
startTime={startTime} startTime={startTime}
isScrubbing={scrubbing} isScrubbing={scrubbing}
isVisible={visiblePreviews.includes(cam)}
onControllerReady={(controller) => { onControllerReady={(controller) => {
previewRefs.current[cam] = controller; previewRefs.current[cam] = controller;
controller.scrubToTimestamp(startTime); controller.scrubToTimestamp(startTime);

View File

@@ -230,7 +230,9 @@ export default function CameraSettingsView({
       if (changedValue) {
         addMessage(
           "camera_settings",
-          `Unsaved review classification settings for ${capitalizeFirstLetter(selectedCamera)}`,
+          t("camera.reviewClassification.unsavedChanges", {
+            camera: selectedCamera,
+          }),
           undefined,
           `review_classification_settings_${selectedCamera}`,
         );

View File

@@ -220,7 +220,7 @@ export default function ClassificationSettingsView({
       if (changedValue) {
         addMessage(
           "search_settings",
-          `Unsaved Classification settings changes`,
+          t("classification.unsavedChanges"),
           undefined,
           "search_settings",
         );

View File

@@ -176,7 +176,7 @@ export default function FrigatePlusSettingsView({
       if (changedValue) {
         addMessage(
           "plus_settings",
-          `Unsaved Frigate+ settings changes`,
+          t("frigatePlus.unsavedChanges"),
           undefined,
           "plus_settings",
         );

View File

@@ -167,7 +167,7 @@ export default function MotionTunerView({
       if (changedValue) {
         addMessage(
           "motion_tuner",
-          `Unsaved motion tuner changes (${selectedCamera})`,
+          t("motionDetectionTuner.unsavedChanges", { camera: selectedCamera }),
           undefined,
           `motion_tuner_${selectedCamera}`,
         );

View File

@@ -105,7 +105,7 @@ export default function NotificationView({
       if (changedValue) {
         addMessage(
           "notification_settings",
-          `Unsaved notification settings`,
+          t("notification.unsavedChanges"),
           undefined,
           `notification_settings`,
         );
@@ -128,7 +128,7 @@ export default function NotificationView({
       if (registration) {
         addMessage(
           "notification_settings",
-          "Unsaved Notification Registrations",
+          t("notification.unsavedRegistrations"),
           undefined,
           "registration",
         );