Fixes (#18139)

* Catch error and show toast when failing to delete review items
* i18n keys
* add link to speed estimation docs in zone edit pane
* Implement reset of tracked object update for each camera
* Cleanup
* register mqtt callbacks for toggling alerts and detections
* clarify snapshots docs
* clarify semantic search reindexing
* add ukrainian
* adjust date granularity for last recording time

  The api endpoint only returns granularity down to the day

* Add amd hardware
* fix crash in face library on initial start after enabling
* Fix recordings view for mobile landscape

  The events view incorrectly was displaying two columns on landscape view and it only took up 20% of the screen width. Additionally, in landscape view the timeline was too wide (especially on iPads of various screen sizes) and would overlap the main video

* face rec overfitting instructions
* Clarify
* face docs
* clarify
* clarify

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

parent 8094dd4075
commit f39ddbc00d
@@ -137,6 +137,15 @@ This can happen for a few different reasons, but this is usually an indicator th
 - When you provide images with different poses, lighting, and expressions, the algorithm extracts features that are consistent across those variations.
 - By training on a diverse set of images, the algorithm becomes less sensitive to minor variations and noise in the input image.
 
+Review your face collections and remove most of the unclear or low-quality images. Then, use the **Reprocess** button on each face in the **Train** tab to evaluate how the changes affect recognition scores.
+
+Avoid training on images that already score highly, as this can lead to over-fitting. Instead, focus on relatively clear images that score lower, ideally with different lighting, angles, and conditions, to help the model generalize more effectively.
+
+### Frigate misidentified a face. Can I tell it that a face is "not" a specific person?
+
+No, face recognition does not support negative training (i.e., explicitly telling it who someone is _not_). Instead, the best approach is to improve the training data by using a more diverse and representative set of images for each person.
+
+For more guidance, refer to the section above on improving recognition accuracy.
+
 ### I see scores above the threshold in the train tab, but a sub label wasn't assigned?
 
 Frigate considers the recognition scores across all recognition attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned to a person if they are confidently recognized consistently. This avoids cases where a single high-confidence recognition would throw off the results.
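The weighted scoring described in that last answer can be sketched in a few lines. The fragment below is illustrative only, not Frigate's actual implementation; it assumes each recognition attempt is recorded as a `(sub_label, score, face_area)` tuple and borrows the `max_weight` cap that appears in the face processor's `weighted_average` signature later in this diff.

```python
# Illustrative sketch of area-weighted face scoring (not Frigate's exact code).
# Each attempt is (sub_label, score, face_area): larger faces carry more weight,
# capped at max_weight so a single huge close-up cannot dominate the history.
from collections import defaultdict


def weighted_average(
    results: list[tuple[str, float, int]], max_weight: int = 4000
) -> tuple[str | None, float]:
    totals: defaultdict[str, float] = defaultdict(float)
    weights: defaultdict[str, float] = defaultdict(float)

    for sub_label, score, area in results:
        weight = min(area, max_weight)
        totals[sub_label] += score * weight
        weights[sub_label] += weight

    if not weights:
        return None, 0.0

    best = max(weights, key=lambda name: totals[name] / weights[name])
    return best, totals[best] / weights[best]


# A sub label would only be assigned when the weighted score stays high
# across repeated attempts, e.g.:
print(weighted_average([("alice", 0.95, 8000), ("alice", 0.88, 2500)]))
```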
@@ -19,7 +19,7 @@ For best performance, 16GB or more of RAM and a dedicated GPU are recommended.
 
 ## Configuration
 
-Semantic Search is disabled by default, and must be enabled in your config file or in the UI's Settings page before it can be used. Semantic Search is a global configuration setting.
+Semantic Search is disabled by default, and must be enabled in your config file or in the UI's Classification Settings page before it can be used. Semantic Search is a global configuration setting.
 
 ```yaml
 semantic_search:
@@ -29,9 +29,9 @@ semantic_search:
 
 :::tip
 
-The embeddings database can be re-indexed from the existing tracked objects in your database by adding `reindex: True` to your `semantic_search` configuration or by toggling the switch on the Search Settings page in the UI and restarting Frigate. Depending on the number of tracked objects you have, it can take a long while to complete and may max out your CPU while indexing. Make sure to turn the UI's switch off or set the config back to `False` before restarting Frigate again.
+The embeddings database can be re-indexed from the existing tracked objects in your database by pressing the "Reindex" button in the Classification Settings in the UI or by adding `reindex: True` to your `semantic_search` configuration and restarting Frigate. Depending on the number of tracked objects you have, it can take a long while to complete and may max out your CPU while indexing.
 
-If you are enabling Semantic Search for the first time, be advised that Frigate does not automatically index older tracked objects. You will need to enable the `reindex` feature in order to do that.
+If you are enabling Semantic Search for the first time, be advised that Frigate does not automatically index older tracked objects. You will need to reindex as described above.
 
 :::
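For reference, the config-file route mentioned in that tip is a one-line addition to the block above; a minimal sketch (since config-driven reindexing runs at startup, remove `reindex` again once the run has finished so it does not re-trigger on every restart):

```yaml
semantic_search:
  enabled: True
  reindex: True
```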
@@ -72,7 +72,7 @@ For most users, especially native English speakers, the V1 model remains the rec
 
 :::note
 
-Switching between V1 and V2 requires reindexing your embeddings. To do this, set `reindex: True` in your Semantic Search configuration and restart Frigate. The embeddings from V1 and V2 are incompatible, and failing to reindex will result in incorrect search results.
+Switching between V1 and V2 requires reindexing your embeddings. The embeddings from V1 and V2 are incompatible, and failing to reindex will result in incorrect search results.
 
 :::
@@ -5,7 +5,7 @@ title: Snapshots
 
 Frigate can save a snapshot image to `/media/frigate/clips` for each object that is detected named as `<camera>-<id>.jpg`. They are also accessible [via the api](../integrations/api/event-snapshot-events-event-id-snapshot-jpg-get.api.mdx)
 
-For users with Frigate+ enabled, snapshots are accessible in the UI in the Frigate+ pane to allow for quick submission to the Frigate+ service.
+Snapshots are accessible in the UI in the Explore pane. This allows for quick submission to the Frigate+ service.
 
 To only save snapshots for objects that enter a specific zone, [see the zone docs](./zones.md#restricting-snapshots-to-specific-zones)
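To make the naming scheme concrete: a tracked object with a hypothetical id of `1748129029.384372-abc123` on a camera named `front_door` would be written to `/media/frigate/clips/front_door-1748129029.384372-abc123.jpg`. The same image can be retrieved through the HTTP API linked above; a minimal sketch, with the host and event id as placeholder assumptions:

```python
# Minimal sketch: fetch a tracked object's snapshot via Frigate's HTTP API.
# Host and event id are placeholders; the endpoint follows the
# events/<id>/snapshot.jpg route referenced in the docs above.
import requests

FRIGATE_HOST = "http://frigate.local:5000"  # assumption: adjust to your install
event_id = "1748129029.384372-abc123"       # hypothetical tracked object id

resp = requests.get(f"{FRIGATE_HOST}/api/events/{event_id}/snapshot.jpg", timeout=10)
resp.raise_for_status()

with open(f"{event_id}.jpg", "wb") as f:
    f.write(resp.content)
```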
@@ -143,9 +143,10 @@ Inference speeds will vary greatly depending on the GPU and the model used.
 
 With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
 
 | Name      | YOLOv9 Inference Time | YOLO-NAS Inference Time   |
-| -------- | --------------------- | ------------------------- |
+| --------- | --------------------- | ------------------------- |
 | AMD 780M  | ~ 14 ms               | 320: ~ 30 ms 640: ~ 60 ms |
+| AMD 8700G |                       | 320: ~ 20 ms 640: ~ 40 ms |
 
 ## Community Supported Detectors
@@ -213,6 +213,8 @@ class MqttClient(Communicator):  # type: ignore[misc]
             "motion_contour_area",
             "birdseye",
             "birdseye_mode",
+            "review_alerts",
+            "review_detections",
         ]
 
         for name in self.config.cameras.keys():
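With these two entries registered, review alerts and detections should be toggleable per camera in the same way as the other settings in this list. A hedged sketch, assuming the `frigate/<camera_name>/<setting>/set` topic layout and `ON`/`OFF` payloads Frigate uses for its existing camera toggles (broker address and camera name are placeholders):

```python
# Sketch: toggle review alerts/detections for one camera over MQTT.
# Assumes the frigate/<camera>/<setting>/set convention with ON/OFF payloads.
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("mqtt.local", 1883)  # assumption: your broker address

camera = "front_door"  # hypothetical camera name
client.publish(f"frigate/{camera}/review_alerts/set", "OFF")
client.publish(f"frigate/{camera}/review_detections/set", "ON")
client.disconnect()
```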
@@ -1552,6 +1552,12 @@ class LicensePlateProcessingMixin:
                 (base64.b64encode(encoded_img).decode("ASCII"), id, camera),
             )
 
+        if id not in self.detected_license_plates:
+            if camera not in self.camera_current_cars:
+                self.camera_current_cars[camera] = []
+
+            self.camera_current_cars[camera].append(id)
+
         self.detected_license_plates[id] = {
             "plate": top_plate,
             "char_confidences": top_char_confidences,
@@ -1564,7 +1570,7 @@ class LicensePlateProcessingMixin:
     def handle_request(self, topic, request_data) -> dict[str, any] | None:
         return
 
-    def expire_object(self, object_id: str):
+    def expire_object(self, object_id: str, camera: str):
         if object_id in self.detected_license_plates:
             self.detected_license_plates.pop(object_id)
@@ -50,10 +50,11 @@ class RealTimeProcessorApi(ABC):
         pass
 
     @abstractmethod
-    def expire_object(self, object_id: str) -> None:
+    def expire_object(self, object_id: str, camera: str) -> None:
         """Handle objects that are no longer detected.
 
         Args:
             object_id (str): id of object that is no longer detected.
+            camera (str): name of camera that object was detected on.
 
         Returns:
             None.
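Because `expire_object` is abstract, every processor in this diff has to adopt the two-argument form. A schematic fragment of what a conforming processor looks like (the class name and `active_objects` attribute are invented for illustration, and the other required overrides are stubbed out):

```python
# Schematic fragment: a processor adapting to the new expire_object signature.
# ExampleProcessor and active_objects are illustrative names, not real code.
class ExampleProcessor(RealTimeProcessorApi):
    def __init__(self, config, metrics):
        super().__init__(config, metrics)
        self.active_objects: dict[str, list[str]] = {}  # camera -> object ids

    def process_frame(self, obj_data, frame):
        pass  # real processors run detection/recognition here

    def handle_request(self, topic, request_data):
        return None

    def expire_object(self, object_id: str, camera: str) -> None:
        # Drop the object from this camera's bookkeeping, mirroring the
        # per-camera cleanup the face and LPR processors add below.
        if object_id in self.active_objects.get(camera, []):
            self.active_objects[camera].remove(object_id)
```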
@@ -152,6 +152,6 @@ class BirdRealTimeProcessor(RealTimeProcessorApi):
     def handle_request(self, topic, request_data):
         return None
 
-    def expire_object(self, object_id):
+    def expire_object(self, object_id, camera):
         if object_id in self.detected_birds:
             self.detected_birds.pop(object_id)
@@ -54,6 +54,7 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
         self.face_detector: cv2.FaceDetectorYN = None
         self.requires_face_detection = "face" not in self.config.objects.all_objects
         self.person_face_history: dict[str, list[tuple[str, float, int]]] = {}
+        self.camera_current_people: dict[str, list[str]] = {}
         self.recognizer: FaceRecognizer | None = None
         self.faces_per_second = EventsPerSecond()
         self.inference_speed = InferenceSpeed(self.metrics.face_rec_speed)
@@ -282,9 +283,13 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
         if id not in self.person_face_history:
             self.person_face_history[id] = []
 
+        if camera not in self.camera_current_people:
+            self.camera_current_people[camera] = []
+
         self.person_face_history[id].append(
             (sub_label, score, face_frame.shape[0] * face_frame.shape[1])
         )
+        self.camera_current_people[camera].append(id)
         (weighted_sub_label, weighted_score) = self.weighted_average(
             self.person_face_history[id]
         )
@@ -420,10 +425,25 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
             )
             shutil.move(current_file, new_file)
 
-    def expire_object(self, object_id: str):
+    def expire_object(self, object_id: str, camera: str):
         if object_id in self.person_face_history:
             self.person_face_history.pop(object_id)
 
+        if object_id in self.camera_current_people.get(camera, []):
+            self.camera_current_people[camera].remove(object_id)
+
+            if len(self.camera_current_people[camera]) == 0:
+                self.requestor.send_data(
+                    "tracked_object_update",
+                    json.dumps(
+                        {
+                            "type": TrackedObjectUpdateTypesEnum.face,
+                            "name": None,
+                            "camera": camera,
+                        }
+                    ),
+                )
+
     def weighted_average(
         self, results_list: list[tuple[str, float, int]], max_weight: int = 4000
     ):
@@ -1,5 +1,6 @@
 """Handle processing images for face detection and recognition."""
 
+import json
 import logging
 
 import numpy as np
@@ -13,6 +14,7 @@ from frigate.data_processing.common.license_plate.mixin import (
 from frigate.data_processing.common.license_plate.model import (
     LicensePlateModelRunner,
 )
+from frigate.types import TrackedObjectUpdateTypesEnum
 
 from ..types import DataProcessorMetrics
 from .api import RealTimeProcessorApi
@@ -36,6 +38,7 @@ class LicensePlateRealTimeProcessor(LicensePlateProcessingMixin, RealTimeProcess
         self.lpr_config = config.lpr
         self.config = config
         self.sub_label_publisher = sub_label_publisher
+        self.camera_current_cars: dict[str, list[str]] = {}
         super().__init__(config, metrics)
 
     def process_frame(
@@ -50,6 +53,22 @@ class LicensePlateRealTimeProcessor(LicensePlateProcessingMixin, RealTimeProcess
     def handle_request(self, topic, request_data) -> dict[str, any] | None:
         return
 
-    def expire_object(self, object_id: str):
+    def expire_object(self, object_id: str, camera: str):
         if object_id in self.detected_license_plates:
             self.detected_license_plates.pop(object_id)
+
+        if object_id in self.camera_current_cars.get(camera, []):
+            self.camera_current_cars[camera].remove(object_id)
+
+            if len(self.camera_current_cars[camera]) == 0:
+                self.requestor.send_data(
+                    "tracked_object_update",
+                    json.dumps(
+                        {
+                            "type": TrackedObjectUpdateTypesEnum.lpr,
+                            "name": None,
+                            "plate": None,
+                            "camera": camera,
+                        }
+                    ),
+                )
@@ -359,7 +359,7 @@ class EmbeddingMaintainer(threading.Thread):
 
         # expire in realtime processors
         for processor in self.realtime_processors:
-            processor.expire_object(event_id)
+            processor.expire_object(event_id, camera)
 
         if updated_db:
             try:
@@ -60,6 +60,10 @@
     "12hour": "MMM d, h:mm aaa",
     "24hour": "MMM d, HH:mm"
   },
+  "formattedTimestampMonthDayYear": {
+    "12hour": "MMM d, yyyy",
+    "24hour": "MMM d, yyyy"
+  },
   "formattedTimestampMonthDayYearHourMinute": {
     "12hour": "MMM d yyyy, h:mm aaa",
     "24hour": "MMM d yyyy, HH:mm"
@@ -98,6 +98,10 @@
       "title": "Confirm Delete",
       "desc": {
         "selected": "Are you sure you want to delete all recorded video associated with this review item?<br /><br />Hold the <em>Shift</em> key to bypass this dialog in the future."
+      },
+      "toast": {
+        "success": "Video footage associated with the selected review items has been deleted successfully.",
+        "error": "Failed to delete: {{error}}"
       }
     },
     "button": {
@@ -68,6 +68,7 @@
       "dropInstructions": "Drag and drop an image here, or click to select",
       "maxSize": "Max size: {{size}}MB"
     },
+    "nofaces": "No faces available",
     "readTheDocs": "Read the documentation",
     "trainFaceAs": "Train Face as:",
     "trainFace": "Train Face",
@@ -268,7 +268,8 @@
     "allObjects": "All Objects",
     "speedEstimation": {
       "title": "Speed Estimation",
-      "desc": "Enable speed estimation for objects in this zone. The zone must have exactly 4 points."
+      "desc": "Enable speed estimation for objects in this zone. The zone must have exactly 4 points.",
+      "docs": "Read the documentation"
     },
     "speedThreshold": {
       "title": "Speed Threshold ({{unit}})",
@@ -17,6 +17,7 @@ import {
 } from "../ui/alert-dialog";
 import useKeyboardListener from "@/hooks/use-keyboard-listener";
 import { Trans, useTranslation } from "react-i18next";
+import { toast } from "sonner";
 
 type ReviewActionGroupProps = {
   selectedReviews: string[];
@@ -41,11 +42,33 @@ export default function ReviewActionGroup({
     pullLatestData();
   }, [selectedReviews, setSelectedReviews, pullLatestData]);
 
-  const onDelete = useCallback(async () => {
-    await axios.post(`reviews/delete`, { ids: selectedReviews });
-    setSelectedReviews([]);
-    pullLatestData();
-  }, [selectedReviews, setSelectedReviews, pullLatestData]);
+  const onDelete = useCallback(() => {
+    axios
+      .post(`reviews/delete`, { ids: selectedReviews })
+      .then((resp) => {
+        if (resp.status === 200) {
+          toast.success(t("recording.confirmDelete.toast.success"), {
+            position: "top-center",
+          });
+          setSelectedReviews([]);
+          pullLatestData();
+        }
+      })
+      .catch((error) => {
+        const errorMessage =
+          error.response?.data?.message ||
+          error.response?.data?.detail ||
+          "Unknown error";
+        toast.error(
+          t("recording.confirmDelete.toast.error", {
+            error: errorMessage,
+          }),
+          {
+            position: "top-center",
+          },
+        );
+      });
+  }, [selectedReviews, setSelectedReviews, pullLatestData, t]);
 
   const [deleteDialogOpen, setDeleteDialogOpen] = useState(false);
   const [bypassDialog, setBypassDialog] = useState(false);
@@ -30,6 +30,8 @@ import { flattenPoints, interpolatePoints } from "@/utils/canvasUtil";
 import ActivityIndicator from "../indicators/activity-indicator";
 import { getAttributeLabels } from "@/utils/iconUtil";
 import { Trans, useTranslation } from "react-i18next";
+import { Link } from "react-router-dom";
+import { LuExternalLink } from "react-icons/lu";
 
 type ZoneEditPaneProps = {
   polygons?: Polygon[];
@@ -669,6 +671,17 @@ export default function ZoneEditPane({
           </div>
           <FormDescription>
             {t("masksAndZones.zones.speedEstimation.desc")}
+            <div className="mt-2 flex items-center text-primary">
+              <Link
+                to="https://docs.frigate.video/configuration/zones#speed-estimation"
+                target="_blank"
+                rel="noopener noreferrer"
+                className="inline"
+              >
+                {t("masksAndZones.zones.speedEstimation.docs")}
+                <LuExternalLink className="ml-2 inline-flex size-3" />
+              </Link>
+            </div>
           </FormDescription>
           <FormMessage />
         </FormItem>
@@ -11,4 +11,5 @@ export const supportedLanguageKeys = [
   "zh-CN",
   "yue-Hant",
   "ru",
+  "uk",
 ];
@@ -390,10 +390,10 @@ export default function FaceLibrary() {
         </div>
       )}
     </div>
-    {pageToggle && faceImages.length === 0 && pageToggle !== "train" ? (
+    {pageToggle && faceImages?.length === 0 && pageToggle !== "train" ? (
       <div className="absolute left-1/2 top-1/2 flex -translate-x-1/2 -translate-y-1/2 flex-col items-center justify-center text-center">
         <LuFolderCheck className="size-16" />
-        No faces available
+        {t("nofaces")}
       </div>
     ) : (
       pageToggle &&
@@ -819,7 +819,7 @@ function Timeline({
       className={`${
         isDesktop
           ? `${timelineType == "timeline" ? "w-[100px]" : "w-60"} no-scrollbar overflow-y-auto`
-          : "overflow-hidden portrait:flex-grow landscape:w-[20%]"
+          : `overflow-hidden portrait:flex-grow ${timelineType == "timeline" ? "landscape:w-[100px]" : "landscape:w-[175px]"} `
       } relative`}
     >
       <div className="pointer-events-none absolute inset-x-0 top-0 z-20 h-[30px] w-full bg-gradient-to-b from-secondary to-transparent"></div>
@@ -855,7 +855,7 @@ function Timeline({
     <div
       className={cn(
         "scrollbar-container grid h-auto grid-cols-1 gap-4 overflow-auto p-4",
-        isMobile && "sm:grid-cols-2",
+        isMobile && "sm:portrait:grid-cols-2",
       )}
     >
       {mainCameraReviewItems.length === 0 ? (
@@ -71,7 +71,7 @@ export default function StorageMetrics({
 
   const timeFormat = config?.ui.time_format === "24hour" ? "24hour" : "12hour";
   const format = useMemo(() => {
-    return t(`time.formattedTimestampMonthDayYearHourMinute.${timeFormat}`, {
+    return t(`time.formattedTimestampMonthDayYear.${timeFormat}`, {
       ns: "common",
     });
   }, [t, timeFormat]);