* Add option to not trim clip

* Improve API

* Update snapshot for new best objects

* Fix missing strings

* Convert to separate key

* Always include bounding box on snapshots

* Improve autotracking relative zooming time calculation

* Update proxy docs to note the need for comma-separated header roles

* Add count translation

* Fix tracked object lifecycle i18n

* Update speed estimation docs

* Improve clarity

* Re-initialize onvif information when toggling camera on live view

* Move time ago to card info and add face area

* Clarify face recognition docs

* Increase minimum face recognition area

* Use clipFrom in the vod module endpoint to start at the correct time

* Cleanup media api

* Don't change duration

* Use search detail dialog for face library

* Move to segment based

* Cleanup

* Add back duration modification

* Clean up docs

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
Nicolas Mowen 2025-05-14 16:44:06 -06:00 committed by GitHub
parent 1fa7ce5486
commit d3d05fa397
17 changed files with 121 additions and 121 deletions

View File

@@ -97,7 +97,7 @@ python3 -c 'import secrets; print(secrets.token_hex(64))'
 ### Header mapping
-If you have disabled Frigate's authentication and your proxy supports passing a header with authenticated usernames and/or roles, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` and `X-Forwarded-Role` values. Header names are not case sensitive.
+If you have disabled Frigate's authentication and your proxy supports passing a header with authenticated usernames and/or roles, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` and `X-Forwarded-Role` values. Header names are not case sensitive. Multiple values can be included in the role header, but they must be comma-separated.
 ```yaml
 proxy:
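
For reference, a full `header_map` section might look like the sketch below; the `user`/`role` keys and the `x-forwarded-*` header names are illustrative and should match whatever your proxy actually sends:

```yaml
proxy:
  header_map:
    user: x-forwarded-user
    role: x-forwarded-role
    # a single role header value may carry multiple comma-separated roles,
    # e.g. "admin,viewer"
```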

View File

@@ -105,19 +105,25 @@ When choosing images to include in the face training set it is recommended to al
 :::
+### Understanding the Train Tab
+The Train tab in the face library displays recent face recognition attempts. Detected face images are grouped according to the person they were identified as potentially matching.
+Each face image is labeled with a name (or `Unknown`) along with the confidence score of the recognition attempt. While each image can be used to train the system for a specific person, not all images are suitable for training.
+Refer to the guidelines below for best practices on selecting images for training.
 ### Step 1 - Building a Strong Foundation
 When first enabling face recognition it is important to build a foundation of strong images. It is recommended to start by uploading 1-5 photos containing just this person's face. It is important that the person's face in the photo is front-facing and not turned, this will ensure a good starting point.
-Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are front-facing. Ignore images from cameras that recognize faces from an angle.
-Aim to strike a balance between the quality of images while also having a range of conditions (day / night, different weather conditions, different times of day, etc.) in order to have diversity in the images used for each person and not have over-fitting.
-Once a person starts to be consistently recognized correctly on images that are front-facing, it is time to move on to the next step.
+Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are front-facing. Ignore images from cameras that recognize faces from an angle. Aim to strike a balance between the quality of images while also having a range of conditions (day / night, different weather conditions, different times of day, etc.) in order to have diversity in the images used for each person and not have over-fitting.
+You do not want to train images that are 90%+ as these are already being confidently recognized. In this step the goal is to train on clear, lower scoring front-facing images until the majority of front-facing images for a given person are consistently recognized correctly. Then it is time to move on to step 2.
 ### Step 2 - Expanding The Dataset
-Once front-facing images are performing well, start choosing slightly off-angle images to include for training. It is important to still choose images where enough face detail is visible to recognize someone.
+Once front-facing images are performing well, start choosing slightly off-angle images to include for training. It is important to still choose images where enough face detail is visible to recognize someone, and you still only want to train on images that score lower.
 ## FAQ

View File

@@ -165,7 +165,7 @@ These speed values are output as a number in miles per hour (mph) or kilometers
 #### Best practices and caveats
-- Speed estimation works best with a straight road or path when your object travels in a straight line across that path. Avoid creating your zone near intersections or anywhere that objects would make a turn. If the bounding box changes shape (either because the object made a turn or became partially obscured, for example), speed estimation will not be accurate.
+- Speed estimation works best with a straight road or path when your object travels in a straight line across that path. Avoid creating your zone near intersections or anywhere that objects would make a turn. A large zone can be used, but it's not required and may even cause issues - if the object's bounding box changes shape (such as when it turns or becomes partially hidden), the speed estimate will be inaccurate. Generally it's best to make your zone large enough to capture a few frames, but small enough so that the bounding box doesn't change size as it travels through the zone.
 - Create a zone where the bottom center of your object's bounding box travels directly through it and does not become obscured at any time. See the photo example above.
 - Depending on the size and location of your zone, you may want to decrease the zone's `inertia` value from the default of 3.
 - The more accurate your real-world dimensions can be measured, the more accurate speed estimation will be. However, due to the way Frigate's tracking algorithm works, you may need to tweak the real-world distance values so that estimated speeds better match real-world speeds.
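
For illustration only, a speed-estimation zone along these lines might look like the hedged sketch below; the camera name, zone name, coordinates, and distance values are made up, and `distances` is assumed to list the real-world lengths of the zone's four edges in your configured unit:

```yaml
cameras:
  street_cam: # hypothetical camera
    zones:
      speed_zone: # hypothetical zone drawn along a straight stretch of road
        coordinates: 0.05,0.45,0.35,0.30,0.60,0.38,0.20,0.60
        distances: 10,12,10,12 # assumed: real-world edge lengths (meters or feet)
        inertia: 2 # lowered from the default of 3 for a small zone
```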

View File

@@ -593,10 +593,12 @@ def recording_clip(
     clip: Recordings
     for clip in recordings:
         file.write(f"file '{clip.path}'\n")
         # if this is the starting clip, add an inpoint
         if clip.start_time < start_ts:
             file.write(f"inpoint {int(start_ts - clip.start_time)}\n")
-        # if this is the ending clip, add an outpoint
+        # if this is the ending clip and end trim is enabled, add an outpoint
         if clip.end_time > end_ts:
             file.write(f"outpoint {int(end_ts - clip.start_time)}\n")
@@ -641,7 +643,12 @@ def recording_clip(
 @router.get("/vod/{camera_name}/start/{start_ts}/end/{end_ts}")
 def vod_ts(camera_name: str, start_ts: float, end_ts: float):
     recordings = (
-        Recordings.select(Recordings.path, Recordings.duration, Recordings.end_time)
+        Recordings.select(
+            Recordings.path,
+            Recordings.duration,
+            Recordings.end_time,
+            Recordings.start_time,
+        )
         .where(
             Recordings.start_time.between(start_ts, end_ts)
             | Recordings.end_time.between(start_ts, end_ts)
@@ -661,13 +668,18 @@ def vod_ts(camera_name: str, start_ts: float, end_ts: float):
         clip = {"type": "source", "path": recording.path}
         duration = int(recording.duration * 1000)
-        # Determine if we need to end the last clip early
+        # adjust start offset if start_ts is after recording.start_time
+        if start_ts > recording.start_time:
+            inpoint = int((start_ts - recording.start_time) * 1000)
+            clip["clipFrom"] = inpoint
+            duration -= inpoint
+        # adjust end if recording.end_time is after end_ts
         if recording.end_time > end_ts:
             duration -= int((recording.end_time - end_ts) * 1000)
-        if duration == 0:
-            # this means the segment starts right at the end of the requested time range
-            # and it does not need to be included
+        if duration <= 0:
+            # skip if the clip has no valid duration
             continue
         if 0 < duration < max_duration_ms:
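
To make the millisecond arithmetic above concrete, here is a tiny standalone sketch with invented numbers:

```python
# A recording segment from t=100s to t=110s, with a requested range of 103.5s-108s.
recording_start, recording_end, recording_duration = 100.0, 110.0, 10.0
start_ts, end_ts = 103.5, 108.0

duration = int(recording_duration * 1000)  # 10000 ms
clip_from = 0
if start_ts > recording_start:
    clip_from = int((start_ts - recording_start) * 1000)  # 3500 ms into the segment
    duration -= clip_from                                 # 6500 ms remaining
if recording_end > end_ts:
    duration -= int((recording_end - end_ts) * 1000)      # 6500 - 2000 = 4500 ms

# clipFrom tells the vod module to start playback 3.5s into this segment,
# and segments that end up with duration <= 0 are skipped entirely.
print(clip_from, duration)  # 3500 4500
```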

View File

@@ -282,9 +282,13 @@ class CameraState:
                 }
                 new_obj.thumbnail_data = thumbnail_data
                 tracked_objects[id].thumbnail_data = thumbnail_data
-                self.best_objects[new_obj.obj_data["label"]] = new_obj
+                object_type = new_obj.obj_data["label"]
+                self.best_objects[object_type] = new_obj
                 # call event handlers
+                for c in self.callbacks["snapshot"]:
+                    c(self.name, self.best_objects[object_type], frame_name)
                 for c in self.callbacks["start"]:
                     c(self.name, new_obj, frame_name)

View File

@@ -78,7 +78,7 @@ class FaceRecognitionConfig(FrigateBaseModel):
         le=1.0,
     )
     min_area: int = Field(
-        default=500, title="Min area of face box to consider running face recognition."
+        default=750, title="Min area of face box to consider running face recognition."
     )
     save_attempts: int = Field(
         default=100, ge=0, title="Number of face attempts to save in the train tab."
@@ -91,7 +91,7 @@ class FaceRecognitionConfig(FrigateBaseModel):
 class CameraFaceRecognitionConfig(FrigateBaseModel):
     enabled: bool = Field(default=False, title="Enable face recognition.")
     min_area: int = Field(
-        default=500, title="Min area of face box to consider running face recognition."
+        default=750, title="Min area of face box to consider running face recognition."
     )
     model_config = ConfigDict(extra="forbid", protected_namespaces=())
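
For anyone tuning this after the default bump from 500 to 750, an override might look like the hedged sketch below; the camera name is made up, and the camera-level key is assumed to mirror `CameraFaceRecognitionConfig` above:

```yaml
face_recognition:
  enabled: true
  min_area: 750 # minimum face bounding-box area (px) before recognition is attempted

cameras:
  front_door: # hypothetical camera
    face_recognition:
      min_area: 1200 # require a larger face on this close-range camera
```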

View File

@@ -1171,7 +1171,20 @@ class PtzAutoTracker:
         zoom_predicted_movement_time = 0
         if np.any(average_velocity):
-            zoom_predicted_movement_time = abs(zoom) * self.zoom_time[camera]
+            # Calculate the intended change in zoom level
+            zoom_change = (1 - abs(zoom)) * (1 if zoom >= 0 else -1)
+            # Calculate new zoom level and clamp to [0, 1]
+            new_zoom = max(
+                0, min(1, self.ptz_metrics[camera].zoom_level.value + zoom_change)
+            )
+            # Calculate the actual zoom distance
+            zoom_distance = abs(
+                new_zoom - self.ptz_metrics[camera].zoom_level.value
+            )
+            zoom_predicted_movement_time = zoom_distance * self.zoom_time[camera]
         zoom_predicted_box = (
             predicted_box
@@ -1188,7 +1201,7 @@ class PtzAutoTracker:
         tilt = (0.5 - (centroid_y / camera_height)) * 2
         logger.debug(
-            f"{camera}: Zoom amount: {zoom}, zoom predicted time: {zoom_predicted_movement_time}, zoom predicted box: {tuple(zoom_predicted_box)}"
+            f"{camera}: Zoom amount: {zoom}, zoom distance: {zoom_distance}, zoom predicted time: {zoom_predicted_movement_time}, zoom predicted box: {tuple(zoom_predicted_box)}"
         )
         self._enqueue_move(camera, obj.obj_data["frame_time"], pan, tilt, zoom)
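
A quick worked example of the new estimate, using invented values and assuming `zoom_time` is the calibrated time for a full zoom sweep:

```python
# Current zoom level 0.4, requested relative zoom of 0.8, 5s full-sweep zoom time.
zoom, current_level, zoom_time = 0.8, 0.4, 5.0

zoom_change = (1 - abs(zoom)) * (1 if zoom >= 0 else -1)  # 0.2
new_zoom = max(0, min(1, current_level + zoom_change))    # 0.6
zoom_distance = abs(new_zoom - current_level)             # 0.2

print(zoom_distance * zoom_time)  # 1.0s, vs. abs(zoom) * zoom_time = 4.0s previously
```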

View File

@@ -17,6 +17,7 @@
     }
   },
   "dates": {
+    "selectPreset": "Select a Preset…",
     "all": {
       "title": "All Dates",
       "short": "Dates"

View File

@@ -43,6 +43,8 @@
     "adjustAnnotationSettings": "Adjust annotation settings",
     "scrollViewTips": "Scroll to view the significant moments of this object's lifecycle.",
     "autoTrackingTips": "Bounding box positions will be inaccurate for autotracking cameras.",
+    "count": "{{first}} of {{second}}",
+    "trackedPoint": "Tracked Point",
     "lifecycleItemDesc": {
       "visible": "{{label}} detected",
       "entered_zone": "{{label}} entered {{zones}}",

View File

@@ -525,7 +525,10 @@ export default function ObjectLifecycle({
           {t("objectLifecycle.scrollViewTips")}
         </div>
         <div className="min-w-20 text-right text-sm text-muted-foreground">
-          {current + 1} of {eventSequence.length}
+          {t("objectLifecycle.count", {
+            first: current + 1,
+            second: eventSequence.length,
+          })}
         </div>
       </div>
       {config?.cameras[event.camera]?.onvif.autotracking.enabled_in_config && (

View File

@@ -7,6 +7,7 @@ import {
 } from "@/components/ui/tooltip";
 import { TooltipPortal } from "@radix-ui/react-tooltip";
 import { getLifecycleItemDescription } from "@/utils/lifecycleUtil";
+import { useTranslation } from "react-i18next";
 type ObjectPathProps = {
   positions?: Position[];
@@ -40,6 +41,7 @@ export function ObjectPath({
   onPointClick,
   visible = true,
 }: ObjectPathProps) {
+  const { t } = useTranslation(["views/explore"]);
   const getAbsolutePositions = useCallback(() => {
     if (!imgRef.current || !positions) return [];
     const imgRect = imgRef.current.getBoundingClientRect();
@@ -103,7 +105,7 @@ export function ObjectPath({
           <TooltipContent side="top" className="smart-capitalize">
             {pos.lifecycle_item
               ? getLifecycleItemDescription(pos.lifecycle_item)
-              : "Tracked point"}
+              : t("objectLifecycle.trackedPoint")}
           </TooltipContent>
         </TooltipPortal>
       </Tooltip>

View File

@@ -864,16 +864,14 @@ function ObjectDetailsTab({
         className={cn("flex w-full flex-row gap-2", isMobile && "flex-col")}
       >
         {config?.semantic_search.enabled &&
+          setSimilarity != undefined &&
           search.data.type == "object" && (
             <Button
               className="w-full"
               aria-label={t("itemMenu.findSimilar.aria")}
               onClick={() => {
                 setSearch(undefined);
-                if (setSimilarity) {
                 setSimilarity();
-                }
               }}
             >
               <div className="flex gap-1">
@@ -1101,7 +1099,7 @@ export function ObjectSnapshotTab({
               <Tooltip>
                 <TooltipTrigger asChild>
                   <a
-                    href={`${baseUrl}api/events/${search?.id}/snapshot.jpg`}
+                    href={`${baseUrl}api/events/${search?.id}/snapshot.jpg?bbox=1`}
                     download={`${search?.camera}_${search?.label}.jpg`}
                   >
                     <Chip className="cursor-pointer rounded-md bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500">
@@ -1270,7 +1268,7 @@ export function VideoTab({ search }: VideoTabProps) {
             <TooltipTrigger asChild>
               <a
                 download
-                href={`${baseUrl}api/${search.camera}/start/${search.start_time}/end/${endTime}/clip.mp4`}
+                href={`${baseUrl}api/${search.camera}/start/${search.start_time}/end/${endTime}/clip.mp4?trim=end`}
               >
                 <Chip className="cursor-pointer rounded-md bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500">
                   <FaDownload className="size-4 text-white" />

View File

@@ -91,7 +91,7 @@ export function PlatformAwareSheet({
           className="mx-2"
           onClose={() => onOpenChange(false)}
         >
-          <MobilePageTitle>More Filters</MobilePageTitle>
+          <MobilePageTitle>{title}</MobilePageTitle>
         </MobilePageHeader>
         <div className={contentClassName}>{content}</div>
       </MobilePageContent>

View File

@@ -224,6 +224,7 @@ export default function SearchFilterDialog({
   return (
     <PlatformAwareSheet
       trigger={trigger}
+      title={t("more")}
       content={content}
       contentClassName={cn(
         "w-auto lg:min-w-[275px] scrollbar-container h-full overflow-auto px-4",

View File

@@ -369,7 +369,11 @@ export function DateRangePicker({
         }}
       >
         <SelectTrigger className="mx-auto mb-2 w-[180px]">
-          <SelectValue placeholder="Select..." />
+          <SelectValue
+            placeholder={t("dates.selectPreset", {
+              ns: "components/filter",
+            })}
+          />
         </SelectTrigger>
         <SelectContent>
           {PRESETS.map((preset) => (

View File

@@ -31,11 +31,6 @@ import {
   DropdownMenuTrigger,
   DropdownMenuSeparator,
 } from "@/components/ui/dropdown-menu";
-import {
-  Popover,
-  PopoverContent,
-  PopoverTrigger,
-} from "@/components/ui/popover";
 import { Toaster } from "@/components/ui/sonner";
 import {
   Tooltip,
@@ -43,7 +38,6 @@ import {
   TooltipTrigger,
 } from "@/components/ui/tooltip";
 import useContextMenu from "@/hooks/use-contextmenu";
-import { useFormattedTimestamp } from "@/hooks/use-date-utils";
 import useKeyboardListener from "@/hooks/use-keyboard-listener";
 import useOptimisticState from "@/hooks/use-optimistic-state";
 import { cn } from "@/lib/utils";
@@ -58,7 +52,6 @@ import { Trans, useTranslation } from "react-i18next";
 import {
   LuFolderCheck,
   LuImagePlus,
-  LuInfo,
   LuPencil,
   LuRefreshCw,
   LuScanFace,
@@ -68,6 +61,10 @@ import {
 import { useNavigate } from "react-router-dom";
 import { toast } from "sonner";
 import useSWR from "swr";
+import SearchDetailDialog, {
+  SearchTab,
+} from "@/components/overlay/detail/SearchDetailDialog";
+import { SearchResult } from "@/types/search";
 export default function FaceLibrary() {
   const { t } = useTranslation(["views/faceLibrary"]);
@@ -663,18 +660,7 @@ function TrainingGrid({
   // selection
   const [selectedEvent, setSelectedEvent] = useState<Event>();
-  const formattedDate = useFormattedTimestamp(
-    selectedEvent?.start_time ?? 0,
-    config?.ui.time_format == "24hour"
-      ? t("time.formattedTimestampMonthDayYearHourMinute.24hour", {
-          ns: "common",
-        })
-      : t("time.formattedTimestampMonthDayYearHourMinute.12hour", {
-          ns: "common",
-        }),
-    config?.ui.timezone,
-  );
+  const [dialogTab, setDialogTab] = useState<SearchTab>("details");
   if (attemptImages.length == 0) {
     return (
@@ -687,66 +673,16 @@ function TrainingGrid({
   return (
     <>
-      <Dialog
-        open={selectedEvent != undefined}
-        onOpenChange={(open) => {
-          if (!open) {
-            setSelectedEvent(undefined);
-          }
-        }}
-      >
-        <DialogContent
-          className={cn(
-            "",
-            selectedEvent?.has_snapshot && isDesktop && "max-w-7xl",
-          )}
-        >
-          <DialogHeader>
-            <DialogTitle>{t("details.face")}</DialogTitle>
-            <DialogDescription>{t("details.faceDesc")}</DialogDescription>
-          </DialogHeader>
-          <div className="flex flex-col gap-1.5">
-            <div className="text-sm text-primary/40">{t("details.person")}</div>
-            <div className="text-sm smart-capitalize">
-              {selectedEvent?.sub_label ?? t("details.unknown")}
-            </div>
-          </div>
-          {selectedEvent?.data.sub_label_score && (
-            <div className="flex flex-col gap-1.5">
-              <div className="text-sm text-primary/40">
-                <div className="flex flex-row items-center gap-1">
-                  {t("details.subLabelScore")}
-                  <Popover>
-                    <PopoverTrigger asChild>
-                      <div className="cursor-pointer p-0">
-                        <LuInfo className="size-4" />
-                        <span className="sr-only">Info</span>
-                      </div>
-                    </PopoverTrigger>
-                    <PopoverContent className="w-80">
-                      {t("details.scoreInfo")}
-                    </PopoverContent>
-                  </Popover>
-                </div>
-              </div>
-              <div className="text-sm smart-capitalize">
-                {Math.round((selectedEvent?.data?.sub_label_score || 0) * 100)}%
-              </div>
-            </div>
-          )}
-          <div className="flex flex-col gap-1.5">
-            <div className="text-sm text-primary/40">
-              {t("details.timestamp")}
-            </div>
-            <div className="text-sm">{formattedDate}</div>
-          </div>
-          <img
-            className="mx-auto max-h-[60dvh] object-contain"
-            loading="lazy"
-            src={`${baseUrl}api/events/${selectedEvent?.id}/${selectedEvent?.has_snapshot ? "snapshot.jpg" : "thumbnail.jpg"}`}
-          />
-        </DialogContent>
-      </Dialog>
+      <SearchDetailDialog
+        search={
+          selectedEvent ? (selectedEvent as unknown as SearchResult) : undefined
+        }
+        page={dialogTab}
+        setSimilarity={undefined}
+        setSearchPage={setDialogTab}
+        setSearch={(search) => setSelectedEvent(search as unknown as Event)}
+        setInputFocused={() => {}}
+      />
       <div className="scrollbar-container flex flex-wrap gap-2 overflow-y-scroll p-1">
         {Object.entries(faceGroups).map(([key, group]) => {
@@ -853,12 +789,19 @@ function FaceAttemptGroup({
       }}
     >
       <div className="flex flex-row justify-between">
+        <div className="flex flex-col gap-1">
           <div className="select-none smart-capitalize">
             Person
             {event?.sub_label
               ? `: ${event.sub_label} (${Math.round((event.data.sub_label_score || 0) * 100)}%)`
               : ": " + t("details.unknown")}
           </div>
+          <TimeAgo
+            className="text-sm text-secondary-foreground"
+            time={group[0].timestamp * 1000}
+            dense
+          />
+        </div>
         {event && (
           <Tooltip>
             <TooltipTrigger>
@@ -950,6 +893,14 @@ function FaceAttempt({
     onClick(data, true);
   });
+  const imageArea = useMemo(() => {
+    if (!imgRef.current) {
+      return undefined;
+    }
+    return imgRef.current.naturalWidth * imgRef.current.naturalHeight;
+  }, [imgRef]);
   // api calls
   const onTrainAttempt = useCallback(
@@ -1021,13 +972,11 @@ function FaceAttempt({
             onClick(data, e.metaKey || e.ctrlKey);
           }}
         />
+        {imageArea != undefined && (
           <div className="absolute bottom-1 right-1 z-10 rounded-lg bg-black/50 px-2 py-1 text-xs text-white">
-            <TimeAgo
-              className="text-white"
-              time={data.timestamp * 1000}
-              dense
-            />
+            {imageArea}px
           </div>
+        )}
       </div>
       <div className="select-none p-2">
         <div className="flex w-full flex-row items-center justify-between gap-2">

View File

@@ -631,6 +631,7 @@ export default function LiveCameraView({
           <div className="flex flex-col items-center justify-center">
             <PtzControlPanel
               camera={camera.name}
+              enabled={cameraEnabled}
               clickOverlay={clickOverlay}
               setClickOverlay={setClickOverlay}
             />
@@ -689,15 +690,19 @@ function TooltipButton({
 function PtzControlPanel({
   camera,
+  enabled,
   clickOverlay,
   setClickOverlay,
 }: {
   camera: string;
+  enabled: boolean;
   clickOverlay: boolean;
   setClickOverlay: React.Dispatch<React.SetStateAction<boolean>>;
 }) {
   const { t } = useTranslation(["views/live"]);
-  const { data: ptz } = useSWR<CameraPtzInfo>(`${camera}/ptz/info`);
+  const { data: ptz } = useSWR<CameraPtzInfo>(
+    enabled ? `${camera}/ptz/info` : null,
+  );
   const { send: sendPtz } = usePtzCommand(camera);