mirror of https://github.com/blakeblackshear/frigate.git
Fixes (#18500)
* fix i18n keys
* hide disable from context menu for viewers
* Fix auto live check for default dashboard and camera groups

  Disabling the Automatic Live View switch in Settings should prevent streaming from occurring. Settings overridden in a camera group take precedence over the global setting. The check here incorrectly always returned false instead of undefined.

* clarify hardware-accelerated enrichments
* clarify
* add note about detect stream to face rec docs
* add note about low-end Dahuas for autotracking
* Catch invalid face box / image
* Video tab tweaks

  With the changes in https://github.com/blakeblackshear/frigate/pull/18220, the video tab in the Tracked Object Details pane now correctly trims the in-browser HLS video. Because of keyframes and record/detect stream differences, we manually subtract a couple of seconds from the event start_time so the first few frames aren't cut off from the video.

* Clarify
* Don't use MIGraphX by default
* Provide better support for running embeddings on GPU
* correctly join cameras
* Adjust blur confidence reduction

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
This commit is contained in:
parent af5a9e7634
commit dba9206898
@@ -63,6 +63,7 @@ COPY --from=rocm /opt/rocm-dist/ /
 FROM deps-prelim AS rocm-prelim-hsa-override0
 ENV HSA_ENABLE_SDMA=0
 ENV MIGRAPHX_ENABLE_NHWC=1
 ENV TF_ROCM_USE_IMMEDIATE_MODE=1

 COPY --from=rocm-dist / /
@@ -98,7 +98,7 @@ This list of working and non-working PTZ cameras is based on user feedback.
 | Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
 | Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |
 | Ctronics PTZ | ✅ | ❌ | |
-| Dahua | ✅ | ✅ | |
+| Dahua | ✅ | ✅ | Some low-end Dahuas (lite series, among others) have been reported to not support autotracking |
 | Dahua DH-SD2A500HB | ✅ | ❌ | |
 | Foscam R5 | ✅ | ❌ | |
 | Hanwha XNP-6550RH | ✅ | ❌ | |
@@ -45,6 +45,8 @@ face_recognition:
   enabled: true
 ```

+Like the other real-time processors in Frigate, face recognition runs on the camera stream defined by the `detect` role in your config. To ensure optimal performance, select a suitable resolution for this stream in your camera's firmware that fits your specific scene and requirements.
+
 ## Advanced Configuration

 Fine-tune face recognition with these optional parameters at the global level of your config. The only optional parameters that can be set at the camera level are `enabled` and `min_area`.
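To illustrate the global/camera split described in the docs change above, here is a minimal config sketch. It assumes the standard Frigate config layout; apart from `enabled`, `min_area`, and `blur_confidence_filter` (which appears in the recognizer code later in this diff), the names and values shown are placeholders rather than recommendations.

```yaml
# sketch of global vs. per-camera face recognition settings (values are placeholders)
face_recognition:
  enabled: true
  blur_confidence_filter: true # referenced by the recognizer code in this change

cameras:
  front_door: # hypothetical camera name
    face_recognition:
      # only `enabled` and `min_area` can be overridden at the camera level
      enabled: true
      min_area: 1000
```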
@@ -23,7 +23,7 @@ Object detection and enrichments (like Semantic Search, Face Recognition, and Li
 - Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
 - Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.

-Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image for enrichments and still use other dedicated hardware for object detection.
+Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image for enrichments and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is TensorRT for object detection and OpenVINO for enrichments.

 :::note

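As a usage sketch of the paragraph above: the image tag, device paths, and Compose layout below are assumptions for illustration, not taken from this diff. It shows the `-tensorrt` image handling enrichments on an Nvidia GPU while a Coral handles object detection.

```yaml
# docker-compose.yml sketch (tag and device mappings are illustrative)
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt # GPU-capable image used for enrichments
    devices:
      - /dev/apex_0:/dev/apex_0 # example Coral PCIe device used for object detection
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```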
@@ -189,7 +189,12 @@ Frigate provides a dialog in the Camera Group Edit pane with several options for

 :::note

-The default dashboard ("All Cameras") will always use Smart Streaming and the first entry set in your `streams` configuration, if defined. Use a camera group if you want to change any of these settings from the defaults.
+The default dashboard ("All Cameras") will always use:
+
+- Smart Streaming, unless you've disabled the global Automatic Live View in Settings.
+- The first entry set in your `streams` configuration, if defined.
+
+Use a camera group if you want to change any of these settings from the defaults.

 :::

@@ -104,4 +104,4 @@ Lightning threshold does not stop motion based recordings from being saved.

 :::

-Large changes in motion like PTZ moves and camera switches between Color and IR mode should result in no motion detection. This is done via the `lightning_threshold` configuration. It is defined as the percentage of the image used to detect lightning or other substantial changes where motion detection needs to recalibrate. Increasing this value will make motion detection more likely to consider lightning or IR mode changes as valid motion. Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching a doorbell camera.
+Large changes in motion like PTZ moves and camera switches between Color and IR mode should result in a pause in object detection. This is done via the `lightning_threshold` configuration. It is defined as the percentage of the image used to detect lightning or other substantial changes where motion detection needs to recalibrate. Increasing this value will make motion detection more likely to consider lightning or IR mode changes as valid motion. Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching a doorbell camera.
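For reference, a minimal config sketch of the `lightning_threshold` option discussed above; the value shown is illustrative, not a recommendation.

```yaml
motion:
  # fraction of the image that must change before motion detection
  # recalibrates (object detection pauses while it does)
  lightning_threshold: 0.8
```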
@@ -154,7 +154,25 @@ def train_face(request: Request, name: str, body: dict = None):
         x2 = x1 + int(face_box[2] * detect_config.width) - 4
         y2 = y1 + int(face_box[3] * detect_config.height) - 4
         face = snapshot[y1:y2, x1:x2]
-        cv2.imwrite(os.path.join(new_file_folder, new_name), face)
-        success = True
+
+        if face.size > 0:
+            try:
+                cv2.imwrite(os.path.join(new_file_folder, new_name), face)
+                success = True
+            except Exception:
+                pass
+
+    if not success:
+        return JSONResponse(
+            content=(
+                {
+                    "success": False,
+                    "message": "Invalid face box or no face exists",
+                }
+            ),
+            status_code=404,
+        )
+
     context: EmbeddingsContext = request.app.embeddings
     context.clear_face_classifier()
@@ -108,21 +108,21 @@ class FaceRecognizer(ABC):
             image, M, (output_width, output_height), flags=cv2.INTER_CUBIC
         )

-    def get_blur_factor(self, input: np.ndarray) -> float:
-        """Calculates the factor for the confidence based on the blur of the image."""
+    def get_blur_confidence_reduction(self, input: np.ndarray) -> tuple[float, float]:
+        """Calculates the reduction in confidence based on the blur of the image."""
         if not self.config.face_recognition.blur_confidence_filter:
             return 1.0

         variance = cv2.Laplacian(input, cv2.CV_64F).var()

-        if variance < 60: # image is very blurry
-            return 0.96
-        elif variance < 70: # image moderately blurry
-            return 0.98
-        elif variance < 80: # image is slightly blurry
-            return 0.99
+        if variance < 80: # image is very blurry
+            return variance, 0.05
+        elif variance < 100: # image moderately blurry
+            return variance, 0.03
+        elif variance < 150: # image is slightly blurry
+            return variance, 0.01
         else:
-            return 1.0
+            return variance, 0.0


     def similarity_to_confidence(
@@ -234,8 +234,8 @@ class FaceNetRecognizer(FaceRecognizer):
         # face recognition is best run on grayscale images

         # get blur factor before aligning face
-        blur_factor = self.get_blur_factor(face_image)
-        logger.debug(f"face detected with blurriness {blur_factor}")
+        variance, blur_reduction = self.get_blur_confidence_reduction(face_image)
+        logger.debug(f"face detected with blurriness {variance}")

         # align face and run recognition
         img = self.align_face(face_image, face_image.shape[1], face_image.shape[0])
@@ -258,7 +258,7 @@ class FaceNetRecognizer(FaceRecognizer):
                 score = confidence
                 label = name

-        return label, round(score * blur_factor, 2)
+        return label, round(score - blur_reduction, 2)


 class ArcFaceRecognizer(FaceRecognizer):
@@ -344,9 +344,9 @@ class ArcFaceRecognizer(FaceRecognizer):

         # face recognition is best run on grayscale images

-        # get blur factor before aligning face
-        blur_factor = self.get_blur_factor(face_image)
-        logger.debug(f"face detected with blurriness {blur_factor}")
+        # get blur reduction before aligning face
+        variance, blur_reduction = self.get_blur_confidence_reduction(face_image)
+        logger.debug(f"face detected with blurriness {variance}")

         # align face and run recognition
         img = self.align_face(face_image, face_image.shape[1], face_image.shape[0])
@@ -367,4 +367,4 @@ class ArcFaceRecognizer(FaceRecognizer):
                 score = confidence
                 label = name

-        return label, round(score * blur_factor, 2)
+        return label, round(score - blur_reduction, 2)
@@ -345,6 +345,13 @@ def get_ort_providers(
                     "device_type": device,
                 }
             )
+        elif provider == "MIGraphXExecutionProvider":
+            # MIGraphX uses more CPU than ROCM, while also being the same speed
+            if device == "MIGraphX":
+                providers.append(provider)
+                options.append({})
+            else:
+                continue
         elif provider == "CPUExecutionProvider":
             providers.append(provider)
             options.append(
@@ -46,6 +46,7 @@ import {
 } from "@/api/ws";
 import { useTranslation } from "react-i18next";
 import { useDateLocale } from "@/hooks/use-date-locale";
+import { useIsAdmin } from "@/hooks/use-is-admin";

 type LiveContextMenuProps = {
   className?: string;
@@ -90,6 +91,10 @@ export default function LiveContextMenu({
   const { t } = useTranslation("views/live");
   const [showSettings, setShowSettings] = useState(false);

+  // roles
+
+  const isAdmin = useIsAdmin();
+
   // camera enabled

   const { payload: enabledState, send: sendEnabled } = useEnabledState(camera);
@@ -301,17 +306,21 @@ export default function LiveContextMenu({
           </>
         )}
-        <ContextMenuSeparator />
-        <ContextMenuItem>
-          <div
-            className="flex w-full cursor-pointer items-center justify-start gap-2"
-            onClick={() => sendEnabled(isEnabled ? "OFF" : "ON")}
-          >
-            <div className="text-primary">
-              {isEnabled ? t("camera.disable") : t("camera.enable")}
-            </div>
-          </div>
-        </ContextMenuItem>
-        <ContextMenuSeparator />
+        {isAdmin && (
+          <>
+            <ContextMenuItem>
+              <div
+                className="flex w-full cursor-pointer items-center justify-start gap-2"
+                onClick={() => sendEnabled(isEnabled ? "OFF" : "ON")}
+              >
+                <div className="text-primary">
+                  {isEnabled ? t("camera.disable") : t("camera.enable")}
+                </div>
+              </div>
+            </ContextMenuItem>
+            <ContextMenuSeparator />
+          </>
+        )}
         <ContextMenuItem disabled={!isEnabled}>
           <div
             className="flex w-full cursor-pointer items-center justify-start gap-2"
@@ -1231,7 +1231,9 @@ export function VideoTab({ search }: VideoTabProps) {
   ]);
   const endTime = useMemo(() => search.end_time ?? Date.now() / 1000, [search]);

-  const source = `${baseUrl}vod/${search.camera}/start/${search.start_time}/end/${endTime}/index.m3u8`;
+  // subtract 2 seconds from start_time to account for keyframes and any differences in the record/detect streams
+  // to help the start of the event from not being completely cut off
+  const source = `${baseUrl}vod/${search.camera}/start/${search.start_time - 2}/end/${endTime}/index.m3u8`;

   return (
     <>
@@ -708,8 +708,8 @@ export default function ZoneEditPane({
                     {
                       unit:
                         config?.ui.unit_system == "imperial"
-                          ? t("feet", { ns: "common" })
-                          : t("meters", { ns: "common" }),
+                          ? t("unit.length.feet", { ns: "common" })
+                          : t("unit.length.meters", { ns: "common" }),
                     },
                   )}
                 </FormLabel>
@@ -735,8 +735,8 @@ export default function ZoneEditPane({
                     {
                       unit:
                         config?.ui.unit_system == "imperial"
-                          ? t("feet", { ns: "common" })
-                          : t("meters", { ns: "common" }),
+                          ? t("unit.length.feet", { ns: "common" })
+                          : t("unit.length.meters", { ns: "common" }),
                     },
                   )}
                 </FormLabel>
@@ -762,8 +762,8 @@ export default function ZoneEditPane({
                     {
                       unit:
                         config?.ui.unit_system == "imperial"
-                          ? t("feet", { ns: "common" })
-                          : t("meters", { ns: "common" }),
+                          ? t("unit.length.feet", { ns: "common" })
+                          : t("unit.length.meters", { ns: "common" }),
                     },
                   )}
                 </FormLabel>
@@ -789,8 +789,8 @@ export default function ZoneEditPane({
                     {
                       unit:
                         config?.ui.unit_system == "imperial"
-                          ? t("feet", { ns: "common" })
-                          : t("meters", { ns: "common" }),
+                          ? t("unit.length.feet", { ns: "common" })
+                          : t("unit.length.meters", { ns: "common" }),
                     },
                   )}
                 </FormLabel>
@@ -563,9 +563,12 @@ export default function DraggableGridLayout({
              const streamName = streamExists
                ? streamNameFromSettings
                : firstStreamEntry;
+             const streamType =
+               currentGroupStreamingSettings?.[camera.name]?.streamType;
              const autoLive =
-               currentGroupStreamingSettings?.[camera.name]?.streamType !==
-               "no-streaming";
+               streamType !== undefined
+                 ? streamType !== "no-streaming"
+                 : undefined;
              const showStillWithoutActivity =
                currentGroupStreamingSettings?.[camera.name]?.streamType !==
                "continuous";
@@ -486,9 +486,12 @@ export default function LiveDashboardView({
              const streamName = streamExists
                ? streamNameFromSettings
                : firstStreamEntry;
+             const streamType =
+               currentGroupStreamingSettings?.[camera.name]?.streamType;
              const autoLive =
-               currentGroupStreamingSettings?.[camera.name]?.streamType !==
-               "no-streaming";
+               streamType !== undefined
+                 ? streamType !== "no-streaming"
+                 : undefined;
              const showStillWithoutActivity =
                currentGroupStreamingSettings?.[camera.name]?.streamType !==
                "continuous";
@@ -91,7 +91,7 @@ export function RecordingView({
     "recordings/summary",
     {
       timezone: timezone,
-      cameras: allCameras ?? null,
+      cameras: allCameras.join(",") ?? null,
     },
   ]);