# mirror of https://github.com/blakeblackshear/frigate.git
# synced 2026-03-07 02:18:07 +01:00
# 506 lines, 18 KiB, Python
"""Utilities for stats."""

import asyncio
import os
import shutil
import time
from json import JSONDecodeError
from multiprocessing.managers import DictProxy
from typing import Any, Optional

import requests
from requests.exceptions import RequestException

from frigate.config import FrigateConfig
from frigate.const import CACHE_DIR, CLIPS_DIR, RECORD_DIR
from frigate.data_processing.types import DataProcessorMetrics
from frigate.object_detection.base import ObjectDetectProcess
from frigate.types import StatsTrackingTypes
from frigate.util.services import (
    calculate_shm_requirements,
    get_amd_gpu_stats,
    get_bandwidth_stats,
    get_cpu_stats,
    get_fs_type,
    get_hailo_temps,
    get_intel_gpu_stats,
    get_jetson_stats,
    get_nvidia_gpu_stats,
    get_openvino_npu_stats,
    get_rockchip_gpu_stats,
    get_rockchip_npu_stats,
    is_vaapi_amd_driver,
)
from frigate.version import VERSION

def get_latest_version(config: FrigateConfig) -> str:
    if not config.telemetry.version_check:
        return "disabled"

    try:
        request = requests.get(
            "https://api.github.com/repos/blakeblackshear/frigate/releases/latest",
            timeout=10,
        )
        response = request.json()
    except (RequestException, JSONDecodeError):
        return "unknown"

    if request.ok and response and "tag_name" in response:
        return str(response.get("tag_name").replace("v", ""))
    else:
        return "unknown"

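A side note on the tag handling in `get_latest_version` above: GitHub tags releases as `v<semver>`, and `str.replace("v", "")` strips every "v" in the string, not just a leading prefix. A minimal sketch of that behavior (`normalize_tag` is an illustrative helper, not part of this module):

```python
# Illustrative sketch of the tag normalization in get_latest_version above.
# Note that str.replace removes every "v", not only a leading prefix, so a
# hypothetical tag containing "v" elsewhere would also be altered.
def normalize_tag(tag_name: str) -> str:
    return tag_name.replace("v", "")


print(normalize_tag("v0.16.1"))  # -> 0.16.1
print(normalize_tag("v1.0-dev"))  # -> 1.0-de
```

If only a leading prefix should be stripped, `tag_name.removeprefix("v")` (Python 3.9+) would be the stricter choice.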
def stats_init(
    config: FrigateConfig,
    camera_metrics: DictProxy,
    embeddings_metrics: DataProcessorMetrics | None,
    detectors: dict[str, ObjectDetectProcess],
    processes: dict[str, int],
) -> StatsTrackingTypes:
    stats_tracking: StatsTrackingTypes = {
        "camera_metrics": camera_metrics,
        "embeddings_metrics": embeddings_metrics,
        "detectors": detectors,
        "started": int(time.time()),
        "latest_frigate_version": get_latest_version(config),
        "last_updated": int(time.time()),
        "processes": processes,
    }
    return stats_tracking

def read_temperature(path: str) -> Optional[float]:
    if os.path.isfile(path):
        with open(path) as f:
            line = f.readline().strip()
            return int(line) / 1000
    return None

def get_temperatures() -> dict[str, float]:
    temps = {}

    # Get temperatures for all attached Corals
    base = "/sys/class/apex/"
    if os.path.isdir(base):
        for apex in os.listdir(base):
            temp = read_temperature(os.path.join(base, apex, "temp"))
            if temp is not None:
                temps[apex] = temp

    # Get temperatures for Hailo devices
    temps.update(get_hailo_temps())

    return temps

def get_detector_temperature(
    detector_type: str,
    detector_index_by_type: dict[str, int],
) -> Optional[float]:
    """Get temperature for a specific detector based on its type."""
    if detector_type == "edgetpu":
        # Get temperatures for all attached Corals
        base = "/sys/class/apex/"
        if os.path.isdir(base):
            apex_devices = sorted(os.listdir(base))
            index = detector_index_by_type.get("edgetpu", 0)
            if index < len(apex_devices):
                apex_name = apex_devices[index]
                temp = read_temperature(os.path.join(base, apex_name, "temp"))
                if temp is not None:
                    return temp
    elif detector_type == "hailo8l":
        # Get temperatures for Hailo devices
        hailo_temps = get_hailo_temps()
        if hailo_temps:
            hailo_device_names = sorted(hailo_temps.keys())
            index = detector_index_by_type.get("hailo8l", 0)
            if index < len(hailo_device_names):
                device_name = hailo_device_names[index]
                return hailo_temps[device_name]
    elif detector_type == "rknn":
        # Rockchip temperatures are handled by the GPU / NPU stats,
        # as there are no detector-specific temperatures
        pass

    return None

def get_detector_stats(
    stats_tracking: StatsTrackingTypes,
) -> dict[str, dict[str, Any]]:
    """Get stats for all detectors, including temperatures based on detector type."""
    detector_stats: dict[str, dict[str, Any]] = {}
    detector_type_indices: dict[str, int] = {}

    for name, detector in stats_tracking["detectors"].items():
        pid = detector.detect_process.pid if detector.detect_process else None
        detector_type = detector.detector_config.type

        # Keep track of the index for each detector type to match temperatures correctly
        current_index = detector_type_indices.get(detector_type, 0)
        detector_type_indices[detector_type] = current_index + 1

        # type: ignore comments below track
        # https://github.com/python/typeshed/issues/8799 (mypy 0.981 onwards)
        detector_stat = {
            "inference_speed": round(detector.avg_inference_speed.value * 1000, 2),  # type: ignore[attr-defined]
            "detection_start": detector.detection_start.value,  # type: ignore[attr-defined]
            "pid": pid,
        }

        temp = get_detector_temperature(detector_type, {detector_type: current_index})

        if temp is not None:
            detector_stat["temperature"] = round(temp, 1)

        detector_stats[name] = detector_stat

    return detector_stats

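`get_detector_stats` pairs each detector with a physical device by counting an index per detector type, which `get_detector_temperature` then resolves against the sorted device list. The bookkeeping can be sketched standalone (the detector names and types below are made up for illustration):

```python
# Standalone sketch of the index-per-type pattern used above: when several
# detectors share a type (e.g. two Corals), each gets an increasing index
# within its type, which is later matched against the sorted device list.
detectors = {"coral1": "edgetpu", "coral2": "edgetpu", "ov": "openvino"}

type_indices: dict[str, int] = {}
assigned: dict[str, tuple[str, int]] = {}

for name, dtype in detectors.items():
    current = type_indices.get(dtype, 0)
    type_indices[dtype] = current + 1
    assigned[name] = (dtype, current)

print(assigned)
# -> {'coral1': ('edgetpu', 0), 'coral2': ('edgetpu', 1), 'ov': ('openvino', 0)}
```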
def get_processing_stats(
    config: FrigateConfig, stats: dict[str, str], hwaccel_errors: list[str]
) -> None:
    """Get stats for cpu / gpu."""

    async def run_tasks() -> None:
        stats_tasks = [
            asyncio.create_task(set_gpu_stats(config, stats, hwaccel_errors)),
            asyncio.create_task(set_cpu_stats(stats)),
            asyncio.create_task(set_npu_usages(config, stats)),
        ]

        if config.telemetry.stats.network_bandwidth:
            stats_tasks.append(asyncio.create_task(set_bandwidth_stats(config, stats)))

        await asyncio.wait(stats_tasks)

    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(run_tasks())
    loop.close()

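`get_processing_stats` runs its collectors concurrently on a dedicated event loop so it can be called from a plain synchronous thread. A minimal sketch of the same pattern with stand-in tasks (`fill` and `gather_stats` are illustrative names, not part of this module):

```python
import asyncio
from typing import Any


async def fill(results: dict[str, Any], key: str, value: int) -> None:
    # Stand-in for set_cpu_stats / set_gpu_stats: each task writes its
    # own slice of the shared stats dict.
    results[key] = value


def gather_stats() -> dict[str, Any]:
    results: dict[str, Any] = {}

    async def run_tasks() -> None:
        tasks = [
            asyncio.create_task(fill(results, "cpu", 1)),
            asyncio.create_task(fill(results, "gpu", 2)),
        ]
        await asyncio.wait(tasks)

    # Fresh loop per call, mirroring get_processing_stats above; callers
    # are expected to be ordinary (non-async) code.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    loop.run_until_complete(run_tasks())
    loop.close()
    return results


print(gather_stats())  # -> {'cpu': 1, 'gpu': 2}
```

`asyncio.run(run_tasks())` would be the more modern equivalent; it likewise creates and closes a fresh loop per call.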
async def set_cpu_stats(all_stats: dict[str, Any]) -> None:
    """Set cpu usage from top."""
    cpu_stats = get_cpu_stats()

    if cpu_stats:
        all_stats["cpu_usages"] = cpu_stats

async def set_bandwidth_stats(config: FrigateConfig, all_stats: dict[str, Any]) -> None:
    """Set bandwidth from nethogs."""
    bandwidth_stats = get_bandwidth_stats(config)

    if bandwidth_stats:
        all_stats["bandwidth_usages"] = bandwidth_stats

async def set_gpu_stats(
    config: FrigateConfig, all_stats: dict[str, Any], hwaccel_errors: list[str]
) -> None:
    """Parse GPUs from hwaccel args and use for stats."""
    hwaccel_args = []

    for camera in config.cameras.values():
        args = camera.ffmpeg.hwaccel_args

        if isinstance(args, list):
            args = " ".join(args)

        if args and args not in hwaccel_args:
            hwaccel_args.append(args)

        for stream_input in camera.ffmpeg.inputs:
            args = stream_input.hwaccel_args

            if isinstance(args, list):
                args = " ".join(args)

            if args and args not in hwaccel_args:
                hwaccel_args.append(args)

    stats: dict[str, dict] = {}

    for args in hwaccel_args:
        if args in hwaccel_errors:
            # known erroring args should automatically return as error
            stats["error-gpu"] = {"gpu": "", "mem": ""}
        elif "cuvid" in args or "nvidia" in args:
            # nvidia GPU
            nvidia_usage = get_nvidia_gpu_stats()

            if nvidia_usage:
                for i in range(len(nvidia_usage)):
                    stats[nvidia_usage[i]["name"]] = {
                        "gpu": str(round(float(nvidia_usage[i]["gpu"]), 2)) + "%",
                        "mem": str(round(float(nvidia_usage[i]["mem"]), 2)) + "%",
                        "enc": str(round(float(nvidia_usage[i]["enc"]), 2)) + "%",
                        "dec": str(round(float(nvidia_usage[i]["dec"]), 2)) + "%",
                        "temp": str(nvidia_usage[i]["temp"]),
                    }
            else:
                stats["nvidia-gpu"] = {"gpu": "", "mem": ""}
                hwaccel_errors.append(args)
        elif "nvmpi" in args or "jetson" in args:
            # nvidia Jetson
            jetson_usage = get_jetson_stats()

            if jetson_usage:
                stats["jetson-gpu"] = jetson_usage
            else:
                stats["jetson-gpu"] = {"gpu": "", "mem": ""}
                hwaccel_errors.append(args)
        elif "qsv" in args:
            if not config.telemetry.stats.intel_gpu_stats:
                continue

            # intel QSV GPU
            intel_usage = get_intel_gpu_stats(config.telemetry.stats.intel_gpu_device)

            if intel_usage is not None:
                stats["intel-qsv"] = intel_usage or {"gpu": "", "mem": ""}
            else:
                stats["intel-qsv"] = {"gpu": "", "mem": ""}
                hwaccel_errors.append(args)
        elif "vaapi" in args:
            if is_vaapi_amd_driver():
                if not config.telemetry.stats.amd_gpu_stats:
                    continue

                # AMD VAAPI GPU
                amd_usage = get_amd_gpu_stats()

                if amd_usage:
                    stats["amd-vaapi"] = amd_usage
                else:
                    stats["amd-vaapi"] = {"gpu": "", "mem": ""}
                    hwaccel_errors.append(args)
            else:
                if not config.telemetry.stats.intel_gpu_stats:
                    continue

                # intel VAAPI GPU
                intel_usage = get_intel_gpu_stats(
                    config.telemetry.stats.intel_gpu_device
                )

                if intel_usage is not None:
                    stats["intel-vaapi"] = intel_usage or {"gpu": "", "mem": ""}
                else:
                    stats["intel-vaapi"] = {"gpu": "", "mem": ""}
                    hwaccel_errors.append(args)
        elif "preset-rk" in args:
            rga_usage = get_rockchip_gpu_stats()

            if rga_usage:
                stats["rockchip"] = rga_usage
        elif "v4l2m2m" in args or "rpi" in args:
            # RPi v4l2m2m is currently not able to get usage stats
            stats["rpi-v4l2m2m"] = {"gpu": "", "mem": ""}

    if stats:
        all_stats["gpu_usages"] = stats

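The branch ladder in `set_gpu_stats` dispatches on substrings of the hwaccel argument string. Extracted as a pure function for illustration (the return labels are shorthand for this sketch only; `vaapi` is further split between AMD and Intel at runtime by `is_vaapi_amd_driver`):

```python
# Illustrative pure-function version of the vendor dispatch above.
def classify_hwaccel(args: str) -> str:
    if "cuvid" in args or "nvidia" in args:
        return "nvidia"
    elif "nvmpi" in args or "jetson" in args:
        return "jetson"
    elif "qsv" in args:
        return "intel-qsv"
    elif "vaapi" in args:
        # AMD vs. Intel is decided at runtime via the VAAPI driver
        return "vaapi"
    elif "preset-rk" in args:
        return "rockchip"
    elif "v4l2m2m" in args or "rpi" in args:
        return "rpi-v4l2m2m"
    return "unknown"


print(classify_hwaccel("-hwaccel cuvid -c:v h264_cuvid"))  # -> nvidia
print(classify_hwaccel("preset-vaapi"))  # -> vaapi
print(classify_hwaccel("preset-rk-h264"))  # -> rockchip
```

Branch order matters: the error short-circuit in the real function runs before any of these substring checks.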
async def set_npu_usages(config: FrigateConfig, all_stats: dict[str, Any]) -> None:
    """Set NPU usage stats for configured detectors."""
    stats: dict[str, dict] = {}

    for detector in config.detectors.values():
        if detector.type == "rknn":
            # Rockchip NPU usage
            rk_usage = get_rockchip_npu_stats()
            stats["rockchip"] = rk_usage
        elif detector.type == "openvino" and detector.device == "NPU":
            # OpenVINO NPU usage
            ov_usage = get_openvino_npu_stats()
            stats["openvino"] = ov_usage

    if stats:
        all_stats["npu_usages"] = stats

def stats_snapshot(
    config: FrigateConfig, stats_tracking: StatsTrackingTypes, hwaccel_errors: list[str]
) -> dict[str, Any]:
    """Get a snapshot of the current stats that are being tracked."""
    camera_metrics = stats_tracking["camera_metrics"]
    stats: dict[str, Any] = {}

    total_camera_fps = total_process_fps = total_skipped_fps = total_detection_fps = 0

    stats["cameras"] = {}
    for name, camera_stats in camera_metrics.items():
        total_camera_fps += camera_stats.camera_fps.value
        total_process_fps += camera_stats.process_fps.value
        total_skipped_fps += camera_stats.skipped_fps.value
        total_detection_fps += camera_stats.detection_fps.value
        pid = camera_stats.process_pid.value if camera_stats.process_pid.value else None
        # check .value (the stored pid) for consistency with the other pid fields
        ffmpeg_pid = (
            camera_stats.ffmpeg_pid.value if camera_stats.ffmpeg_pid.value else None
        )
        capture_pid = (
            camera_stats.capture_process_pid.value
            if camera_stats.capture_process_pid.value
            else None
        )

        # Calculate connection quality based on current state.
        # This is computed at stats-collection time so offline cameras
        # correctly show as unusable rather than excellent.
        expected_fps = config.cameras[name].detect.fps
        current_fps = camera_stats.camera_fps.value
        reconnects = camera_stats.reconnects_last_hour.value
        stalls = camera_stats.stalls_last_hour.value

        if current_fps < 0.1:
            quality_str = "unusable"
        elif reconnects == 0 and current_fps >= 0.9 * expected_fps and stalls < 5:
            quality_str = "excellent"
        elif reconnects <= 2 and current_fps >= 0.6 * expected_fps:
            quality_str = "fair"
        elif reconnects > 10 or current_fps < 1.0 or stalls > 100:
            quality_str = "unusable"
        else:
            quality_str = "poor"

        connection_quality = {
            "connection_quality": quality_str,
            "expected_fps": expected_fps,
            "reconnects_last_hour": reconnects,
            "stalls_last_hour": stalls,
        }

        stats["cameras"][name] = {
            "camera_fps": round(camera_stats.camera_fps.value, 2),
            "process_fps": round(camera_stats.process_fps.value, 2),
            "skipped_fps": round(camera_stats.skipped_fps.value, 2),
            "detection_fps": round(camera_stats.detection_fps.value, 2),
            "detection_enabled": config.cameras[name].detect.enabled,
            "pid": pid,
            "capture_pid": capture_pid,
            "ffmpeg_pid": ffmpeg_pid,
            "audio_rms": round(camera_stats.audio_rms.value, 4),
            "audio_dBFS": round(camera_stats.audio_dBFS.value, 4),
            **connection_quality,
        }

    stats["detectors"] = get_detector_stats(stats_tracking)
    stats["camera_fps"] = round(total_camera_fps, 2)
    stats["process_fps"] = round(total_process_fps, 2)
    stats["skipped_fps"] = round(total_skipped_fps, 2)
    stats["detection_fps"] = round(total_detection_fps, 2)

    stats["embeddings"] = {}

    # Get metrics if available
    embeddings_metrics = stats_tracking.get("embeddings_metrics")

    if embeddings_metrics:
        # Add metrics based on what's enabled
        if config.semantic_search.enabled:
            stats["embeddings"].update(
                {
                    "image_embedding_speed": round(
                        embeddings_metrics.image_embeddings_speed.value * 1000, 2
                    ),
                    "image_embedding": round(
                        embeddings_metrics.image_embeddings_eps.value, 2
                    ),
                    "text_embedding_speed": round(
                        embeddings_metrics.text_embeddings_speed.value * 1000, 2
                    ),
                    "text_embedding": round(
                        embeddings_metrics.text_embeddings_eps.value, 2
                    ),
                }
            )

        if config.face_recognition.enabled:
            stats["embeddings"]["face_recognition_speed"] = round(
                embeddings_metrics.face_rec_speed.value * 1000, 2
            )
            stats["embeddings"]["face_recognition"] = round(
                embeddings_metrics.face_rec_fps.value, 2
            )

        if config.lpr.enabled:
            stats["embeddings"]["plate_recognition_speed"] = round(
                embeddings_metrics.alpr_speed.value * 1000, 2
            )
            stats["embeddings"]["plate_recognition"] = round(
                embeddings_metrics.alpr_pps.value, 2
            )

            if embeddings_metrics.yolov9_lpr_pps.value > 0.0:
                stats["embeddings"]["yolov9_plate_detection_speed"] = round(
                    embeddings_metrics.yolov9_lpr_speed.value * 1000, 2
                )
                stats["embeddings"]["yolov9_plate_detection"] = round(
                    embeddings_metrics.yolov9_lpr_pps.value, 2
                )

        if embeddings_metrics.review_desc_speed.value > 0.0:
            stats["embeddings"]["review_description_speed"] = round(
                embeddings_metrics.review_desc_speed.value * 1000, 2
            )
            stats["embeddings"]["review_description_events_per_second"] = round(
                embeddings_metrics.review_desc_dps.value, 2
            )

        if embeddings_metrics.object_desc_speed.value > 0.0:
            stats["embeddings"]["object_description_speed"] = round(
                embeddings_metrics.object_desc_speed.value * 1000, 2
            )
            stats["embeddings"]["object_description_events_per_second"] = round(
                embeddings_metrics.object_desc_dps.value, 2
            )

        for key in embeddings_metrics.classification_speeds.keys():
            stats["embeddings"][f"{key}_classification_speed"] = round(
                embeddings_metrics.classification_speeds[key].value * 1000, 2
            )
            stats["embeddings"][f"{key}_classification_events_per_second"] = round(
                embeddings_metrics.classification_cps[key].value, 2
            )

    get_processing_stats(config, stats, hwaccel_errors)

    stats["service"] = {
        "uptime": (int(time.time()) - stats_tracking["started"]),
        "version": VERSION,
        "latest_version": stats_tracking["latest_frigate_version"],
        "storage": {},
        "last_updated": int(time.time()),
    }

    for path in [RECORD_DIR, CLIPS_DIR, CACHE_DIR]:
        try:
            storage_stats = shutil.disk_usage(path)
        except (FileNotFoundError, OSError):
            stats["service"]["storage"][path] = {}
            continue

        stats["service"]["storage"][path] = {
            "total": round(storage_stats.total / pow(2, 20), 1),
            "used": round(storage_stats.used / pow(2, 20), 1),
            "free": round(storage_stats.free / pow(2, 20), 1),
            "mount_type": get_fs_type(path),
        }

    stats["service"]["storage"]["/dev/shm"] = calculate_shm_requirements(config)

    stats["processes"] = {}
    for name, pid in stats_tracking["processes"].items():
        stats["processes"][name] = {
            "pid": pid,
        }

    return stats
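The connection-quality heuristic embedded in `stats_snapshot` reads more easily as a pure function. This is a restatement of the thresholds above for illustration, not an addition to the module's API:

```python
def connection_quality(
    current_fps: float, expected_fps: float, reconnects: int, stalls: int
) -> str:
    # Same thresholds as the inline block in stats_snapshot above.
    if current_fps < 0.1:
        return "unusable"
    elif reconnects == 0 and current_fps >= 0.9 * expected_fps and stalls < 5:
        return "excellent"
    elif reconnects <= 2 and current_fps >= 0.6 * expected_fps:
        return "fair"
    elif reconnects > 10 or current_fps < 1.0 or stalls > 100:
        return "unusable"
    else:
        return "poor"


print(connection_quality(5.0, 5.0, 0, 0))   # -> excellent
print(connection_quality(3.5, 5.0, 1, 2))   # -> fair
print(connection_quality(4.8, 5.0, 5, 0))   # -> poor
print(connection_quality(0.05, 5.0, 0, 0))  # -> unusable
```

Note the ladder is order-dependent: a camera at 0.05 fps hits the first `unusable` branch before the `reconnects == 0` check can ever rate it `excellent`.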