Merge remote-tracking branch 'origin/master' into dev

Blake Blackshear 2025-09-04 06:33:22 -05:00
commit a8b7e5dd24
32 changed files with 587 additions and 448 deletions

View File

@@ -70,9 +70,16 @@ fi
 # arch specific packages
 if [[ "${TARGETARCH}" == "amd64" ]]; then
+    # Install non-free version of i965 driver
+    sed -i -E "/^Components: main$/s/main/main contrib non-free non-free-firmware/" "/etc/apt/sources.list.d/debian.sources" \
+        && apt-get -qq update \
+        && apt-get install --no-install-recommends --no-install-suggests -y i965-va-driver-shaders \
+        && sed -i -E "/^Components: main contrib non-free non-free-firmware$/s/main contrib non-free non-free-firmware/main/" "/etc/apt/sources.list.d/debian.sources" \
+        && apt-get update
     # install amd / intel-i965 driver packages
     apt-get -qq install --no-install-recommends --no-install-suggests -y \
-        i965-va-driver intel-gpu-tools onevpl-tools \
+        intel-gpu-tools onevpl-tools \
         libva-drm2 \
         mesa-va-drivers radeontop

View File

@@ -144,7 +144,13 @@ WEB Digest Algorithm - MD5
 ### Reolink Cameras
-Reolink has older cameras (ex: 410 & 520) as well as newer camera (ex: 520a & 511wa) which support different subsets of options. In both cases using the http stream is recommended.
+Reolink has many different camera models with inconsistently supported features and behavior. The below table shows a summary of various features and recommendations.
+
+| Camera Resolution | Camera Generation | Recommended Stream Type | Additional Notes |
+| ----------------- | ----------------- | ----------------------- | ---------------- |
+| 5MP or lower | All | http-flv | Stream is h264 |
+| 6MP or higher | Latest (ex: Duo3, CX-8##) | http-flv with ffmpeg 8.0, or rtsp | This uses the new http-flv-enhanced over H265 which requires ffmpeg 8.0 |
+| 6MP or higher | Older (ex: RLC-8##) | rtsp | |
 Frigate works much better with newer reolink cameras that are setup with the below options:
 If available, recommended settings are:
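The stream recommendations in the table above can be sketched as a small helper. `recommended_stream` is hypothetical, not part of Frigate; it only encodes the table for illustration:

```python
def recommended_stream(resolution_mp: float, latest_generation: bool = True) -> str:
    """Return the recommended Reolink stream type per the docs table (sketch)."""
    if resolution_mp <= 5:
        # 5MP or lower cameras stream h264 over http-flv
        return "http-flv"
    if latest_generation:
        # newer 6MP+ cameras (ex: Duo3, CX-8##) can use http-flv-enhanced
        # over H265, which requires ffmpeg 8.0; rtsp also works
        return "http-flv (ffmpeg 8.0) or rtsp"
    # older 6MP+ cameras (ex: RLC-8##) must use rtsp
    return "rtsp"
```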
@@ -157,12 +163,6 @@ According to [this discussion](https://github.com/blakeblackshear/frigate/issues
 Cameras connected via a Reolink NVR can be connected with the http stream, use `channel[0..15]` in the stream url for the additional channels.
 The setup of main stream can be also done via RTSP, but isn't always reliable on all hardware versions. The example configuration is working with the oldest HW version RLN16-410 device with multiple types of cameras.
-:::warning
-The below configuration only works for reolink cameras with stream resolution of 5MP or lower, 8MP+ cameras need to use RTSP as http-flv is not supported in this case.
-:::
 ```yaml
 go2rtc:
   streams:
@@ -259,7 +259,7 @@ To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's
 go2rtc:
   streams:
     usb_camera:
       - "ffmpeg:device?video=0&video_size=1024x576#video=h264"
 cameras:
   usb_camera:

View File

@@ -111,10 +111,7 @@ The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.or
 | Hanwha XNP-6550RH | ✅ | ❌ | |
 | Hikvision | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
 | Hikvision DS-2DE3A404IWG-E/W | ✅ | ✅ | |
-| Reolink 511WA | ✅ | ❌ | Zoom only |
-| Reolink E1 Pro | ✅ | ❌ | |
-| Reolink E1 Zoom | ✅ | ❌ | |
-| Reolink RLC-823A 16x | ✅ | ❌ | |
+| Reolink | ✅ | ❌ | |
 | Speco O8P32X | ✅ | ❌ | |
 | Sunba 405-D20X | ✅ | ❌ | Incomplete ONVIF support reported on original, and 4k models. All models are suspected incompatable. |
 | Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |

View File

@@ -21,8 +21,7 @@ See [the hwaccel docs](/configuration/hardware_acceleration_video.md) for more i
 | preset-nvidia | Nvidia GPU | |
 | preset-jetson-h264 | Nvidia Jetson with h264 stream | |
 | preset-jetson-h265 | Nvidia Jetson with h265 stream | |
-| preset-rk-h264 | Rockchip MPP with h264 stream | Use image with \*-rk suffix and privileged mode |
-| preset-rk-h265 | Rockchip MPP with h265 stream | Use image with \*-rk suffix and privileged mode |
+| preset-rkmpp | Rockchip MPP | Use image with \*-rk suffix and privileged mode |
 ### Input Args Presets

View File

@@ -9,7 +9,6 @@ It is highly recommended to use a GPU for hardware acceleration video decoding i
 Depending on your system, these parameters may not be compatible. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro
-# Object Detection
 ## Raspberry Pi 3/4
@@ -229,7 +228,7 @@ Additional configuration is needed for the Docker container to be able to access
 services:
   frigate:
     ...
-    image: ghcr.io/blakeblackshear/frigate:stable
+    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
     deploy: # <------------- Add this section
       resources:
         reservations:
@@ -247,7 +246,7 @@ docker run -d \
   --name frigate \
   ...
   --gpus=all \
-  ghcr.io/blakeblackshear/frigate:stable
+  ghcr.io/blakeblackshear/frigate:stable-tensorrt
 ```
### Setup Decoder ### Setup Decoder

View File

@@ -499,14 +499,13 @@ Also AMD/ROCm does not "officially" support integrated GPUs. It still does work
 For the rocm frigate build there is some automatic detection:
-- gfx90c -> 9.0.0
 - gfx1031 -> 10.3.0
 - gfx1103 -> 11.0.0
-If you have something else you might need to override the `HSA_OVERRIDE_GFX_VERSION` at Docker launch. Suppose the version you want is `9.0.0`, then you should configure it from command line as:
+If you have something else you might need to override the `HSA_OVERRIDE_GFX_VERSION` at Docker launch. Suppose the version you want is `10.0.0`, then you should configure it from command line as:
 ```bash
-$ docker run -e HSA_OVERRIDE_GFX_VERSION=9.0.0 \
+$ docker run -e HSA_OVERRIDE_GFX_VERSION=10.0.0 \
 ...
 ```
@@ -517,7 +516,7 @@ services:
   frigate:
     environment:
-      HSA_OVERRIDE_GFX_VERSION: "9.0.0"
+      HSA_OVERRIDE_GFX_VERSION: "10.0.0"
 ```
 Figuring out what version you need can be complicated as you can't tell the chipset name and driver from the AMD brand name.
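The automatic detection described above is essentially a chipset-to-version lookup. A minimal sketch, using only the mapping stated in these docs (the helper name is hypothetical; `None` means the override must be set manually):

```python
from typing import Optional

# Mapping taken from the rocm build's automatic detection listed above.
GFX_TO_HSA = {
    "gfx1031": "10.3.0",
    "gfx1103": "11.0.0",
}

def hsa_override_for(chipset: str) -> Optional[str]:
    """Return the auto-detected HSA_OVERRIDE_GFX_VERSION, or None if unknown."""
    return GFX_TO_HSA.get(chipset)
```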

View File

@@ -43,7 +43,7 @@ The following ports are used by Frigate and can be mapped via docker as required
 | `8971` | Authenticated UI and API access without TLS. Reverse proxies should use this port. |
 | `5000` | Internal unauthenticated UI and API access. Access to this port should be limited. Intended to be used within the docker network for services that integrate with Frigate. |
 | `8554` | RTSP restreaming. By default, these streams are unauthenticated. Authentication can be configured in go2rtc section of config. |
-| `8555` | WebRTC connections for low latency live views. |
+| `8555` | WebRTC connections for cameras with two-way talk support. |
 #### Common Docker Compose storage configurations

View File

@@ -15,10 +15,10 @@ At a high level, there are five processing steps that could be applied to a came
 %%{init: {"themeVariables": {"edgeLabelBackground": "transparent"}}}%%
 flowchart LR
-    Feed(Feed\nacquisition) --> Decode(Video\ndecoding)
-    Decode --> Motion(Motion\ndetection)
-    Motion --> Object(Object\ndetection)
-    Feed --> Recording(Recording\nand\nvisualization)
+    Feed(Feed acquisition) --> Decode(Video decoding)
+    Decode --> Motion(Motion detection)
+    Motion --> Object(Object detection)
+    Feed --> Recording(Recording and visualization)
     Motion --> Recording
     Object --> Recording
 ```

View File

@@ -114,7 +114,7 @@ section.
 ## Next steps
 1. If the stream you added to go2rtc is also used by Frigate for the `record` or `detect` role, you can migrate your config to pull from the RTSP restream to reduce the number of connections to your camera as shown [here](/configuration/restream#reduce-connections-to-camera).
-2. You may also prefer to [setup WebRTC](/configuration/live#webrtc-extra-configuration) for slightly lower latency than MSE. Note that WebRTC only supports h264 and specific audio formats and may require opening ports on your router.
+2. You can [set up WebRTC](/configuration/live#webrtc-extra-configuration) if your camera supports two-way talk. Note that WebRTC only supports specific audio formats and may require opening ports on your router.
 ## Important considerations

View File

@@ -1759,6 +1759,10 @@ paths:
       - name: include_thumbnails
         in: query
         required: false
+        description: >
+          Deprecated. Thumbnail data is no longer included in the response.
+          Use the /api/events/:event_id/thumbnail.:extension endpoint instead.
+        deprecated: true
         schema:
           anyOf:
             - type: integer
@@ -1973,6 +1977,10 @@ paths:
       - name: include_thumbnails
         in: query
         required: false
+        description: >
+          Deprecated. Thumbnail data is no longer included in the response.
+          Use the /api/events/:event_id/thumbnail.:extension endpoint instead.
+        deprecated: true
       schema:
         anyOf:
           - type: integer

View File

@@ -218,7 +218,7 @@ async def register_face(request: Request, name: str, file: UploadFile):
         )
     context: EmbeddingsContext = request.app.embeddings
-    result = context.register_face(name, await file.read())
+    result = None if context is None else context.register_face(name, await file.read())
     if not isinstance(result, dict):
         return JSONResponse(
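The guard added above returns `None` when the embeddings context has not been initialized instead of raising an `AttributeError`. A minimal self-contained sketch of that pattern (`FakeContext` is a hypothetical stand-in for `EmbeddingsContext`):

```python
class FakeContext:
    """Hypothetical stand-in for EmbeddingsContext."""

    def register_face(self, name: str, data: bytes) -> dict:
        return {"success": True, "name": name}

def safe_register(context, name: str, data: bytes):
    # mirrors the change above: short-circuit to None when context is missing,
    # letting the caller's `isinstance(result, dict)` check produce a clean error
    return None if context is None else context.register_face(name, data)
```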

View File

@@ -1,6 +1,6 @@
 from typing import Optional
-from pydantic import BaseModel
+from pydantic import BaseModel, Field
 DEFAULT_TIME_RANGE = "00:00,24:00"
@@ -21,7 +21,14 @@ class EventsQueryParams(BaseModel):
     has_clip: Optional[int] = None
     has_snapshot: Optional[int] = None
     in_progress: Optional[int] = None
-    include_thumbnails: Optional[int] = 1
+    include_thumbnails: Optional[int] = Field(
+        1,
+        description=(
+            "Deprecated. Thumbnail data is no longer included in the response. "
+            "Use the /api/events/:event_id/thumbnail.:extension endpoint instead."
+        ),
+        deprecated=True,
+    )
     favorites: Optional[int] = None
     min_score: Optional[float] = None
     max_score: Optional[float] = None
@@ -40,7 +47,14 @@ class EventsSearchQueryParams(BaseModel):
     query: Optional[str] = None
     event_id: Optional[str] = None
     search_type: Optional[str] = "thumbnail"
-    include_thumbnails: Optional[int] = 1
+    include_thumbnails: Optional[int] = Field(
+        1,
+        description=(
+            "Deprecated. Thumbnail data is no longer included in the response. "
+            "Use the /api/events/:event_id/thumbnail.:extension endpoint instead."
+        ),
+        deprecated=True,
+    )
     limit: Optional[int] = 50
     cameras: Optional[str] = "all"
     labels: Optional[str] = "all"
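Above, pydantic's `Field(deprecated=True)` marks the parameter in the generated OpenAPI spec without changing runtime behavior. The same accept-but-flag pattern can be sketched with only the standard library (this helper is hypothetical, not Frigate code):

```python
import warnings
from typing import Optional

def parse_include_thumbnails(value: Optional[int] = 1) -> int:
    """Accept the deprecated query param but emit a DeprecationWarning."""
    if value is not None:
        warnings.warn(
            "include_thumbnails is deprecated; use "
            "/api/events/:event_id/thumbnail.:extension instead",
            DeprecationWarning,
            stacklevel=2,
        )
    # fall back to the historical default of 1
    return value if value is not None else 1
```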

View File

@@ -11,6 +11,11 @@ class Extension(str, Enum):
     jpg = "jpg"
     jpeg = "jpeg"
+    def get_mime_type(self) -> str:
+        if self in (Extension.jpg, Extension.jpeg):
+            return "image/jpeg"
+        return f"image/{self.value}"
+
 class MediaLatestFrameQueryParams(BaseModel):
     bbox: Optional[int] = None
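The `get_mime_type` helper added above exists because `jpg`/`jpeg` extensions share the single canonical `image/jpeg` MIME type, while `png` and `webp` map directly. A self-contained sketch of the same enum (members taken from the diff context; the full class may have more):

```python
from enum import Enum

class Extension(str, Enum):
    webp = "webp"
    png = "png"
    jpg = "jpg"
    jpeg = "jpeg"

    def get_mime_type(self) -> str:
        # both jpg and jpeg serialize to the canonical image/jpeg type
        if self in (Extension.jpg, Extension.jpeg):
            return "image/jpeg"
        return f"image/{self.value}"
```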

View File

@@ -145,15 +145,13 @@ def latest_frame(
         "regions": params.regions,
     }
     quality = params.quality
-    mime_type = extension
-    if extension == "png":
+    if extension == Extension.png:
         quality_params = None
-    elif extension == "webp":
+    elif extension == Extension.webp:
         quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), quality]
-    else:
+    else:  # jpg or jpeg
         quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), quality]
-        mime_type = "jpeg"
     if camera_name in request.app.frigate_config.cameras:
         frame = frame_processor.get_current_frame(camera_name, draw_options)
@@ -196,18 +194,21 @@ def latest_frame(
             frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
-        _, img = cv2.imencode(f".{extension}", frame, quality_params)
+        _, img = cv2.imencode(f".{extension.value}", frame, quality_params)
         return Response(
             content=img.tobytes(),
-            media_type=f"image/{mime_type}",
+            media_type=extension.get_mime_type(),
             headers={
-                "Content-Type": f"image/{mime_type}",
                 "Cache-Control": "no-store"
                 if not params.store
                 else "private, max-age=60",
             },
         )
-    elif camera_name == "birdseye" and request.app.frigate_config.birdseye.restream:
+    elif (
+        camera_name == "birdseye"
+        and request.app.frigate_config.birdseye.enabled
+        and request.app.frigate_config.birdseye.restream
+    ):
         frame = cv2.cvtColor(
             frame_processor.get_current_frame(camera_name),
             cv2.COLOR_YUV2BGR_I420,
@@ -218,12 +219,11 @@ def latest_frame(
             frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
-        _, img = cv2.imencode(f".{extension}", frame, quality_params)
+        _, img = cv2.imencode(f".{extension.value}", frame, quality_params)
         return Response(
             content=img.tobytes(),
-            media_type=f"image/{mime_type}",
+            media_type=extension.get_mime_type(),
             headers={
-                "Content-Type": f"image/{mime_type}",
                 "Cache-Control": "no-store"
                 if not params.store
                 else "private, max-age=60",
@@ -812,7 +812,10 @@ def vod_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: st
     "/vod/event/{event_id}",
     description="Returns an HLS playlist for the specified object. Append /master.m3u8 or /index.m3u8 for HLS playback.",
 )
-def vod_event(event_id: str):
+def vod_event(
+    event_id: str,
+    padding: int = Query(0, description="Padding to apply to the vod."),
+):
     try:
         event: Event = Event.get(Event.id == event_id)
     except DoesNotExist:
@@ -835,32 +838,23 @@ def vod_event(event_id: str):
             status_code=404,
         )
-    clip_path = os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.mp4")
-
-    if not os.path.isfile(clip_path):
-        end_ts = (
-            datetime.now().timestamp() if event.end_time is None else event.end_time
-        )
-        vod_response = vod_ts(event.camera, event.start_time, end_ts)
-        # If the recordings are not found and the event started more than 5 minutes ago, set has_clip to false
-        if (
-            event.start_time < datetime.now().timestamp() - 300
-            and type(vod_response) is tuple
-            and len(vod_response) == 2
-            and vod_response[1] == 404
-        ):
-            Event.update(has_clip=False).where(Event.id == event_id).execute()
-        return vod_response
-
-    duration = int((event.end_time - event.start_time) * 1000)
-    return JSONResponse(
-        content={
-            "cache": True,
-            "discontinuity": False,
-            "durations": [duration],
-            "sequences": [{"clips": [{"type": "source", "path": clip_path}]}],
-        }
-    )
+    end_ts = (
+        datetime.now().timestamp()
+        if event.end_time is None
+        else (event.end_time + padding)
+    )
+    vod_response = vod_ts(event.camera, event.start_time - padding, end_ts)
+
+    # If the recordings are not found and the event started more than 5 minutes ago, set has_clip to false
+    if (
+        event.start_time < datetime.now().timestamp() - 300
+        and type(vod_response) is tuple
+        and len(vod_response) == 2
+        and vod_response[1] == 404
+    ):
+        Event.update(has_clip=False).where(Event.id == event_id).execute()
+
+    return vod_response
 @router.get(
@@ -941,7 +935,7 @@ def event_snapshot(
 def event_thumbnail(
     request: Request,
     event_id: str,
-    extension: str,
+    extension: Extension,
     max_cache_age: int = Query(
         2592000, description="Max cache age in seconds. Default 30 days in seconds."
     ),
@@ -966,7 +960,7 @@ def event_thumbnail(
     if event_id in camera_state.tracked_objects:
         tracked_obj = camera_state.tracked_objects.get(event_id)
         if tracked_obj is not None:
-            thumbnail_bytes = tracked_obj.get_thumbnail(extension)
+            thumbnail_bytes = tracked_obj.get_thumbnail(extension.value)
     except Exception:
         return JSONResponse(
             content={"success": False, "message": "Event not found"},
@@ -994,23 +988,21 @@ def event_thumbnail(
     )
     quality_params = None
-
-    if extension == "jpg" or extension == "jpeg":
+    if extension in (Extension.jpg, Extension.jpeg):
         quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), 70]
-    elif extension == "webp":
+    elif extension == Extension.webp:
         quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), 60]
-    _, img = cv2.imencode(f".{extension}", thumbnail, quality_params)
+    _, img = cv2.imencode(f".{extension.value}", thumbnail, quality_params)
     thumbnail_bytes = img.tobytes()
     return Response(
         thumbnail_bytes,
-        media_type=f"image/{extension}",
+        media_type=extension.get_mime_type(),
         headers={
             "Cache-Control": f"private, max-age={max_cache_age}"
             if event_complete
             else "no-store",
-            "Content-Type": f"image/{extension}",
         },
     )
@@ -1221,7 +1213,11 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
 @router.get("/events/{event_id}/clip.mp4")
-def event_clip(request: Request, event_id: str):
+def event_clip(
+    request: Request,
+    event_id: str,
+    padding: int = Query(0, description="Padding to apply to clip."),
+):
     try:
         event: Event = Event.get(Event.id == event_id)
     except DoesNotExist:
@@ -1234,8 +1230,12 @@ def event_clip(request: Request, event_id: str):
         content={"success": False, "message": "Clip not available"}, status_code=404
     )
-    end_ts = datetime.now().timestamp() if event.end_time is None else event.end_time
-    return recording_clip(request, event.camera, event.start_time, end_ts)
+    end_ts = (
+        datetime.now().timestamp()
+        if event.end_time is None
+        else event.end_time + padding
+    )
+    return recording_clip(request, event.camera, event.start_time - padding, end_ts)
 @router.get("/events/{event_id}/preview.gif")
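Both `vod_event` and `event_clip` above apply the new `padding` query parameter the same way: subtract it from the start, add it to the end, and fall back to "now" for in-progress events. A minimal sketch of that window arithmetic (the helper name is hypothetical):

```python
from datetime import datetime
from typing import Optional, Tuple

def padded_window(
    start: float, end: Optional[float], padding: int = 0
) -> Tuple[float, float]:
    """Widen a clip window by `padding` seconds on both sides.

    Events still in progress (end is None) end at the current time,
    so no end padding is applied to them.
    """
    end_ts = datetime.now().timestamp() if end is None else end + padding
    return (start - padding, end_ts)
```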

View File

@@ -61,6 +61,7 @@ class FfmpegConfig(FrigateBaseModel):
     retry_interval: float = Field(
         default=10.0,
         title="Time in seconds to wait before FFmpeg retries connecting to the camera.",
+        gt=0.0,
     )
     apple_compatibility: bool = Field(
         default=False,
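The `gt=0.0` constraint added above makes pydantic reject zero or negative retry intervals at config-load time. The equivalent check, sketched with plain Python for illustration (pydantic performs this automatically):

```python
def validate_retry_interval(value: float = 10.0) -> float:
    """Mimic pydantic's Field(gt=0.0): the interval must be strictly positive."""
    if not value > 0.0:
        raise ValueError("retry_interval must be greater than 0")
    return value
```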

View File

@@ -158,6 +158,9 @@ class ModelConfig(BaseModel):
         self.input_pixel_format = model_info["pixelFormat"]
         self.model_type = model_info["type"]
+        if model_info.get("inputDataType"):
+            self.input_dtype = model_info["inputDataType"]
+
         # generate list of attribute labels
         self.attributes_map = {
             **model_info.get("attributes", DEFAULT_ATTRIBUTE_LABEL_MAP),
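The change above only overrides `input_dtype` when the model metadata actually carries an `inputDataType` key, leaving the configured default untouched otherwise. A self-contained sketch of that optional-key merge (dict-based; the real code sets attributes on `ModelConfig`):

```python
def apply_model_info(config: dict, model_info: dict) -> dict:
    """Override input_dtype only when the metadata provides inputDataType."""
    merged = dict(config)
    if model_info.get("inputDataType"):
        merged["input_dtype"] = model_info["inputDataType"]
    return merged
```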

View File

@@ -182,10 +182,15 @@ Rules:
         event: Event,
     ) -> Optional[str]:
         """Generate a description for the frame."""
-        prompt = camera_config.objects.genai.object_prompts.get(
-            event.label,
-            camera_config.objects.genai.prompt,
-        ).format(**model_to_dict(event))
+        try:
+            prompt = camera_config.genai.object_prompts.get(
+                event.label,
+                camera_config.genai.prompt,
+            ).format(**model_to_dict(event))
+        except KeyError as e:
+            logger.error(f"Invalid key in GenAI prompt: {e}")
+            return None
         logger.debug(f"Sending images to genai provider with prompt: {prompt}")
         return self._send(prompt, thumbnails)
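The try/except added above matters because `str.format(**mapping)` raises `KeyError` when a user-supplied prompt template references a placeholder the event dict does not provide. A minimal sketch of that failure mode and the guard (`render_prompt` is hypothetical):

```python
from typing import Optional

def render_prompt(template: str, event: dict) -> Optional[str]:
    """Fill a prompt template from event fields; None on a bad placeholder."""
    try:
        return template.format(**event)
    except KeyError:
        # the template referenced a key missing from the event dict
        return None
```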

View File

@@ -372,12 +372,13 @@ class PtzAutoTracker:
         logger.info(f"Camera calibration for {camera} in progress")
         # zoom levels test
+        self.zoom_time[camera] = 0
+
         if (
             self.config.cameras[camera].onvif.autotracking.zooming
             != ZoomingModeEnum.disabled
         ):
             logger.info(f"Calibration for {camera} in progress: 0% complete")
-            self.zoom_time[camera] = 0
         for i in range(2):
             # absolute move to 0 - fully zoomed out
@@ -1332,7 +1333,11 @@ class PtzAutoTracker:
         if camera_config.onvif.autotracking.enabled:
             if not self.autotracker_init[camera]:
-                self._autotracker_setup(camera_config, camera)
+                future = asyncio.run_coroutine_threadsafe(
+                    self._autotracker_setup(camera_config, camera), self.onvif.loop
+                )
+                # Wait for the coroutine to complete
+                future.result()
         if self.calibrating[camera]:
             logger.debug(f"{camera}: Calibrating camera")
@@ -1479,7 +1484,8 @@ class PtzAutoTracker:
         self.tracked_object[camera] = None
         self.tracked_object_history[camera].clear()
-        self.ptz_metrics[camera].motor_stopped.wait()
+        while not self.ptz_metrics[camera].motor_stopped.is_set():
+            await self.onvif.get_camera_status(camera)
         logger.debug(
             f"{camera}: Time is {self.ptz_metrics[camera].frame_time.value}, returning to preset: {autotracker_config.return_preset}"
         )
@@ -1489,7 +1495,7 @@ class PtzAutoTracker:
         )
         # update stored zoom level from preset
-        if not self.ptz_metrics[camera].motor_stopped.is_set():
+        while not self.ptz_metrics[camera].motor_stopped.is_set():
             await self.onvif.get_camera_status(camera)
         self.ptz_metrics[camera].tracking_active.clear()

View File

@@ -50,6 +50,8 @@ class OnvifController:
         self.config = config
         self.ptz_metrics = ptz_metrics
+        self.status_locks: dict[str, asyncio.Lock] = {}
+
         # Create a dedicated event loop and run it in a separate thread
         self.loop = asyncio.new_event_loop()
         self.loop_thread = threading.Thread(target=self._run_event_loop, daemon=True)
@@ -61,6 +63,7 @@ class OnvifController:
                 continue
             if cam.onvif.host:
                 self.camera_configs[cam_name] = cam
+                self.status_locks[cam_name] = asyncio.Lock()
     asyncio.run_coroutine_threadsafe(self._init_cameras(), self.loop)
@@ -827,105 +830,110 @@ class OnvifController:
         return False
     async def get_camera_status(self, camera_name: str) -> None:
-        if camera_name not in self.cams.keys():
-            logger.error(f"ONVIF is not configured for {camera_name}")
-            return
-
-        if not self.cams[camera_name]["init"]:
-            if not await self._init_onvif(camera_name):
-                return
-
-        status_request = self.cams[camera_name]["status_request"]
-        try:
-            status = await self.cams[camera_name]["ptz"].GetStatus(status_request)
-        except Exception:
-            pass  # We're unsupported, that'll be reported in the next check.
-
-        try:
-            pan_tilt_status = getattr(status.MoveStatus, "PanTilt", None)
-            zoom_status = getattr(status.MoveStatus, "Zoom", None)
-
-            # if it's not an attribute, see if MoveStatus even exists in the status result
-            if pan_tilt_status is None:
-                pan_tilt_status = getattr(status, "MoveStatus", None)
-
-                # we're unsupported
-                if pan_tilt_status is None or pan_tilt_status not in [
-                    "IDLE",
-                    "MOVING",
-                ]:
-                    raise Exception
-        except Exception:
-            logger.warning(
-                f"Camera {camera_name} does not support the ONVIF GetStatus method. Autotracking will not function correctly and must be disabled in your config."
-            )
-            return
-
-        logger.debug(
-            f"{camera_name}: Pan/tilt status: {pan_tilt_status}, Zoom status: {zoom_status}"
-        )
-
-        if pan_tilt_status == "IDLE" and (zoom_status is None or zoom_status == "IDLE"):
-            self.cams[camera_name]["active"] = False
-            if not self.ptz_metrics[camera_name].motor_stopped.is_set():
-                self.ptz_metrics[camera_name].motor_stopped.set()
-
-                logger.debug(
-                    f"{camera_name}: PTZ stop time: {self.ptz_metrics[camera_name].frame_time.value}"
-                )
-
-                self.ptz_metrics[camera_name].stop_time.value = self.ptz_metrics[
-                    camera_name
-                ].frame_time.value
-        else:
-            self.cams[camera_name]["active"] = True
-            if self.ptz_metrics[camera_name].motor_stopped.is_set():
-                self.ptz_metrics[camera_name].motor_stopped.clear()
-
-                logger.debug(
-                    f"{camera_name}: PTZ start time: {self.ptz_metrics[camera_name].frame_time.value}"
-                )
-
-                self.ptz_metrics[camera_name].start_time.value = self.ptz_metrics[
-                    camera_name
-                ].frame_time.value
-                self.ptz_metrics[camera_name].stop_time.value = 0
-
-        if (
-            self.config.cameras[camera_name].onvif.autotracking.zooming
-            != ZoomingModeEnum.disabled
-        ):
-            # store absolute zoom level as 0 to 1 interpolated from the values of the camera
-            self.ptz_metrics[camera_name].zoom_level.value = numpy.interp(
-                round(status.Position.Zoom.x, 2),
-                [
-                    self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Min"],
-                    self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Max"],
-                ],
-                [0, 1],
-            )
-            logger.debug(
-                f"{camera_name}: Camera zoom level: {self.ptz_metrics[camera_name].zoom_level.value}"
-            )
-
-        # some hikvision cams won't update MoveStatus, so warn if it hasn't changed
-        if (
-            not self.ptz_metrics[camera_name].motor_stopped.is_set()
-            and not self.ptz_metrics[camera_name].reset.is_set()
-            and self.ptz_metrics[camera_name].start_time.value != 0
-            and self.ptz_metrics[camera_name].frame_time.value
-            > (self.ptz_metrics[camera_name].start_time.value + 10)
-            and self.ptz_metrics[camera_name].stop_time.value == 0
-        ):
-            logger.debug(
-                f"Start time: {self.ptz_metrics[camera_name].start_time.value}, Stop time: {self.ptz_metrics[camera_name].stop_time.value}, Frame time: {self.ptz_metrics[camera_name].frame_time.value}"
-            )
-            # set the stop time so we don't come back into this again and spam the logs
-            self.ptz_metrics[camera_name].stop_time.value = self.ptz_metrics[
-                camera_name
-            ].frame_time.value
-            logger.warning(f"Camera {camera_name} is still in ONVIF 'MOVING' status.")
+        async with self.status_locks[camera_name]:
+            if camera_name not in self.cams.keys():
+                logger.error(f"ONVIF is not configured for {camera_name}")
+                return
+
+            if not self.cams[camera_name]["init"]:
+                if not await self._init_onvif(camera_name):
+                    return
+
+            status_request = self.cams[camera_name]["status_request"]
+            try:
+                status = await self.cams[camera_name]["ptz"].GetStatus(status_request)
+            except Exception:
+                pass  # We're unsupported, that'll be reported in the next check.
+
+            try:
+                pan_tilt_status = getattr(status.MoveStatus, "PanTilt", None)
+                zoom_status = getattr(status.MoveStatus, "Zoom", None)
+
+                # if it's not an attribute, see if MoveStatus even exists in the status result
+                if pan_tilt_status is None:
+                    pan_tilt_status = getattr(status, "MoveStatus", None)
+
+                    # we're unsupported
+                    if pan_tilt_status is None or pan_tilt_status not in [
+                        "IDLE",
+                        "MOVING",
+                    ]:
+                        raise Exception
+            except Exception:
+                logger.warning(
+                    f"Camera {camera_name} does not support the ONVIF GetStatus method. Autotracking will not function correctly and must be disabled in your config."
+                )
+                return
+
+            logger.debug(
+                f"{camera_name}: Pan/tilt status: {pan_tilt_status}, Zoom status: {zoom_status}"
+            )
+
+            if pan_tilt_status == "IDLE" and (
+                zoom_status is None or zoom_status == "IDLE"
+            ):
+                self.cams[camera_name]["active"] = False
+                if not self.ptz_metrics[camera_name].motor_stopped.is_set():
+                    self.ptz_metrics[camera_name].motor_stopped.set()
+
+                    logger.debug(
+                        f"{camera_name}: PTZ stop time: {self.ptz_metrics[camera_name].frame_time.value}"
+                    )
+
+                    self.ptz_metrics[camera_name].stop_time.value = self.ptz_metrics[
+                        camera_name
+                    ].frame_time.value
+            else:
+                self.cams[camera_name]["active"] = True
+                if self.ptz_metrics[camera_name].motor_stopped.is_set():
+                    self.ptz_metrics[camera_name].motor_stopped.clear()
+
+                    logger.debug(
+                        f"{camera_name}: PTZ start time: {self.ptz_metrics[camera_name].frame_time.value}"
+                    )
+
+                    self.ptz_metrics[camera_name].start_time.value = self.ptz_metrics[
+                        camera_name
+                    ].frame_time.value
+                    self.ptz_metrics[camera_name].stop_time.value = 0
+
+            if (
+                self.config.cameras[camera_name].onvif.autotracking.zooming
+                != ZoomingModeEnum.disabled
+            ):
+                # store absolute zoom level as 0 to 1 interpolated from the values of the camera
+                self.ptz_metrics[camera_name].zoom_level.value = numpy.interp(
+                    round(status.Position.Zoom.x, 2),
+                    [
+                        self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Min"],
+                        self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Max"],
+                    ],
+                    [0, 1],
+                )
+                logger.debug(
+                    f"{camera_name}: Camera zoom level: {self.ptz_metrics[camera_name].zoom_level.value}"
+                )
+
+            # some hikvision cams won't update MoveStatus, so warn if it hasn't changed
+            if (
+                not self.ptz_metrics[camera_name].motor_stopped.is_set()
+                and not self.ptz_metrics[camera_name].reset.is_set()
+                and self.ptz_metrics[camera_name].start_time.value != 0
+                and self.ptz_metrics[camera_name].frame_time.value
+                > (self.ptz_metrics[camera_name].start_time.value + 10)
+                and self.ptz_metrics[camera_name].stop_time.value == 0
+            ):
+                logger.debug(
+                    f"Start time: {self.ptz_metrics[camera_name].start_time.value}, Stop time: {self.ptz_metrics[camera_name].stop_time.value}, Frame time: {self.ptz_metrics[camera_name].frame_time.value}"
+                )
+                # set the stop time so we don't come back into this again and spam the logs
+                self.ptz_metrics[camera_name].stop_time.value = self.ptz_metrics[
+                    camera_name
+                ].frame_time.value
+                logger.warning(
+                    f"Camera {camera_name} is still in ONVIF 'MOVING' status."
+                )
     def close(self) -> None:
         """Gracefully shut down the ONVIF controller."""
if not hasattr(self, "loop") or self.loop.is_closed(): if not hasattr(self, "loop") or self.loop.is_closed():
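The zoom handling above maps the camera's absolute zoom value onto a 0 to 1 range with `numpy.interp`. A minimal TypeScript sketch of that normalization; `normalizeZoom` is a hypothetical name for illustration, the real code is the Python above:

```typescript
// Map an absolute zoom value onto [0, 1] given the camera-reported range,
// clamping at the ends the way numpy.interp does for out-of-range inputs.
function normalizeZoom(x: number, min: number, max: number): number {
  if (x <= min) return 0;
  if (x >= max) return 1;
  return (x - min) / (max - min);
}

console.log(normalizeZoom(1, 0, 4)); // 0.25
console.log(normalizeZoom(9, 0, 4)); // 1 (clamped)
```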


@@ -346,7 +346,9 @@ export default function GeneralSettings({ className }: GeneralSettingsProps) {
              <Portal>
                <SubItemContent
                  className={
                    isDesktop
                      ? ""
                      : "scrollbar-container max-h-[75dvh] w-[92%] overflow-y-scroll rounded-lg md:rounded-2xl"
                  }
                >
                  <span tabIndex={0} className="sr-only" />


@@ -433,137 +433,139 @@ function CustomTimeSelector({
      className={`mt-3 flex items-center rounded-lg bg-secondary text-secondary-foreground ${isDesktop ? "mx-8 gap-2 px-2" : "pl-2"}`}
    >
      <FaCalendarAlt />
      <div className="flex flex-wrap items-center">
        <Popover
          open={startOpen}
          onOpenChange={(open) => {
            if (!open) {
              setStartOpen(false);
            }
          }}
        >
          <PopoverTrigger asChild>
            <Button
              className={`text-primary ${isDesktop ? "" : "text-xs"}`}
              aria-label={t("export.time.start.title")}
              variant={startOpen ? "select" : "default"}
              size="sm"
              onClick={() => {
                setStartOpen(true);
                setEndOpen(false);
              }}
            >
              {formattedStart}
            </Button>
          </PopoverTrigger>
          <PopoverContent className="flex flex-col items-center">
            <TimezoneAwareCalendar
              timezone={config?.ui.timezone}
              selectedDay={new Date(startTime * 1000)}
              onSelect={(day) => {
                if (!day) {
                  return;
                }
                setRange({
                  before: endTime,
                  after: day.getTime() / 1000 + 1,
                });
              }}
            />
            <SelectSeparator className="bg-secondary" />
            <input
              className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
              id="startTime"
              type="time"
              value={startClock}
              step={isIOS ? "60" : "1"}
              onChange={(e) => {
                const clock = e.target.value;
                const [hour, minute, second] = isIOS
                  ? [...clock.split(":"), "00"]
                  : clock.split(":");
                const start = new Date(startTime * 1000);
                start.setHours(
                  parseInt(hour),
                  parseInt(minute),
                  parseInt(second ?? 0),
                  0,
                );
                setRange({
                  before: endTime,
                  after: start.getTime() / 1000,
                });
              }}
            />
          </PopoverContent>
        </Popover>
        <FaArrowRight className="size-4 text-primary" />
        <Popover
          open={endOpen}
          onOpenChange={(open) => {
            if (!open) {
              setEndOpen(false);
            }
          }}
        >
          <PopoverTrigger asChild>
            <Button
              className={`text-primary ${isDesktop ? "" : "text-xs"}`}
              aria-label={t("export.time.end.title")}
              variant={endOpen ? "select" : "default"}
              size="sm"
              onClick={() => {
                setEndOpen(true);
                setStartOpen(false);
              }}
            >
              {formattedEnd}
            </Button>
          </PopoverTrigger>
          <PopoverContent className="flex flex-col items-center">
            <TimezoneAwareCalendar
              timezone={config?.ui.timezone}
              selectedDay={new Date(endTime * 1000)}
              onSelect={(day) => {
                if (!day) {
                  return;
                }
                setRange({
                  after: startTime,
                  before: day.getTime() / 1000,
                });
              }}
            />
            <SelectSeparator className="bg-secondary" />
            <input
              className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
              id="startTime"
              type="time"
              value={endClock}
              step={isIOS ? "60" : "1"}
              onChange={(e) => {
                const clock = e.target.value;
                const [hour, minute, second] = isIOS
                  ? [...clock.split(":"), "00"]
                  : clock.split(":");
                const end = new Date(endTime * 1000);
                end.setHours(
                  parseInt(hour),
                  parseInt(minute),
                  parseInt(second ?? 0),
                  0,
                );
                setRange({
                  before: end.getTime() / 1000,
                  after: startTime,
                });
              }}
            />
          </PopoverContent>
        </Popover>
      </div>
    </div>
  );
}


@@ -1,4 +1,10 @@
import React, {
  useState,
  useRef,
  useEffect,
  useCallback,
  useMemo,
} from "react";
import { useVideoDimensions } from "@/hooks/use-video-dimensions";
import HlsVideoPlayer from "./HlsVideoPlayer";
import ActivityIndicator from "../indicators/activity-indicator";
@@ -89,6 +95,12 @@ export function GenericVideoPlayer({
    },
  );

  const hlsSource = useMemo(() => {
    return {
      playlist: source,
    };
  }, [source]);

  return (
    <div ref={containerRef} className="relative flex h-full w-full flex-col">
      <div className="relative flex flex-grow items-center justify-center">
@@ -107,9 +119,7 @@ export function GenericVideoPlayer({
          >
            <HlsVideoPlayer
              videoRef={videoRef}
              currentSource={hlsSource}
              hotKeys
              visible
              frigateControls={false}


@@ -123,13 +123,6 @@ export default function HlsVideoPlayer({
      return;
    }

    hlsRef.current = new Hls({
      maxBufferLength: 10,
      maxBufferSize: 20 * 1000 * 1000,
@@ -138,6 +131,15 @@ export default function HlsVideoPlayer({
    hlsRef.current.attachMedia(videoRef.current);
    hlsRef.current.loadSource(currentSource.playlist);
    videoRef.current.playbackRate = currentPlaybackRate;

    return () => {
      // we must destroy the hlsRef every time the source changes
      // so that we can create a new HLS instance with startPosition
      // set at the optimal point in time
      if (hlsRef.current) {
        hlsRef.current.destroy();
      }
    };
  }, [videoRef, hlsRef, useHlsCompat, currentSource]);

  // state handling
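The hunk above moves the `destroy()` call into the effect's cleanup function, the idiomatic React pattern for tearing down the previous instance whenever a dependency changes. A framework-free sketch of that ordering; the `Player`/`setSource` names are illustrative, not the component's API:

```typescript
// Model of "destroy the old player before creating one for the new source":
// returning a cleanup function from useEffect guarantees exactly this order.
interface Player {
  source: string;
  destroyed: boolean;
}

let current: Player | null = null;

function setSource(source: string): Player {
  if (current) {
    current.destroyed = true; // the cleanup from the previous effect run
  }
  current = { source, destroyed: false };
  return current;
}

const first = setSource("recording1.m3u8");
const second = setSource("recording2.m3u8");
console.log(first.destroyed, second.destroyed); // true false
```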


@@ -164,7 +164,7 @@ export default function JSMpegPlayer({
      statsIntervalRef.current = setInterval(() => {
        const currentTimestamp = Date.now();
        const timeDiff = (currentTimestamp - lastTimestampRef.current) / 1000; // in seconds
        const bitrate = bytesReceivedRef.current / timeDiff / 1000; // in kBps

        setStats?.({
          streamType: "jsmpeg",
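The players' unit change above, from kilobits (kbps) to kilobytes (kBps) per second, is a matter of dropping the factor of 8. A small sketch of both computations; `computeKBps` and `computeKbps` are hypothetical helper names for illustration:

```typescript
// bytes / seconds gives B/s; dividing by 1000 yields kilobytes per second.
function computeKBps(bytesReceived: number, seconds: number): number {
  if (seconds <= 0) return 0; // avoid division by zero on the first sample
  return bytesReceived / seconds / 1000;
}

// The old code multiplied bytes by 8 first, producing kilobits per second.
function computeKbps(bytesReceived: number, seconds: number): number {
  if (seconds <= 0) return 0;
  return (bytesReceived * 8) / seconds / 1000;
}

// 500 kB over 2 s is 250 kBps, which is the same rate as 2000 kbps.
console.log(computeKBps(500_000, 2)); // 250
console.log(computeKbps(500_000, 2)); // 2000
```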


@@ -82,7 +82,7 @@ export default function LivePlayer({
  const [stats, setStats] = useState<PlayerStatsType>({
    streamType: "-",
    bandwidth: 0, // in kBps
    latency: undefined, // in seconds
    totalFrames: 0,
    droppedFrames: undefined,


@@ -338,7 +338,7 @@ function MSEPlayer({
        // console.debug("VideoRTC.buffer", b.byteLength, bufLen);
      } else {
        try {
          sb?.appendBuffer(data as ArrayBuffer);
        } catch (e) {
          // no-op
        }
@@ -592,7 +592,7 @@ function MSEPlayer({
      const now = Date.now();
      const bytesLoaded = totalBytesLoaded.current;
      const timeElapsed = (now - lastTimestamp) / 1000; // seconds
      const bandwidth = (bytesLoaded - lastLoadedBytes) / timeElapsed / 1000; // kBps

      lastLoadedBytes = bytesLoaded;
      lastTimestamp = now;


@@ -17,7 +17,7 @@ export function PlayerStats({ stats, minimal }: PlayerStatsProps) {
        </p>
        <p>
          <span className="text-white/70">{t("stats.bandwidth.title")}</span>{" "}
          <span className="text-white">{stats.bandwidth.toFixed(2)} kBps</span>
        </p>
        {stats.latency != undefined && (
          <p>
@@ -66,7 +66,7 @@ export function PlayerStats({ stats, minimal }: PlayerStatsProps) {
      </div>
      <div className="flex flex-col items-center gap-1">
        <span className="text-white/70">{t("stats.bandwidth.short")}</span>{" "}
        <span className="text-white">{stats.bandwidth.toFixed(2)} kBps</span>
      </div>
      {stats.latency != undefined && (
        <div className="hidden flex-col items-center gap-1 md:flex">


@@ -266,7 +266,7 @@ export default function WebRtcPlayer({
          const bitrate =
            timeDiff > 0
              ? (bytesReceived - lastBytesReceived) / timeDiff / 1000
              : 0; // in kBps

          setStats?.({
            streamType: "WebRTC",


@@ -1,5 +1,5 @@
import { CameraConfig, FrigateConfig } from "@/types/frigateConfig";
import { useCallback, useEffect, useState, useMemo } from "react";
import useSWR from "swr";
import { LivePlayerMode, LiveStreamMetadata } from "@/types/live";
@@ -8,9 +8,54 @@ export default function useCameraLiveMode(
  windowVisible: boolean,
) {
  const { data: config } = useSWR<FrigateConfig>("config");

  // Get comma-separated list of restreamed stream names for SWR key
  const restreamedStreamsKey = useMemo(() => {
    if (!cameras || !config) return null;

    const streamNames = new Set<string>();
    cameras.forEach((camera) => {
      const isRestreamed = Object.keys(config.go2rtc.streams || {}).includes(
        Object.values(camera.live.streams)[0],
      );

      if (isRestreamed) {
        Object.values(camera.live.streams).forEach((streamName) => {
          streamNames.add(streamName);
        });
      }
    });

    return streamNames.size > 0
      ? Array.from(streamNames).sort().join(",")
      : null;
  }, [cameras, config]);

  const streamsFetcher = useCallback(async (key: string) => {
    const streamNames = key.split(",");
    const metadata: { [key: string]: LiveStreamMetadata } = {};

    await Promise.all(
      streamNames.map(async (streamName) => {
        try {
          const response = await fetch(`/api/go2rtc/streams/${streamName}`);
          if (response.ok) {
            const data = await response.json();
            metadata[streamName] = data;
          }
        } catch (error) {
          // eslint-disable-next-line no-console
          console.error(`Failed to fetch metadata for ${streamName}:`, error);
        }
      }),
    );

    return metadata;
  }, []);

  const { data: allStreamMetadata = {} } = useSWR<{
    [key: string]: LiveStreamMetadata;
  }>(restreamedStreamsKey, streamsFetcher, { revalidateOnFocus: false });

  const [preferredLiveModes, setPreferredLiveModes] = useState<{
    [key: string]: LivePlayerMode;
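The hook above derives a stable SWR key from the set of restreamed stream names, so the fetch only re-runs when that set actually changes. A standalone sketch of the key derivation; `buildStreamsKey` and the minimal input shapes are assumptions for illustration:

```typescript
type LiveStreams = { [label: string]: string };

// Build a deterministic, comma-separated key from restreamed stream names,
// mirroring the useMemo above: only cameras whose first stream appears in
// go2rtc's stream list contribute names, and the result is sorted so the
// key is stable across renders.
function buildStreamsKey(
  cameraStreams: LiveStreams[],
  go2rtcStreams: { [name: string]: unknown },
): string | null {
  const names = new Set<string>();
  for (const streams of cameraStreams) {
    const first = Object.values(streams)[0];
    if (first !== undefined && Object.keys(go2rtcStreams).includes(first)) {
      Object.values(streams).forEach((name) => names.add(name));
    }
  }
  return names.size > 0 ? Array.from(names).sort().join(",") : null;
}

// A camera whose main stream is restreamed contributes all of its streams;
// one that is not restreamed contributes nothing.
console.log(
  buildStreamsKey(
    [{ main: "cam1_main", sub: "cam1_sub" }, { main: "direct_rtsp" }],
    { cam1_main: {}, cam1_sub: {} },
  ),
); // "cam1_main,cam1_sub"
```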


@@ -17,7 +17,7 @@ export function useVideoDimensions(
  });

  const videoAspectRatio = useMemo(() => {
    return videoResolution.width / videoResolution.height || 16 / 9;
  }, [videoResolution]);

  const containerAspectRatio = useMemo(() => {
@@ -25,8 +25,8 @@ export function useVideoDimensions(
  }, [containerWidth, containerHeight]);

  const videoDimensions = useMemo(() => {
    if (!containerWidth || !containerHeight)
      return { aspectRatio: "16 / 9", width: "100%" };
    if (containerAspectRatio > videoAspectRatio) {
      const height = containerHeight;
      const width = height * videoAspectRatio;
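The dimension logic above letterboxes the video inside its container and now falls back to 16:9 before the real resolution is known. A hedged standalone sketch of the same math; `fitVideo` is an illustrative name, not the hook's API:

```typescript
// Fit a video of a given resolution inside a container, matching the hook's
// behavior: an unknown resolution (0 or NaN ratio) falls back to 16/9, and
// whichever container dimension binds determines the other via the ratio.
function fitVideo(
  containerWidth: number,
  containerHeight: number,
  videoWidth: number,
  videoHeight: number,
): { width: number; height: number } {
  const videoAspect = videoWidth / videoHeight || 16 / 9; // NaN/0 -> 16:9
  const containerAspect = containerWidth / containerHeight;
  if (containerAspect > videoAspect) {
    // container is wider than the video: height binds
    const height = containerHeight;
    return { width: height * videoAspect, height };
  }
  // container is taller (or equal): width binds
  const width = containerWidth;
  return { width, height: width / videoAspect };
}

// A 16:9 video in a square container is width-bound and letterboxed.
const fitted = fitVideo(1000, 1000, 1280, 720);
console.log(fitted.width, fitted.height.toFixed(1)); // 1000 562.5
```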


@@ -76,7 +76,11 @@ export default function Settings() {
  const isAdmin = useIsAdmin();

  const allowedViewsForViewer: SettingsType[] = [
    "ui",
    "debug",
    "notifications",
  ];
  const visibleSettingsViews = !isAdmin
    ? allowedViewsForViewer
    : allSettingsViews;
@@ -167,7 +171,7 @@ export default function Settings() {
  useSearchEffect("page", (page: string) => {
    if (allSettingsViews.includes(page as SettingsType)) {
      // Restrict viewer to UI settings
      if (!isAdmin && !allowedViewsForViewer.includes(page as SettingsType)) {
        setPage("ui");
      } else {
        setPage(page as SettingsType);
@@ -203,7 +207,7 @@ export default function Settings() {
              onValueChange={(value: SettingsType) => {
                if (value) {
                  // Restrict viewer navigation
                  if (!isAdmin && !allowedViewsForViewer.includes(value)) {
                    setPageToggle("ui");
                  } else {
                    setPageToggle(value);
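The settings change above centralizes the viewer allow-list instead of repeating `["ui", "debug"]` at each guard. A small sketch of the resolution logic; `resolvePage` is an illustrative name, and the full `allSettingsViews` contents here are assumed for the example:

```typescript
type SettingsType = string;

// Assumed view lists for illustration; only the allow-list below mirrors
// the actual change ("notifications" is now open to viewers).
const allSettingsViews: SettingsType[] = [
  "ui", "cameras", "masksAndZones", "motionTuner", "debug", "notifications",
];
const allowedViewsForViewer: SettingsType[] = ["ui", "debug", "notifications"];

// Mirror the page guard: unknown pages are ignored, and non-admins are
// bounced back to "ui" unless the page is in the viewer allow-list.
function resolvePage(
  requested: string,
  isAdmin: boolean,
  current: SettingsType = "ui",
): SettingsType {
  if (!allSettingsViews.includes(requested)) return current;
  if (!isAdmin && !allowedViewsForViewer.includes(requested)) return "ui";
  return requested;
}

console.log(resolvePage("notifications", false)); // "notifications" (now viewer-accessible)
console.log(resolvePage("cameras", false)); // "ui" (admin-only page)
```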


@@ -46,6 +46,8 @@ import { Trans, useTranslation } from "react-i18next";
import { useDateLocale } from "@/hooks/use-date-locale";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { CameraNameLabel } from "@/components/camera/CameraNameLabel";
import { useIsAdmin } from "@/hooks/use-is-admin";
import { cn } from "@/lib/utils";

const NOTIFICATION_SERVICE_WORKER = "notifications-worker.js";
@@ -64,6 +66,10 @@ export default function NotificationView({
  const { t } = useTranslation(["views/settings"]);
  const { getLocaleDocUrl } = useDocDomain();

  // roles
  const isAdmin = useIsAdmin();

  const { data: config, mutate: updateConfig } = useSWR<FrigateConfig>(
    "config",
    {
@@ -380,7 +386,11 @@ export default function NotificationView({
    <div className="flex size-full flex-col md:flex-row">
      <Toaster position="top-center" closeButton={true} />
      <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mb-0 md:mr-2 md:mt-0">
        <div
          className={cn(
            isAdmin && "grid w-full grid-cols-1 gap-4 md:grid-cols-2",
          )}
        >
          <div className="col-span-1">
            <Heading as="h3" className="my-2">
              {t("notification.notificationSettings.title")}
@@ -403,139 +413,152 @@ export default function NotificationView({
              </div>
            </div>

            {isAdmin && (
              <Form {...form}>
                <form
                  onSubmit={form.handleSubmit(onSubmit)}
                  className="mt-2 space-y-6"
                >
                  <FormField
                    control={form.control}
                    name="email"
                    render={({ field }) => (
                      <FormItem>
                        <FormLabel>{t("notification.email.title")}</FormLabel>
                        <FormControl>
                          <Input
                            className="text-md w-full border border-input bg-background p-2 hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark] md:w-72"
                            placeholder={t("notification.email.placeholder")}
                            {...field}
                          />
                        </FormControl>
                        <FormDescription>
                          {t("notification.email.desc")}
                        </FormDescription>
                        <FormMessage />
                      </FormItem>
                    )}
                  />

                  <FormField
                    control={form.control}
                    name="cameras"
                    render={({ field }) => (
                      <FormItem>
                        {allCameras && allCameras?.length > 0 ? (
                          <>
                            <div className="mb-2">
                              <FormLabel className="flex flex-row items-center text-base">
                                {t("notification.cameras.title")}
                              </FormLabel>
                            </div>
                            <div className="max-w-md space-y-2 rounded-lg bg-secondary p-4">
                              <FormField
                                control={form.control}
                                name="allEnabled"
                                render={({ field }) => (
                                  <FilterSwitch
                                    label={t("cameras.all.title", {
                                      ns: "components/filter",
                                    })}
                                    isChecked={field.value}
                                    onCheckedChange={(checked) => {
                                      setChangedValue(true);
                                      if (checked) {
                                        form.setValue("cameras", []);
                                      }
                                      field.onChange(checked);
                                    }}
                                  />
                                )}
                              />
                              {allCameras?.map((camera) => (
                                <FilterSwitch
                                  key={camera.name}
                                  label={camera.name}
                                  isCameraName={true}
                                  isChecked={field.value?.includes(
                                    camera.name,
                                  )}
                                  onCheckedChange={(checked) => {
                                    setChangedValue(true);
                                    let newCameras;
                                    if (checked) {
                                      newCameras = [
                                        ...field.value,
                                        camera.name,
                                      ];
                                    } else {
                                      newCameras = field.value?.filter(
                                        (value) => value !== camera.name,
                                      );
                                    }
                                    field.onChange(newCameras);
                                    form.setValue("allEnabled", false);
                                  }}
                                />
                              ))}
                            </div>
                          </>
                        ) : (
                          <div className="font-normal text-destructive">
                            {t("notification.cameras.noCameras")}
                          </div>
                        )}
                        <FormMessage />
                        <FormDescription>
                          {t("notification.cameras.desc")}
                        </FormDescription>
                      </FormItem>
                    )}
                  />

                  <div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[50%]">
                    <Button
                      className="flex flex-1"
                      aria-label={t("button.cancel", { ns: "common" })}
                      onClick={onCancel}
                      type="button"
                    >
                      {t("button.cancel", { ns: "common" })}
                    </Button>
                    <Button
                      variant="select"
                      disabled={isLoading}
                      className="flex flex-1"
                      aria-label={t("button.save", { ns: "common" })}
                      type="submit"
                    >
                      {isLoading ? (
                        <div className="flex flex-row items-center gap-2">
                          <ActivityIndicator />
                          <span>{t("button.saving", { ns: "common" })}</span>
                        </div>
                      ) : (
                        t("button.save", { ns: "common" })
                      )}
                    </Button>
                  </div>
                </form>
              </Form>
            )}
          </div>
          <div className="col-span-1">
            <div className="mt-4 gap-2 space-y-6">
              <div
                className={cn(
                  isAdmin && "flex flex-col gap-2 md:max-w-[50%]",
                )}
              >
                <Separator
                  className={cn(
                    "my-2 flex bg-secondary",
                    isAdmin && "md:hidden",
                  )}
                />
                <Heading as="h4" className={cn(isAdmin ? "my-2" : "my-4")}>
                  {t("notification.deviceSpecific")}
                </Heading>
                <Button
@@ -581,7 +604,7 @@ export default function NotificationView({
                    ? t("notification.unregisterDevice")
                    : t("notification.registerDevice")}
                </Button>
                {isAdmin && registration != null && registration.active && (
                  <Button
                    aria-label={t("notification.sendTestNotification")}
                    onClick={() => sendTestNotification("notification_test")}
@@ -591,7 +614,7 @@ export default function NotificationView({
                )}
              </div>
            </div>
            {isAdmin && notificationCameras.length > 0 && (
              <div className="mt-4 gap-2 space-y-6">
                <div className="space-y-3">
                  <Separator className="my-2 flex bg-secondary" />