Merge remote-tracking branch 'origin/master' into dev

Commit a8b7e5dd24
@@ -70,9 +70,16 @@ fi

 # arch specific packages
 if [[ "${TARGETARCH}" == "amd64" ]]; then
+    # Install non-free version of i965 driver
+    sed -i -E "/^Components: main$/s/main/main contrib non-free non-free-firmware/" "/etc/apt/sources.list.d/debian.sources" \
+        && apt-get -qq update \
+        && apt-get install --no-install-recommends --no-install-suggests -y i965-va-driver-shaders \
+        && sed -i -E "/^Components: main contrib non-free non-free-firmware$/s/main contrib non-free non-free-firmware/main/" "/etc/apt/sources.list.d/debian.sources" \
+        && apt-get update

     # install amd / intel-i965 driver packages
     apt-get -qq install --no-install-recommends --no-install-suggests -y \
-        i965-va-driver intel-gpu-tools onevpl-tools \
+        intel-gpu-tools onevpl-tools \
         libva-drm2 \
         mesa-va-drivers radeontop

@@ -144,7 +144,13 @@ WEB Digest Algorithm - MD5

 ### Reolink Cameras

-Reolink has older cameras (ex: 410 & 520) as well as newer cameras (ex: 520a & 511wa) which support different subsets of options. In both cases using the http stream is recommended.
+Reolink has many different camera models with inconsistently supported features and behavior. The table below shows a summary of the various features and recommendations.
+
+| Camera Resolution | Camera Generation | Recommended Stream Type | Additional Notes |
+| ----------------- | ----------------- | ----------------------- | ---------------- |
+| 5MP or lower | All | http-flv | Stream is h264 |
+| 6MP or higher | Latest (ex: Duo3, CX-8##) | http-flv with ffmpeg 8.0, or rtsp | This uses the new http-flv-enhanced over H265, which requires ffmpeg 8.0 |
+| 6MP or higher | Older (ex: RLC-8##) | rtsp | |

-Frigate works much better with newer Reolink cameras that are set up with the below options:
+If available, recommended settings are:

@@ -157,12 +163,6 @@ According to [this discussion](https://github.com/blakeblackshear/frigate/issues

 Cameras connected via a Reolink NVR can be connected with the http stream; use `channel[0..15]` in the stream URL for the additional channels.
-The main stream can also be set up via RTSP, but this isn't always reliable on all hardware versions. The example configuration works with the oldest HW version RLN16-410 device with multiple types of cameras.
-
-:::warning
-
-The below configuration only works for Reolink cameras with a stream resolution of 5MP or lower; 8MP+ cameras need to use RTSP, as http-flv is not supported in this case.
-
-:::

 ```yaml
 go2rtc:
   streams:
@@ -111,10 +111,7 @@ The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.or

| Hanwha XNP-6550RH | ✅ | ❌ | |
| Hikvision | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
| Hikvision DS-2DE3A404IWG-E/W | ✅ | ✅ | |
| Reolink 511WA | ✅ | ❌ | Zoom only |
| Reolink E1 Pro | ✅ | ❌ | |
| Reolink E1 Zoom | ✅ | ❌ | |
| Reolink RLC-823A 16x | ✅ | ❌ | |
| Reolink | ✅ | ❌ | |
| Speco O8P32X | ✅ | ❌ | |
| Sunba 405-D20X | ✅ | ❌ | Incomplete ONVIF support reported on the original and 4K models. All models are suspected incompatible. |
| Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |

@@ -21,8 +21,7 @@ See [the hwaccel docs](/configuration/hardware_acceleration_video.md) for more i

| preset-nvidia | Nvidia GPU | |
| preset-jetson-h264 | Nvidia Jetson with h264 stream | |
| preset-jetson-h265 | Nvidia Jetson with h265 stream | |
| preset-rk-h264 | Rockchip MPP with h264 stream | Use image with \*-rk suffix and privileged mode |
| preset-rk-h265 | Rockchip MPP with h265 stream | Use image with \*-rk suffix and privileged mode |
| preset-rkmpp | Rockchip MPP | Use image with \*-rk suffix and privileged mode |

### Input Args Presets

@@ -9,7 +9,6 @@ It is highly recommended to use a GPU for hardware acceleration video decoding i

Depending on your system, these parameters may not be compatible. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro

# Object Detection

## Raspberry Pi 3/4

@@ -229,7 +228,7 @@ Additional configuration is needed for the Docker container to be able to access

 services:
   frigate:
     ...
-    image: ghcr.io/blakeblackshear/frigate:stable
+    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
     deploy: # <------------- Add this section
       resources:
         reservations:

@@ -247,7 +246,7 @@ docker run -d \

   --name frigate \
   ...
   --gpus=all \
-  ghcr.io/blakeblackshear/frigate:stable
+  ghcr.io/blakeblackshear/frigate:stable-tensorrt
 ```

 ### Setup Decoder

@@ -499,14 +499,13 @@ Also AMD/ROCm does not "officially" support integrated GPUs. It still does work

 For the rocm frigate build there is some automatic detection:

 - gfx90c -> 9.0.0
 - gfx1031 -> 10.3.0
 - gfx1103 -> 11.0.0

-If you have something else you might need to override the `HSA_OVERRIDE_GFX_VERSION` at Docker launch. Suppose the version you want is `9.0.0`, then you should configure it from the command line as:
+If you have something else you might need to override the `HSA_OVERRIDE_GFX_VERSION` at Docker launch. Suppose the version you want is `10.0.0`, then you should configure it from the command line as:

 ```bash
-$ docker run -e HSA_OVERRIDE_GFX_VERSION=9.0.0 \
+$ docker run -e HSA_OVERRIDE_GFX_VERSION=10.0.0 \
     ...
 ```

@@ -517,7 +516,7 @@ services:

   frigate:

     environment:
-      HSA_OVERRIDE_GFX_VERSION: "9.0.0"
+      HSA_OVERRIDE_GFX_VERSION: "10.0.0"
 ```

 Figuring out what version you need can be complicated, as you can't tell the chipset name and driver from the AMD brand name.

@@ -43,7 +43,7 @@ The following ports are used by Frigate and can be mapped via docker as required

 | `8971` | Authenticated UI and API access without TLS. Reverse proxies should use this port. |
 | `5000` | Internal unauthenticated UI and API access. Access to this port should be limited. Intended to be used within the docker network for services that integrate with Frigate. |
 | `8554` | RTSP restreaming. By default, these streams are unauthenticated. Authentication can be configured in go2rtc section of config. |
-| `8555` | WebRTC connections for low latency live views. |
+| `8555` | WebRTC connections for cameras with two-way talk support. |
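Port `5000` is meant only for trusted, in-network integrations. As a minimal sketch (not part of the docs above), a companion service on the same Docker network could query the unauthenticated API like this, assuming the Frigate container is reachable under the compose service name `frigate`:

```python
# Minimal sketch: poll Frigate's internal, unauthenticated API on port 5000
# from another service on the same Docker network. The hostname "frigate" is
# an assumed compose service name, not something Frigate mandates.
import requests

FRIGATE = "http://frigate:5000"  # internal-only; do not expose this port

def fetch_stats() -> dict:
    # /api/stats returns runtime metrics (cameras, detectors, service) as JSON
    resp = requests.get(f"{FRIGATE}/api/stats", timeout=5)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(sorted(fetch_stats().keys()))
```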
#### Common Docker Compose storage configurations
@@ -15,10 +15,10 @@ At a high level, there are five processing steps that could be applied to a came

 %%{init: {"themeVariables": {"edgeLabelBackground": "transparent"}}}%%

 flowchart LR
-    Feed(Feed\nacquisition) --> Decode(Video\ndecoding)
-    Decode --> Motion(Motion\ndetection)
-    Motion --> Object(Object\ndetection)
-    Feed --> Recording(Recording\nand\nvisualization)
+    Feed(Feed acquisition) --> Decode(Video decoding)
+    Decode --> Motion(Motion detection)
+    Motion --> Object(Object detection)
+    Feed --> Recording(Recording and visualization)
     Motion --> Recording
     Object --> Recording
 ```

@@ -114,7 +114,7 @@ section.

 ## Next steps

 1. If the stream you added to go2rtc is also used by Frigate for the `record` or `detect` role, you can migrate your config to pull from the RTSP restream to reduce the number of connections to your camera as shown [here](/configuration/restream#reduce-connections-to-camera).
-2. You may also prefer to [setup WebRTC](/configuration/live#webrtc-extra-configuration) for slightly lower latency than MSE. Note that WebRTC only supports h264 and specific audio formats and may require opening ports on your router.
+2. You can [set up WebRTC](/configuration/live#webrtc-extra-configuration) if your camera supports two-way talk. Note that WebRTC only supports specific audio formats and may require opening ports on your router.

 ## Important considerations

docs/static/frigate-api.yaml (vendored, +8)

@@ -1759,6 +1759,10 @@ paths:
       - name: include_thumbnails
         in: query
         required: false
+        description: >
+          Deprecated. Thumbnail data is no longer included in the response.
+          Use the /api/events/:event_id/thumbnail.:extension endpoint instead.
+        deprecated: true
         schema:
           anyOf:
             - type: integer

@@ -1973,6 +1977,10 @@ paths:
       - name: include_thumbnails
         in: query
         required: false
+        description: >
+          Deprecated. Thumbnail data is no longer included in the response.
+          Use the /api/events/:event_id/thumbnail.:extension endpoint instead.
+        deprecated: true
         schema:
           anyOf:
             - type: integer
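Migrating off the deprecated parameter is mechanical: stop passing `include_thumbnails` and fetch the image from the dedicated endpoint. A hedged sketch using the `requests` library, with the base URL as a placeholder:

```python
# Sketch of migrating off the deprecated include_thumbnails flag: request the
# thumbnail from its dedicated endpoint instead of expecting inline data.
import requests

BASE = "http://frigate:5000"  # placeholder; use your Frigate base URL

def event_thumbnail(event_id: str, extension: str = "webp") -> bytes:
    # /api/events/:event_id/thumbnail.:extension serves the image bytes directly
    resp = requests.get(f"{BASE}/api/events/{event_id}/thumbnail.{extension}", timeout=5)
    resp.raise_for_status()
    return resp.content  # raw image data; Content-Type matches the extension
```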
@@ -218,7 +218,7 @@ async def register_face(request: Request, name: str, file: UploadFile):
         )

     context: EmbeddingsContext = request.app.embeddings
-    result = context.register_face(name, await file.read())
+    result = None if context is None else context.register_face(name, await file.read())

     if not isinstance(result, dict):
         return JSONResponse(

@@ -1,6 +1,6 @@
 from typing import Optional

-from pydantic import BaseModel
+from pydantic import BaseModel, Field

 DEFAULT_TIME_RANGE = "00:00,24:00"

@@ -21,7 +21,14 @@ class EventsQueryParams(BaseModel):
     has_clip: Optional[int] = None
     has_snapshot: Optional[int] = None
     in_progress: Optional[int] = None
-    include_thumbnails: Optional[int] = 1
+    include_thumbnails: Optional[int] = Field(
+        1,
+        description=(
+            "Deprecated. Thumbnail data is no longer included in the response. "
+            "Use the /api/events/:event_id/thumbnail.:extension endpoint instead."
+        ),
+        deprecated=True,
+    )
     favorites: Optional[int] = None
     min_score: Optional[float] = None
     max_score: Optional[float] = None

@@ -40,7 +47,14 @@ class EventsSearchQueryParams(BaseModel):
     query: Optional[str] = None
     event_id: Optional[str] = None
     search_type: Optional[str] = "thumbnail"
-    include_thumbnails: Optional[int] = 1
+    include_thumbnails: Optional[int] = Field(
+        1,
+        description=(
+            "Deprecated. Thumbnail data is no longer included in the response. "
+            "Use the /api/events/:event_id/thumbnail.:extension endpoint instead."
+        ),
+        deprecated=True,
+    )
     limit: Optional[int] = 50
     cameras: Optional[str] = "all"
     labels: Optional[str] = "all"

@@ -11,6 +11,11 @@ class Extension(str, Enum):
     jpg = "jpg"
     jpeg = "jpeg"

+    def get_mime_type(self) -> str:
+        if self in (Extension.jpg, Extension.jpeg):
+            return "image/jpeg"
+        return f"image/{self.value}"


 class MediaLatestFrameQueryParams(BaseModel):
     bbox: Optional[int] = None
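A quick illustration of what the helper added above returns, assuming the enum also defines `png` and `webp` members, as the media handlers below imply:

```python
# Illustrative only: jpg and jpeg normalize to image/jpeg, while other
# members fall through to image/<value>.
assert Extension.jpg.get_mime_type() == "image/jpeg"
assert Extension.jpeg.get_mime_type() == "image/jpeg"
assert Extension.webp.get_mime_type() == "image/webp"
assert Extension.png.get_mime_type() == "image/png"
```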
@@ -145,15 +145,13 @@ def latest_frame(
         "regions": params.regions,
     }
     quality = params.quality
-    mime_type = extension

-    if extension == "png":
+    if extension == Extension.png:
         quality_params = None
-    elif extension == "webp":
+    elif extension == Extension.webp:
         quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), quality]
-    else:
+    else:  # jpg or jpeg
         quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), quality]
-        mime_type = "jpeg"

     if camera_name in request.app.frigate_config.cameras:
         frame = frame_processor.get_current_frame(camera_name, draw_options)

@@ -196,18 +194,21 @@ def latest_frame(

         frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)

-        _, img = cv2.imencode(f".{extension}", frame, quality_params)
+        _, img = cv2.imencode(f".{extension.value}", frame, quality_params)
         return Response(
             content=img.tobytes(),
-            media_type=f"image/{mime_type}",
+            media_type=extension.get_mime_type(),
             headers={
-                "Content-Type": f"image/{mime_type}",
                 "Cache-Control": "no-store"
                 if not params.store
                 else "private, max-age=60",
             },
         )
-    elif camera_name == "birdseye" and request.app.frigate_config.birdseye.restream:
+    elif (
+        camera_name == "birdseye"
+        and request.app.frigate_config.birdseye.enabled
+        and request.app.frigate_config.birdseye.restream
+    ):
         frame = cv2.cvtColor(
             frame_processor.get_current_frame(camera_name),
             cv2.COLOR_YUV2BGR_I420,

@@ -218,12 +219,11 @@ def latest_frame(

         frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)

-        _, img = cv2.imencode(f".{extension}", frame, quality_params)
+        _, img = cv2.imencode(f".{extension.value}", frame, quality_params)
         return Response(
             content=img.tobytes(),
-            media_type=f"image/{mime_type}",
+            media_type=extension.get_mime_type(),
             headers={
-                "Content-Type": f"image/{mime_type}",
                 "Cache-Control": "no-store"
                 if not params.store
                 else "private, max-age=60",

@@ -812,7 +812,10 @@ def vod_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: st
     "/vod/event/{event_id}",
     description="Returns an HLS playlist for the specified object. Append /master.m3u8 or /index.m3u8 for HLS playback.",
 )
-def vod_event(event_id: str):
+def vod_event(
+    event_id: str,
+    padding: int = Query(0, description="Padding to apply to the vod."),
+):
     try:
         event: Event = Event.get(Event.id == event_id)
     except DoesNotExist:

@@ -835,13 +838,13 @@ def vod_event(event_id: str):
             status_code=404,
         )

     clip_path = os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.mp4")

     if not os.path.isfile(clip_path):
         end_ts = (
-            datetime.now().timestamp() if event.end_time is None else event.end_time
+            datetime.now().timestamp()
+            if event.end_time is None
+            else (event.end_time + padding)
         )
-        vod_response = vod_ts(event.camera, event.start_time, end_ts)
+        vod_response = vod_ts(event.camera, event.start_time - padding, end_ts)

         # If the recordings are not found and the event started more than 5 minutes ago, set has_clip to false
         if (
             event.start_time < datetime.now().timestamp() - 300

@@ -850,17 +853,8 @@ def vod_event(event_id: str):
             and vod_response[1] == 404
         ):
             Event.update(has_clip=False).where(Event.id == event_id).execute()
-            return vod_response
-
-    duration = int((event.end_time - event.start_time) * 1000)
-    return JSONResponse(
-        content={
-            "cache": True,
-            "discontinuity": False,
-            "durations": [duration],
-            "sequences": [{"clips": [{"type": "source", "path": clip_path}]}],
-        }
-    )
+        return vod_response

 @router.get(
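In short, `/vod/event/<event_id>` now accepts a `padding` query parameter that widens the returned window to `start - padding .. end + padding`. A hypothetical client call, assuming the parameter is also honored when appending `/master.m3u8`:

```python
# Hypothetical usage sketch for the new padding parameter: ask for the event
# VOD with 10 extra seconds before the start and after the end.
import requests

BASE = "http://frigate:5000"  # placeholder

def event_playlist(event_id: str, padding_s: int = 10) -> str:
    url = f"{BASE}/vod/event/{event_id}/master.m3u8"
    resp = requests.get(url, params={"padding": padding_s}, timeout=5)
    resp.raise_for_status()
    return resp.text  # HLS playlist covering start - padding .. end + padding
```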
@@ -941,7 +935,7 @@ def event_snapshot(

 def event_thumbnail(
     request: Request,
     event_id: str,
-    extension: str,
+    extension: Extension,
     max_cache_age: int = Query(
         2592000, description="Max cache age in seconds. Default 30 days in seconds."
     ),

@@ -966,7 +960,7 @@ def event_thumbnail(
         if event_id in camera_state.tracked_objects:
             tracked_obj = camera_state.tracked_objects.get(event_id)
             if tracked_obj is not None:
-                thumbnail_bytes = tracked_obj.get_thumbnail(extension)
+                thumbnail_bytes = tracked_obj.get_thumbnail(extension.value)
     except Exception:
         return JSONResponse(
             content={"success": False, "message": "Event not found"},

@@ -994,23 +988,21 @@ def event_thumbnail(
         )

     quality_params = None

-    if extension == "jpg" or extension == "jpeg":
+    if extension in (Extension.jpg, Extension.jpeg):
         quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), 70]
-    elif extension == "webp":
+    elif extension == Extension.webp:
         quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), 60]

-    _, img = cv2.imencode(f".{extension}", thumbnail, quality_params)
+    _, img = cv2.imencode(f".{extension.value}", thumbnail, quality_params)
     thumbnail_bytes = img.tobytes()

     return Response(
         thumbnail_bytes,
-        media_type=f"image/{extension}",
+        media_type=extension.get_mime_type(),
         headers={
             "Cache-Control": f"private, max-age={max_cache_age}"
             if event_complete
             else "no-store",
-            "Content-Type": f"image/{extension}",
         },
     )

@@ -1221,7 +1213,11 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False

 @router.get("/events/{event_id}/clip.mp4")
-def event_clip(request: Request, event_id: str):
+def event_clip(
+    request: Request,
+    event_id: str,
+    padding: int = Query(0, description="Padding to apply to clip."),
+):
     try:
         event: Event = Event.get(Event.id == event_id)
     except DoesNotExist:

@@ -1234,8 +1230,12 @@ def event_clip(request: Request, event_id: str):
             content={"success": False, "message": "Clip not available"}, status_code=404
         )

-    end_ts = datetime.now().timestamp() if event.end_time is None else event.end_time
-    return recording_clip(request, event.camera, event.start_time, end_ts)
+    end_ts = (
+        datetime.now().timestamp()
+        if event.end_time is None
+        else event.end_time + padding
+    )
+    return recording_clip(request, event.camera, event.start_time - padding, end_ts)

 @router.get("/events/{event_id}/preview.gif")
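The clip endpoint gains the same knob. A sketch of downloading a padded clip, assuming the route is served under the usual `/api` prefix:

```python
# Sketch: download an event clip with 5 extra seconds of context on each side.
import requests

def download_padded_clip(base: str, event_id: str, dest: str, padding_s: int = 5) -> None:
    resp = requests.get(
        f"{base}/api/events/{event_id}/clip.mp4",
        params={"padding": padding_s},
        timeout=60,
    )
    resp.raise_for_status()
    with open(dest, "wb") as f:
        f.write(resp.content)  # MP4 covering start - padding .. end + padding
```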
@@ -61,6 +61,7 @@ class FfmpegConfig(FrigateBaseModel):
     retry_interval: float = Field(
         default=10.0,
         title="Time in seconds to wait before FFmpeg retries connecting to the camera.",
+        gt=0.0,
     )
     apple_compatibility: bool = Field(
         default=False,

@@ -158,6 +158,9 @@ class ModelConfig(BaseModel):
         self.input_pixel_format = model_info["pixelFormat"]
         self.model_type = model_info["type"]

+        if model_info.get("inputDataType"):
+            self.input_dtype = model_info["inputDataType"]
+
         # generate list of attribute labels
         self.attributes_map = {
             **model_info.get("attributes", DEFAULT_ATTRIBUTE_LABEL_MAP),

@@ -182,10 +182,15 @@ Rules:
         event: Event,
     ) -> Optional[str]:
         """Generate a description for the frame."""
-        prompt = camera_config.objects.genai.object_prompts.get(
-            event.label,
-            camera_config.objects.genai.prompt,
-        ).format(**model_to_dict(event))
+        try:
+            prompt = camera_config.genai.object_prompts.get(
+                event.label,
+                camera_config.genai.prompt,
+            ).format(**model_to_dict(event))
+        except KeyError as e:
+            logger.error(f"Invalid key in GenAI prompt: {e}")
+            return None

         logger.debug(f"Sending images to genai provider with prompt: {prompt}")
         return self._send(prompt, thumbnails)
@@ -372,12 +372,13 @@ class PtzAutoTracker:
         logger.info(f"Camera calibration for {camera} in progress")

         # zoom levels test
-        self.zoom_time[camera] = 0

         if (
             self.config.cameras[camera].onvif.autotracking.zooming
             != ZoomingModeEnum.disabled
         ):
+            logger.info(f"Calibration for {camera} in progress: 0% complete")
+            self.zoom_time[camera] = 0

             for i in range(2):
                 # absolute move to 0 - fully zoomed out

@@ -1332,7 +1333,11 @@ class PtzAutoTracker:

         if camera_config.onvif.autotracking.enabled:
             if not self.autotracker_init[camera]:
-                self._autotracker_setup(camera_config, camera)
+                future = asyncio.run_coroutine_threadsafe(
+                    self._autotracker_setup(camera_config, camera), self.onvif.loop
+                )
+                # Wait for the coroutine to complete
+                future.result()

         if self.calibrating[camera]:
             logger.debug(f"{camera}: Calibrating camera")

@@ -1479,7 +1484,8 @@ class PtzAutoTracker:
         self.tracked_object[camera] = None
         self.tracked_object_history[camera].clear()

-        self.ptz_metrics[camera].motor_stopped.wait()
+        while not self.ptz_metrics[camera].motor_stopped.is_set():
+            await self.onvif.get_camera_status(camera)
         logger.debug(
             f"{camera}: Time is {self.ptz_metrics[camera].frame_time.value}, returning to preset: {autotracker_config.return_preset}"
         )

@@ -1489,7 +1495,7 @@ class PtzAutoTracker:
         )

         # update stored zoom level from preset
-        if not self.ptz_metrics[camera].motor_stopped.is_set():
+        while not self.ptz_metrics[camera].motor_stopped.is_set():
             await self.onvif.get_camera_status(camera)

         self.ptz_metrics[camera].tracking_active.clear()

@@ -50,6 +50,8 @@ class OnvifController:
         self.config = config
         self.ptz_metrics = ptz_metrics

+        self.status_locks: dict[str, asyncio.Lock] = {}
+
         # Create a dedicated event loop and run it in a separate thread
         self.loop = asyncio.new_event_loop()
         self.loop_thread = threading.Thread(target=self._run_event_loop, daemon=True)

@@ -61,6 +63,7 @@ class OnvifController:
                 continue
             if cam.onvif.host:
                 self.camera_configs[cam_name] = cam
+                self.status_locks[cam_name] = asyncio.Lock()

         asyncio.run_coroutine_threadsafe(self._init_cameras(), self.loop)

@@ -827,6 +830,7 @@ class OnvifController:
         return False

     async def get_camera_status(self, camera_name: str) -> None:
+        async with self.status_locks[camera_name]:
             if camera_name not in self.cams.keys():
                 logger.error(f"ONVIF is not configured for {camera_name}")
                 return

@@ -865,7 +869,9 @@ class OnvifController:
                 f"{camera_name}: Pan/tilt status: {pan_tilt_status}, Zoom status: {zoom_status}"
             )

-            if pan_tilt_status == "IDLE" and (zoom_status is None or zoom_status == "IDLE"):
+            if pan_tilt_status == "IDLE" and (
+                zoom_status is None or zoom_status == "IDLE"
+            ):
                 self.cams[camera_name]["active"] = False
                 if not self.ptz_metrics[camera_name].motor_stopped.is_set():
                     self.ptz_metrics[camera_name].motor_stopped.set()

@@ -924,7 +930,9 @@ class OnvifController:
                 self.ptz_metrics[camera_name].stop_time.value = self.ptz_metrics[
                     camera_name
                 ].frame_time.value
-                logger.warning(f"Camera {camera_name} is still in ONVIF 'MOVING' status.")
+                logger.warning(
+                    f"Camera {camera_name} is still in ONVIF 'MOVING' status."
+                )

     def close(self) -> None:
         """Gracefully shut down the ONVIF controller."""
|
||||
<Portal>
|
||||
<SubItemContent
|
||||
className={
|
||||
isDesktop ? "" : "w-[92%] rounded-lg md:rounded-2xl"
|
||||
isDesktop
|
||||
? ""
|
||||
: "scrollbar-container max-h-[75dvh] w-[92%] overflow-y-scroll rounded-lg md:rounded-2xl"
|
||||
}
|
||||
>
|
||||
<span tabIndex={0} className="sr-only" />
|
||||
|
@ -433,6 +433,7 @@ function CustomTimeSelector({
|
||||
className={`mt-3 flex items-center rounded-lg bg-secondary text-secondary-foreground ${isDesktop ? "mx-8 gap-2 px-2" : "pl-2"}`}
|
||||
>
|
||||
<FaCalendarAlt />
|
||||
<div className="flex flex-wrap items-center">
|
||||
<Popover
|
||||
open={startOpen}
|
||||
onOpenChange={(open) => {
|
||||
@ -565,6 +566,7 @@ function CustomTimeSelector({
|
||||
</PopoverContent>
|
||||
</Popover>
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
|
@ -1,4 +1,10 @@
|
||||
import React, { useState, useRef, useEffect, useCallback } from "react";
|
||||
import React, {
|
||||
useState,
|
||||
useRef,
|
||||
useEffect,
|
||||
useCallback,
|
||||
useMemo,
|
||||
} from "react";
|
||||
import { useVideoDimensions } from "@/hooks/use-video-dimensions";
|
||||
import HlsVideoPlayer from "./HlsVideoPlayer";
|
||||
import ActivityIndicator from "../indicators/activity-indicator";
|
||||
@ -89,6 +95,12 @@ export function GenericVideoPlayer({
|
||||
},
|
||||
);
|
||||
|
||||
const hlsSource = useMemo(() => {
|
||||
return {
|
||||
playlist: source,
|
||||
};
|
||||
}, [source]);
|
||||
|
||||
return (
|
||||
<div ref={containerRef} className="relative flex h-full w-full flex-col">
|
||||
<div className="relative flex flex-grow items-center justify-center">
|
||||
@ -107,9 +119,7 @@ export function GenericVideoPlayer({
|
||||
>
|
||||
<HlsVideoPlayer
|
||||
videoRef={videoRef}
|
||||
currentSource={{
|
||||
playlist: source,
|
||||
}}
|
||||
currentSource={hlsSource}
|
||||
hotKeys
|
||||
visible
|
||||
frigateControls={false}
|
||||
|
@ -123,13 +123,6 @@ export default function HlsVideoPlayer({
|
||||
return;
|
||||
}
|
||||
|
||||
// we must destroy the hlsRef every time the source changes
|
||||
// so that we can create a new HLS instance with startPosition
|
||||
// set at the optimal point in time
|
||||
if (hlsRef.current) {
|
||||
hlsRef.current.destroy();
|
||||
}
|
||||
|
||||
hlsRef.current = new Hls({
|
||||
maxBufferLength: 10,
|
||||
maxBufferSize: 20 * 1000 * 1000,
|
||||
@ -138,6 +131,15 @@ export default function HlsVideoPlayer({
|
||||
hlsRef.current.attachMedia(videoRef.current);
|
||||
hlsRef.current.loadSource(currentSource.playlist);
|
||||
videoRef.current.playbackRate = currentPlaybackRate;
|
||||
|
||||
return () => {
|
||||
// we must destroy the hlsRef every time the source changes
|
||||
// so that we can create a new HLS instance with startPosition
|
||||
// set at the optimal point in time
|
||||
if (hlsRef.current) {
|
||||
hlsRef.current.destroy();
|
||||
}
|
||||
}
|
||||
}, [videoRef, hlsRef, useHlsCompat, currentSource]);
|
||||
|
||||
// state handling
|
||||
|
@ -164,7 +164,7 @@ export default function JSMpegPlayer({
|
||||
statsIntervalRef.current = setInterval(() => {
|
||||
const currentTimestamp = Date.now();
|
||||
const timeDiff = (currentTimestamp - lastTimestampRef.current) / 1000; // in seconds
|
||||
const bitrate = (bytesReceivedRef.current * 8) / timeDiff / 1000; // in kbps
|
||||
const bitrate = bytesReceivedRef.current / timeDiff / 1000; // in kBps
|
||||
|
||||
setStats?.({
|
||||
streamType: "jsmpeg",
|
||||
|
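The stats change switches from bits to bytes: dropping the `* 8` keeps the value in bytes per second, and dividing by 1000 yields kilobytes per second (kBps). A small sanity check of the arithmetic:

```python
# Unit check for the kbps -> kBps change: no *8 (stay in bytes), /1000 for kilo.
def kilobytes_per_second(bytes_received: int, seconds: float) -> float:
    return bytes_received / seconds / 1000

# 500,000 bytes over 2 s -> 250 kBps (previously reported as 2000 kbps)
assert kilobytes_per_second(500_000, 2.0) == 250.0
```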
@@ -82,7 +82,7 @@ export default function LivePlayer({

   const [stats, setStats] = useState<PlayerStatsType>({
     streamType: "-",
-    bandwidth: 0, // in kbps
+    bandwidth: 0, // in kBps
     latency: undefined, // in seconds
     totalFrames: 0,
     droppedFrames: undefined,

@@ -338,7 +338,7 @@ function MSEPlayer({
       // console.debug("VideoRTC.buffer", b.byteLength, bufLen);
     } else {
       try {
-        sb?.appendBuffer(data);
+        sb?.appendBuffer(data as ArrayBuffer);
       } catch (e) {
         // no-op
       }

@@ -592,7 +592,7 @@ function MSEPlayer({
       const now = Date.now();
       const bytesLoaded = totalBytesLoaded.current;
       const timeElapsed = (now - lastTimestamp) / 1000; // seconds
-      const bandwidth = (bytesLoaded - lastLoadedBytes) / timeElapsed / 1024; // kbps
+      const bandwidth = (bytesLoaded - lastLoadedBytes) / timeElapsed / 1000; // kBps

       lastLoadedBytes = bytesLoaded;
       lastTimestamp = now;

@@ -17,7 +17,7 @@ export function PlayerStats({ stats, minimal }: PlayerStatsProps) {
       </p>
       <p>
         <span className="text-white/70">{t("stats.bandwidth.title")}</span>{" "}
-        <span className="text-white">{stats.bandwidth.toFixed(2)} kbps</span>
+        <span className="text-white">{stats.bandwidth.toFixed(2)} kBps</span>
       </p>
       {stats.latency != undefined && (
         <p>

@@ -66,7 +66,7 @@ export function PlayerStats({ stats, minimal }: PlayerStatsProps) {
       </div>
       <div className="flex flex-col items-center gap-1">
         <span className="text-white/70">{t("stats.bandwidth.short")}</span>{" "}
-        <span className="text-white">{stats.bandwidth.toFixed(2)} kbps</span>
+        <span className="text-white">{stats.bandwidth.toFixed(2)} kBps</span>
       </div>
       {stats.latency != undefined && (
         <div className="hidden flex-col items-center gap-1 md:flex">

@@ -266,7 +266,7 @@ export default function WebRtcPlayer({
       const bitrate =
         timeDiff > 0
           ? (bytesReceived - lastBytesReceived) / timeDiff / 1000
-          : 0; // in kbps
+          : 0; // in kBps

       setStats?.({
         streamType: "WebRTC",

@@ -1,5 +1,5 @@
 import { CameraConfig, FrigateConfig } from "@/types/frigateConfig";
-import { useCallback, useEffect, useState } from "react";
+import { useCallback, useEffect, useState, useMemo } from "react";
 import useSWR from "swr";
 import { LivePlayerMode, LiveStreamMetadata } from "@/types/live";

@@ -8,9 +8,54 @@ export default function useCameraLiveMode(
   windowVisible: boolean,
 ) {
   const { data: config } = useSWR<FrigateConfig>("config");
-  const { data: allStreamMetadata } = useSWR<{
-    [key: string]: LiveStreamMetadata;
-  }>(config ? "go2rtc/streams" : null, { revalidateOnFocus: false });
+
+  // Get comma-separated list of restreamed stream names for SWR key
+  const restreamedStreamsKey = useMemo(() => {
+    if (!cameras || !config) return null;
+
+    const streamNames = new Set<string>();
+    cameras.forEach((camera) => {
+      const isRestreamed = Object.keys(config.go2rtc.streams || {}).includes(
+        Object.values(camera.live.streams)[0],
+      );
+
+      if (isRestreamed) {
+        Object.values(camera.live.streams).forEach((streamName) => {
+          streamNames.add(streamName);
+        });
+      }
+    });
+
+    return streamNames.size > 0
+      ? Array.from(streamNames).sort().join(",")
+      : null;
+  }, [cameras, config]);
+
+  const streamsFetcher = useCallback(async (key: string) => {
+    const streamNames = key.split(",");
+    const metadata: { [key: string]: LiveStreamMetadata } = {};
+
+    await Promise.all(
+      streamNames.map(async (streamName) => {
+        try {
+          const response = await fetch(`/api/go2rtc/streams/${streamName}`);
+          if (response.ok) {
+            const data = await response.json();
+            metadata[streamName] = data;
+          }
+        } catch (error) {
+          // eslint-disable-next-line no-console
+          console.error(`Failed to fetch metadata for ${streamName}:`, error);
+        }
+      }),
+    );

+    return metadata;
+  }, []);
+
+  const { data: allStreamMetadata = {} } = useSWR<{
+    [key: string]: LiveStreamMetadata;
+  }>(restreamedStreamsKey, streamsFetcher, { revalidateOnFocus: false });

   const [preferredLiveModes, setPreferredLiveModes] = useState<{
     [key: string]: LivePlayerMode;
@@ -17,7 +17,7 @@ export function useVideoDimensions(
   });

   const videoAspectRatio = useMemo(() => {
-    return videoResolution.width / videoResolution.height;
+    return videoResolution.width / videoResolution.height || 16 / 9;
   }, [videoResolution]);

   const containerAspectRatio = useMemo(() => {

@@ -25,8 +25,8 @@ export function useVideoDimensions(
   }, [containerWidth, containerHeight]);

   const videoDimensions = useMemo(() => {
-    if (!containerWidth || !containerHeight || !videoAspectRatio)
-      return { width: "100%", height: "100%" };
+    if (!containerWidth || !containerHeight)
+      return { aspectRatio: "16 / 9", width: "100%" };
     if (containerAspectRatio > videoAspectRatio) {
       const height = containerHeight;
       const width = height * videoAspectRatio;

@@ -76,7 +76,11 @@ export default function Settings() {

   const isAdmin = useIsAdmin();

-  const allowedViewsForViewer: SettingsType[] = ["ui", "debug"];
+  const allowedViewsForViewer: SettingsType[] = [
+    "ui",
+    "debug",
+    "notifications",
+  ];
   const visibleSettingsViews = !isAdmin
     ? allowedViewsForViewer
     : allSettingsViews;

@@ -167,7 +171,7 @@ export default function Settings() {
   useSearchEffect("page", (page: string) => {
     if (allSettingsViews.includes(page as SettingsType)) {
       // Restrict viewer to UI settings
-      if (!isAdmin && !["ui", "debug"].includes(page)) {
+      if (!isAdmin && !allowedViewsForViewer.includes(page as SettingsType)) {
         setPage("ui");
       } else {
         setPage(page as SettingsType);

@@ -203,7 +207,7 @@ export default function Settings() {
     onValueChange={(value: SettingsType) => {
       if (value) {
         // Restrict viewer navigation
-        if (!isAdmin && !["ui", "debug"].includes(value)) {
+        if (!isAdmin && !allowedViewsForViewer.includes(value)) {
           setPageToggle("ui");
         } else {
           setPageToggle(value);

@@ -46,6 +46,8 @@ import { Trans, useTranslation } from "react-i18next";
 import { useDateLocale } from "@/hooks/use-date-locale";
 import { useDocDomain } from "@/hooks/use-doc-domain";
 import { CameraNameLabel } from "@/components/camera/CameraNameLabel";
+import { useIsAdmin } from "@/hooks/use-is-admin";
+import { cn } from "@/lib/utils";

 const NOTIFICATION_SERVICE_WORKER = "notifications-worker.js";

@@ -64,6 +66,10 @@ export default function NotificationView({
   const { t } = useTranslation(["views/settings"]);
   const { getLocaleDocUrl } = useDocDomain();

+  // roles
+
+  const isAdmin = useIsAdmin();
+
   const { data: config, mutate: updateConfig } = useSWR<FrigateConfig>(
     "config",
     {

@@ -380,7 +386,11 @@ export default function NotificationView({
     <div className="flex size-full flex-col md:flex-row">
       <Toaster position="top-center" closeButton={true} />
       <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mb-0 md:mr-2 md:mt-0">
-        <div className="grid w-full grid-cols-1 gap-4 md:grid-cols-2">
+        <div
+          className={cn(
+            isAdmin && "grid w-full grid-cols-1 gap-4 md:grid-cols-2",
+          )}
+        >
           <div className="col-span-1">
             <Heading as="h3" className="my-2">
               {t("notification.notificationSettings.title")}

@@ -403,6 +413,7 @@ export default function NotificationView({
             </div>
           </div>

+          {isAdmin && (
           <Form {...form}>
             <form
               onSubmit={form.handleSubmit(onSubmit)}

@@ -466,7 +477,9 @@ export default function NotificationView({
   key={camera.name}
   label={camera.name}
   isCameraName={true}
-  isChecked={field.value?.includes(camera.name)}
+  isChecked={field.value?.includes(
+    camera.name,
+  )}
   onCheckedChange={(checked) => {
     setChangedValue(true);
     let newCameras;

@@ -529,13 +542,23 @@ export default function NotificationView({
               </div>
             </form>
           </Form>
+          )}
         </div>

         <div className="col-span-1">
           <div className="mt-4 gap-2 space-y-6">
-            <div className="flex flex-col gap-2 md:max-w-[50%]">
-              <Separator className="my-2 flex bg-secondary md:hidden" />
-              <Heading as="h4" className="my-2">
+            <div
+              className={cn(
+                isAdmin && "flex flex-col gap-2 md:max-w-[50%]",
+              )}
+            >
+              <Separator
+                className={cn(
+                  "my-2 flex bg-secondary",
+                  isAdmin && "md:hidden",
+                )}
+              />
+              <Heading as="h4" className={cn(isAdmin ? "my-2" : "my-4")}>
                 {t("notification.deviceSpecific")}
               </Heading>
               <Button

@@ -581,7 +604,7 @@ export default function NotificationView({
                 ? t("notification.unregisterDevice")
                 : t("notification.registerDevice")}
             </Button>
-            {registration != null && registration.active && (
+            {isAdmin && registration != null && registration.active && (
               <Button
                 aria-label={t("notification.sendTestNotification")}
                 onClick={() => sendTestNotification("notification_test")}

@@ -591,7 +614,7 @@ export default function NotificationView({
               )}
             </div>
           </div>
-          {notificationCameras.length > 0 && (
+          {isAdmin && notificationCameras.length > 0 && (
            <div className="mt-4 gap-2 space-y-6">
              <div className="space-y-3">
                <Separator className="my-2 flex bg-secondary" />