Dedicated LPR improvements (#17453)

* remove license plate from attributes for dedicated lpr cameras

* ensure we always have a color

* use frigate+ models with dedicated lpr cameras

* docs

* docs clarity

* docs enrichments

* use license_plate as object type
Josh Hawkins 2025-03-30 08:43:24 -05:00 committed by GitHub
parent 2c1ded37a1
commit 2920127ada
6 changed files with 183 additions and 59 deletions


@ -62,7 +62,7 @@ Fine-tune the LPR feature using these optional parameters:
- **`detection_threshold`**: License plate object detection confidence score required before recognition runs.
  - Default: `0.7`
  - Note: This field only applies to the standalone license plate detection model; the `threshold` and `min_score` object filters should be used for models like Frigate+ that have license plate detection built in.
- **`min_area`**: Defines the minimum area (in pixels) a license plate must be before recognition runs (see the sketch after this list).
  - Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
  - Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
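To make the `min_area` filter concrete, here is a minimal standalone sketch (not Frigate's internal code) of the same kind of check: the plate's bounding-box area must clear the threshold before recognition would run. The `(x1, y1, x2, y2)` box format and helper name are illustrative assumptions.

```python
# Illustrative only: an area gate like min_area, not Frigate's implementation.
MIN_AREA = 1000  # pixels, matching the default above

def passes_min_area(box: tuple[int, int, int, int]) -> bool:
    """box is (x1, y1, x2, y2) in detect-stream pixel coordinates (assumed format)."""
    width = box[2] - box[0]
    height = box[3] - box[1]
    return width * height >= MIN_AREA

print(passes_min_area((100, 100, 132, 132)))  # ~32x32 plate -> 1024 px, passes
print(passes_min_area((100, 100, 120, 115)))  # 20x15 plate -> 300 px, filtered out
```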
@ -137,17 +137,86 @@ lpr:
- "MN D3163" - "MN D3163"
``` ```
:::note
If you want to detect cars on cameras but don't want to use resources to run LPR on those cars, you should disable LPR for those specific cameras.
```yaml
cameras:
  side_yard:
    lpr:
      enabled: False
    ...
```
:::
## Dedicated LPR Cameras
Dedicated LPR cameras are single-purpose cameras with powerful optical zoom to capture license plates on distant vehicles, often with fine-tuned settings to capture plates at night.
Users can configure Frigate's LPR in two different ways depending on whether they are using a Frigate+ model:
### Using a Frigate+ Model
Users running a Frigate+ model (or any model that natively detects `license_plate`) can take advantage of `license_plate` detection. This allows license plates to be treated as standard objects in dedicated LPR mode, meaning that alerts, detections, snapshots, zones, and other Frigate features work as usual, and plates are detected efficiently through your configured object detector.
An example configuration for a dedicated LPR camera using a Frigate+ model:
```yaml
# LPR global configuration
lpr:
  enabled: True

# Dedicated LPR camera configuration
cameras:
  dedicated_lpr_camera:
    type: "lpr" # required to use dedicated LPR camera mode
    detect:
      enabled: True
      fps: 5 # increase if vehicles move quickly
      min_initialized: 2 # set at fps divided by 3 for very fast cars
      width: 1920
      height: 1080
    objects:
      track:
        - license_plate
      filters:
        license_plate:
          threshold: 0.7
    motion:
      threshold: 30
      contour_area: 60 # use an increased value to tune out small motion changes
      improve_contrast: false
      mask: 0.704,0.007,0.709,0.052,0.989,0.055,0.993,0.001 # ensure your camera's timestamp is masked
    record:
      enabled: True # disable recording if you only want snapshots
    snapshots:
      enabled: True
    review:
      detections:
        labels:
          - license_plate
```
With this setup:
- License plates are treated as normal objects in Frigate.
- Scores, alerts, detections, snapshots, zones, and object masks work as expected.
- Snapshots will have license plate bounding boxes on them.
- The `frigate/events` MQTT topic will publish tracked object updates.
- Debug view will display `license_plate` bounding boxes.
### Using the Secondary LPR Pipeline (Without Frigate+)
If you are not running a Frigate+ model, you can use Frigate's built-in secondary dedicated LPR pipeline. In this mode, Frigate bypasses the standard object detection pipeline and runs a local license plate detector model on the full frame whenever motion activity occurs.
An example configuration for a dedicated LPR camera using the secondary pipeline:
```yaml
# LPR global configuration
lpr:
  enabled: True
  min_plate_length: 4
  detection_threshold: 0.7 # change if necessary

# Dedicated LPR camera configuration
@ -156,14 +225,15 @@ cameras:
type: "lpr" # required to use dedicated LPR camera mode type: "lpr" # required to use dedicated LPR camera mode
lpr: lpr:
enabled: True enabled: True
expire_time: 3 # optional, default
enhancement: 3 # optional, enhance the image before trying to recognize characters enhancement: 3 # optional, enhance the image before trying to recognize characters
ffmpeg: ... ffmpeg: ...
detect: detect:
enabled: False # optional, disable Frigate's standard object detection pipeline enabled: False # disable Frigate's standard object detection pipeline
fps: 5 # keep this at 5, higher values are unnecessary for dedicated LPR mode and could overwhelm the detector fps: 5 # increase if necessary, though high values may slow down Frigate's enrichments pipeline and use considerable CPU
width: 1920 width: 1920
height: 1080 height: 1080
objects:
track: [] # required when not using a Frigate+ model for dedicated LPR mode
motion: motion:
threshold: 30 threshold: 30
contour_area: 60 # use an increased value here to tune out small motion changes contour_area: 60 # use an increased value here to tune out small motion changes
@ -178,31 +248,38 @@ cameras:
default: 7
```
With this setup:
- The standard object detection pipeline is bypassed. Any detected license plates on dedicated LPR cameras are treated similarly to manual events in Frigate. You must **not** specify `license_plate` as an object to track.
- The license plate detector runs on the full frame whenever motion is detected and processes frames according to your detect `fps` setting.
- Review items will always be classified as a `detection`.
- Snapshots will always be saved.
- Zones and object masks are **not** used.
- The `frigate/events` MQTT topic will **not** publish tracked object updates, though `frigate/reviews` will if recordings are enabled.
- License plate snapshots are saved at the highest-scoring moment and appear in Explore.
- Debug view will not show `license_plate` bounding boxes.
### Summary
| Feature | Native `license_plate` detecting Model (like Frigate+) | Secondary Pipeline (without native model or Frigate+) |
| ----------------------- | ------------------------------------------------------ | --------------------------------------------------------------- |
| License Plate Detection | Uses `license_plate` as a tracked object | Runs a dedicated LPR pipeline |
| FPS Setting | 5 (increase for fast-moving cars) | 5 (increase for fast-moving cars, but it may use much more CPU) |
| Object Detection | Standard Frigate+ detection applies | Bypasses standard object detection |
| Zones & Object Masks | Supported | Not supported |
| Debug View | May show `license_plate` bounding boxes | May **not** show `license_plate` bounding boxes |
| MQTT `frigate/events` | Publishes tracked object updates | Does **not** publish tracked object updates |
| Explore | Recognized plates available in More Filters | Recognized plates available in More Filters |
By selecting the appropriate configuration, users can optimize their dedicated LPR cameras based on whether they are using a Frigate+ model or the secondary LPR pipeline.
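To see the MQTT difference from the table in practice, here is a small sketch (not part of this PR) that listens on both topics with paho-mqtt; the broker address is an assumption, and only generic payload keys are printed. In Frigate+ mode plate objects should show up under `frigate/events`, while the secondary pipeline only surfaces activity under `frigate/reviews` when recordings are enabled.

```python
# Sketch only: observe where plate activity appears on MQTT in each mode.
# Assumes a broker on localhost and paho-mqtt 1.x (2.x requires a
# CallbackAPIVersion argument to Client()).
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    if msg.topic == "frigate/events":
        # Published for native license_plate models (e.g. Frigate+); the
        # secondary dedicated LPR pipeline does not publish here.
        print("event:", payload.get("after", {}).get("label"))
    else:
        # frigate/reviews fires in both modes when recordings are enabled.
        print("review:", payload.get("type"))

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("frigate/events")
client.subscribe("frigate/reviews")
client.loop_forever()
```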
### Best practices for using Dedicated LPR camera mode
- Tune your motion detection and increase the `contour_area` until you see only larger motion boxes being created as cars pass through the frame (likely somewhere between 50-90 for a 1920x1080 detect stream). Increasing the `contour_area` filters out small areas of motion and will prevent excessive resource use from looking for license plates in frames that don't even have a car passing through them (see the sketch after this list).
- Disable the `improve_contrast` motion setting, especially if you are running LPR at night and the frame is mostly dark. This will prevent small pixel changes and smaller areas of motion from triggering license plate detection.
- Ensure your camera's timestamp is covered with a motion mask so that it's not incorrectly detected as a license plate.
- For non-Frigate+ users, you may need to change your camera settings for a clearer image or decrease your global `recognition_threshold` config if your plates are not being accurately recognized at night.
- The secondary pipeline mode runs a local AI model on your CPU to detect plates. Increasing detect `fps` will increase CPU usage proportionally.
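As a concrete illustration of the `contour_area` advice in the first bullet, the sketch below (standalone OpenCV code, not Frigate's motion detector) shows how an area threshold on motion contours suppresses small pixel changes while still passing the larger blobs a car produces; the mask sizes and threshold are assumptions for demonstration.

```python
# Illustration only: filtering motion contours by area, similar in spirit to
# Frigate's contour_area setting (values and mask are made up for the demo).
import cv2
import numpy as np

CONTOUR_AREA = 60

def has_significant_motion(motion_mask: np.ndarray) -> bool:
    contours, _ = cv2.findContours(
        motion_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    # only contours whose area clears the threshold count as motion
    return any(cv2.contourArea(c) >= CONTOUR_AREA for c in contours)

mask = np.zeros((1080, 1920), dtype=np.uint8)
mask[100:112, 200:212] = 255  # a 12x12 blob, comfortably above the threshold
print(has_significant_motion(mask))  # True

mask[:] = 0
mask[100:104, 200:204] = 255  # a 4x4 blob, filtered out as noise
print(has_significant_motion(mask))  # False
```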
## FAQ


@ -88,7 +88,9 @@ class CameraState:
thickness = 1
else:
thickness = 2
color = self.config.model.colormap.get(
obj["label"], (255, 255, 255)
)
else:
thickness = 1
color = (255, 0, 0)
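The colormap change above (and the matching one in the next hunk) swaps dictionary indexing for `.get()` with a white fallback. A tiny sketch of the pattern, not the Frigate classes themselves, shows why: labels that have no entry in the colormap, such as `license_plate` on some models, no longer raise a `KeyError` when a debug box is drawn.

```python
# Pattern sketch only: fall back to white for labels missing from the colormap.
colormap = {"car": (0, 255, 0), "person": (0, 0, 255)}  # example entries

label = "license_plate"  # not present in the colormap above
color = colormap.get(label, (255, 255, 255))
print(color)  # (255, 255, 255) instead of a KeyError
```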
@ -110,7 +112,9 @@ class CameraState:
and obj["frame_time"] == frame_time and obj["frame_time"] == frame_time
): ):
thickness = 5 thickness = 5
color = self.config.model.colormap.get(
obj["label"], (255, 255, 255)
)
# debug autotracking zooming - show the zoom factor box
if (


@ -21,7 +21,6 @@ from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
EventMetadataTypeEnum,
)
from frigate.const import CLIPS_DIR
from frigate.embeddings.onnx.lpr_embedding import LPR_EMBEDDING_SIZE
from frigate.util.builtin import EventsPerSecond
@ -972,7 +971,7 @@ class LicensePlateProcessingMixin:
(
now,
camera,
"license_plate",
event_id,
True,
plate_score,
@ -994,9 +993,7 @@ class LicensePlateProcessingMixin:
if not self.config.cameras[camera].lpr.enabled:
return
# dedicated LPR cam without frigate+
if dedicated_lpr:
id = "dedicated-lpr"
@ -1050,8 +1047,11 @@ class LicensePlateProcessingMixin:
else:
id = obj_data["id"]
# don't run for non car or non license plate (dedicated lpr with frigate+) objects
if (
obj_data.get("label") != "car"
and obj_data.get("label") != "license_plate"
):
logger.debug(
f"{camera}: Not a processing license plate for non car object."
)
@ -1131,26 +1131,34 @@ class LicensePlateProcessingMixin:
license_plate[0] : license_plate[2],
]
else:
# don't run for object without attributes if this isn't dedicated lpr with frigate+
if (
not obj_data.get("current_attributes")
and obj_data.get("label") != "license_plate"
):
logger.debug(f"{camera}: No attributes to parse.")
return
if obj_data.get("label") == "car":
attributes: list[dict[str, any]] = obj_data.get(
"current_attributes", []
)
for attr in attributes:
if attr.get("label") != "license_plate":
continue
if license_plate is None or attr.get(
"score", 0.0
) > license_plate.get("score", 0.0):
license_plate = attr
# no license plates detected in this frame
if not license_plate:
return
# we are using dedicated lpr with frigate+
if obj_data.get("label") == "license_plate":
license_plate = obj_data
license_plate_box = license_plate.get("box")
@ -1160,7 +1168,9 @@ class LicensePlateProcessingMixin:
or area(license_plate_box)
< self.config.cameras[obj_data["camera"]].lpr.min_area
):
logger.debug(
f"{camera}: Area for license plate box {area(license_plate_box)} is less than min_area {self.config.cameras[obj_data['camera']].lpr.min_area}"
)
return
license_plate_frame = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
@ -1239,8 +1249,11 @@ class LicensePlateProcessingMixin:
)
return
# For dedicated LPR cameras, match or assign plate ID using Jaro-Winkler distance
if (
dedicated_lpr
and "license_plate" not in self.config.cameras[camera].objects.track
):
plate_id = None
for existing_id, data in self.detected_license_plates.items():
@ -1306,8 +1319,11 @@ class LicensePlateProcessingMixin:
(id, top_plate, avg_confidence),
)
# save the best snapshot for dedicated lpr cams not using frigate+
if (
dedicated_lpr
and "license_plate" not in self.config.cameras[camera].objects.track
):
logger.debug(
f"{camera}: Writing snapshot for {id}, {top_plate}, {current_time}"
)
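The hunks above gate the Jaro-Winkler plate matching and best-snapshot saving to dedicated LPR cameras that are not tracking `license_plate` natively. As a rough, standalone sketch of the matching idea (not Frigate's actual implementation), a newly read plate string can be compared against recently seen plates and reuse an existing ID when the similarity is high enough; the 0.8 threshold, the dict layout, and the use of the `jellyfish` package are assumptions.

```python
# Sketch only: Jaro-Winkler matching of a new plate reading to recent plate IDs.
import jellyfish

SIMILARITY_THRESHOLD = 0.8  # assumed value for illustration

def match_plate_id(new_plate: str, recent: dict[str, str]) -> str | None:
    """recent maps an existing plate ID to its last recognized plate text."""
    best_id, best_score = None, 0.0
    for plate_id, plate_text in recent.items():
        score = jellyfish.jaro_winkler_similarity(new_plate, plate_text)
        if score > best_score:
            best_id, best_score = plate_id, score
    return best_id if best_score >= SIMILARITY_THRESHOLD else None

recent = {"dedicated-lpr": "ABC1234"}
print(match_plate_id("ABC1Z34", recent))  # close enough: likely one OCR misread
print(match_plate_id("XYZ9876", recent))  # None: treated as a new plate
```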


@ -457,7 +457,11 @@ class EmbeddingMaintainer(threading.Thread):
camera_config = self.config.cameras[camera]
if (
camera_config.type != CameraTypeEnum.lpr
or "license_plate" in camera_config.objects.track
):
# we're not a dedicated lpr camera or we are one but we're using frigate+
return
try:


@ -442,7 +442,7 @@ class TrackedObject:
if bounding_box:
thickness = 2
color = self.colormap.get(self.obj_data["label"], (255, 255, 255))
# draw the bounding boxes on the frame
box = self.thumbnail_data["box"]


@ -15,6 +15,7 @@ from frigate.camera import CameraMetrics, PTZMetrics
from frigate.comms.config_updater import ConfigSubscriber
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import CameraConfig, DetectConfig, ModelConfig
from frigate.config.camera.camera import CameraTypeEnum
from frigate.const import (
CACHE_DIR,
CACHE_SEGMENT_FORMAT,
@ -519,6 +520,7 @@ def track_camera(
frame_queue,
frame_shape,
model_config,
config,
config.detect,
frame_manager,
motion_detector,
@ -585,6 +587,7 @@ def process_frames(
frame_queue: mp.Queue,
frame_shape,
model_config: ModelConfig,
camera_config: CameraConfig,
detect_config: DetectConfig,
frame_manager: FrameManager,
motion_detector: MotionDetector,
@ -612,6 +615,29 @@ def process_frames(
region_min_size = get_min_region_size(model_config)
attributes_map = model_config.attributes_map
all_attributes = model_config.all_attributes
# remove license_plate from attributes if this camera is a dedicated LPR cam
if camera_config.type == CameraTypeEnum.lpr:
modified_attributes_map = model_config.attributes_map.copy()
if (
"car" in modified_attributes_map
and "license_plate" in modified_attributes_map["car"]
):
modified_attributes_map["car"] = [
attr
for attr in modified_attributes_map["car"]
if attr != "license_plate"
]
attributes_map = modified_attributes_map
all_attributes = [
attr for attr in model_config.all_attributes if attr != "license_plate"
]
while not stop_event.is_set():
_, updated_enabled_config = enabled_config_subscriber.check_for_update()
@ -805,9 +831,7 @@ def process_frames(
# if detection was run on this frame, consolidate
if len(regions) > 0:
tracked_detections = [
d for d in consolidated_detections if d[0] not in all_attributes
]
# now that we have refined our detections, we need to track objects
object_tracker.match_and_update(
@ -819,7 +843,7 @@ def process_frames(
# group the attribute detections based on what label they apply to
attribute_detections: dict[str, list[TrackedObjectAttribute]] = {}
for label, attribute_labels in attributes_map.items():
attribute_detections[label] = [
TrackedObjectAttribute(d)
for d in consolidated_detections
@ -836,8 +860,7 @@ def process_frames(
for attributes in attribute_detections.values():
for attribute in attributes:
filtered_objects = filter(
lambda o: attribute.label in attributes_map.get(o["label"], []),
all_objects,
)
selected_object_id = attribute.find_best_object(filtered_objects)
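For context on `find_best_object`, the filtered candidates are the tracked objects whose label can carry this attribute (per `attributes_map`), and the attribute is then attached to the best-matching one. Below is a hedged, standalone sketch of one plausible matching rule, greatest box overlap, which is not necessarily Frigate's exact logic.

```python
# Sketch only: attach an attribute box (e.g. a license_plate) to the candidate
# object whose bounding box overlaps it the most.
def box_overlap(a: tuple[int, int, int, int], b: tuple[int, int, int, int]) -> int:
    """Intersection area of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return max(0, x2 - x1) * max(0, y2 - y1)

def assign_attribute(attr_box, candidates):
    """candidates is a list of (object_id, box); returns the best overlapping id."""
    best_id, best_area = None, 0
    for object_id, box in candidates:
        overlap = box_overlap(attr_box, box)
        if overlap > best_area:
            best_id, best_area = object_id, overlap
    return best_id

cars = [("car-1", (0, 0, 400, 300)), ("car-2", (500, 0, 900, 300))]
plate_box = (550, 200, 650, 240)  # sits inside car-2's box
print(assign_attribute(plate_box, cars))  # car-2
```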
@ -885,7 +908,7 @@ def process_frames(
for obj in object_tracker.tracked_objects.values():
if obj["frame_time"] == frame_time:
thickness = 2
color = model_config.colormap.get(obj["label"], (255, 255, 255))
else:
thickness = 1
color = (255, 0, 0)