Convert detectors to factory pattern, ability to set different model for each detector (#4635)
* refactor detectors
* move create_detector and DetectorTypeEnum
* fixed code formatting
* add detector model config models
* fix detector unit tests
* adjust SharedMemory size to largest detector model shape
* fix detector model config defaults
* enable auto-discovery of detectors
* simplify config
* simplify config changes further
* update detectors docs; detect detector configs dynamic
* add suggested changes
* remove custom detector doc
* fix grammar, adjust device defaults
This commit is contained in:
parent 43c2761308
commit 420bcd7aa0
@@ -3,11 +3,38 @@ id: detectors
title: Detectors
---
By default, Frigate will use a single CPU detector. If you have a Coral, you will need to configure your detector devices in the config file. When using multiple detectors, they run in dedicated processes, but pull from a common queue of requested detections across all cameras.
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, and `openvino`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
Frigate supports `edgetpu` and `cpu` as detector types. The device value should be specified according to the [Documentation for the TensorFlow Lite Python API](https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api).
**Note**: There is not yet support for Nvidia GPUs to perform object detection with tensorflow. It can be used for ffmpeg decoding, but not object detection.
**Note**: There is no support for Nvidia GPUs to perform object detection with tensorflow. It can be used for ffmpeg decoding, but not object detection.
## CPU Detector (not recommended)
The CPU detector type runs a TensorFlow Lite model utilizing the CPU without hardware acceleration. It is recommended to use a hardware accelerated detector type instead for better performance. To configure a CPU based detector, set the `"type"` attribute to `"cpu"`.
The number of threads used by the interpreter can be specified using the `"num_threads"` attribute, and defaults to `3`.
A TensorFlow Lite model is provided in the container at `/cpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
```yaml
detectors:
  cpu1:
    type: cpu
    num_threads: 3
    model:
      path: "/custom_model.tflite"
  cpu2:
    type: cpu
    num_threads: 3
```
When using CPU detectors, you can add one CPU detector per camera. Adding more detectors than the number of cameras should not improve performance.
## Edge-TPU Detector
The EdgeTPU detector type runs a TensorFlow Lite model utilizing the Google Coral delegate for hardware acceleration. To configure an EdgeTPU detector, set the `"type"` attribute to `"edgetpu"`.
The EdgeTPU device can be specified using the `"device"` attribute according to the [Documentation for the TensorFlow Lite Python API](https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api). If not set, the delegate will use the first device it finds.
A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
### Single USB Coral
@@ -16,6 +43,8 @@ detectors:
  coral:
    type: edgetpu
    device: usb
    model:
      path: "/custom_model.tflite"
```
### Multiple USB Corals
@@ -64,38 +93,33 @@ detectors:
    device: pci
```
### CPU Detectors (not recommended)
## OpenVINO Detector
The OpenVINO detector type runs an OpenVINO IR model on Intel CPU, GPU and VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Working_with_devices.html). Other supported devices could be `AUTO`, `CPU`, `GPU`, `MYRIAD`, etc. If not specified, the default OpenVINO device will be selected by the `AUTO` plugin.
OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. A supported Intel platform is required to use the `GPU` device with OpenVINO. The `MYRIAD` device may be run on any platform, including Arm devices. For detailed system requirements, see [OpenVINO System Requirements](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html)
An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model. Use the model configuration shown below when using the OpenVINO detector.
```yaml
detectors:
  cpu1:
    type: cpu
    num_threads: 3
  cpu2:
    type: cpu
    num_threads: 3
```
When using CPU detectors, you can add a CPU detector per camera. Adding more detectors than the number of cameras should not improve performance.
## OpenVINO
The OpenVINO detector allows Frigate to run an OpenVINO IR model on Intel CPU, GPU and VPU hardware.
### OpenVINO Devices
The OpenVINO detector supports the Intel-supplied device plugins and can specify one or more devices in the configuration. See OpenVINO's device naming conventions in the [Device Documentation](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Working_with_devices.html) for more detail. Other supported devices could be `AUTO`, `CPU`, `GPU`, `MYRIAD`, etc.
```yaml
detectors:
  ov_detector:
  ov:
    type: openvino
    device: GPU
    device: AUTO
    model:
      path: /openvino-model/ssdlite_mobilenet_v2.xml

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```
OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. A supported Intel platform is required to use the GPU device with OpenVINO. The `MYRIAD` device may be run on any platform, including Arm devices. For detailed system requirements, see [OpenVINO System Requirements](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html)
#### Intel NCS2 VPU and Myriad X Setup
### Intel NCS2 VPU and Myriad X Setup
Intel produces a neural net inference acceleration chip called Myriad X. This chip was sold in their Neural Compute Stick 2 (NCS2) which has been discontinued. If intending to use the MYRIAD device for acceleration, additional setup is required to pass through the USB device. The host needs a udev rule installed to handle the NCS2 device.
@@ -123,18 +147,3 @@ device_cgroup_rules:
volumes:
  - /dev/bus/usb:/dev/bus/usb
```
### OpenVINO Models
The included model for an OpenVINO detector comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model. Use the model configuration shown below when using the OpenVINO detector.
```yaml
model:
  path: /openvino-model/ssdlite_mobilenet_v2.xml
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```
@@ -74,15 +74,13 @@ mqtt:
# Optional: Detectors configuration. Defaults to a single CPU detector
detectors:
  # Required: name of the detector
  coral:
  detector_name:
    # Required: type of the detector
    # Valid values are 'edgetpu' (requires device property below), `openvino` (see Detectors documentation), and 'cpu'.
    type: edgetpu
    # Optional: Edgetpu or OpenVino device name
    device: usb
    # Optional: num_threads value passed to the tflite.Interpreter (default: shown below)
    # This value is only used for CPU types
    num_threads: 3
    # Frigate provided types include 'cpu', 'edgetpu', and 'openvino' (default: shown below)
    # Additional detector types can also be plugged in.
    # Detectors may require additional configuration.
    # Refer to the Detectors configuration page for more information.
    type: cpu

# Optional: Database configuration
database:
@@ -186,10 +186,16 @@ class FrigateApp:
self.detection_out_events[name] = mp.Event()
try:
largest_frame = max(
[
det.model.height * det.model.width * 3
for (name, det) in self.config.detectors.items()
]
)
shm_in = mp.shared_memory.SharedMemory(
name=name,
create=True,
size=self.config.model.height * self.config.model.width * 3,
size=largest_frame,
)
except FileExistsError:
shm_in = mp.shared_memory.SharedMemory(name=name)
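The sizing change above is easier to see in isolation: every detector process reads frames from the same shared-memory input buffer, so the buffer must be large enough for the biggest configured model. A minimal standalone sketch; the detector names and model shapes below are made-up examples, not values from this commit:

```python
# Standalone sketch of the buffer sizing above (hypothetical detector entries).
from multiprocessing import shared_memory

detector_models = {
    "coral": {"height": 320, "width": 320},  # e.g. an EdgeTPU SSD model
    "ov": {"height": 300, "width": 300},     # e.g. the OpenVINO SSDLite model
}

# 3 bytes per pixel (interleaved RGB/BGR), sized to the largest model shape.
largest_frame = max(m["height"] * m["width"] * 3 for m in detector_models.values())

shm_in = shared_memory.SharedMemory(name="example_detector", create=True, size=largest_frame)
print(shm_in.size >= 320 * 320 * 3)  # True; smaller 300x300 frames simply use less of it
shm_in.close()
shm_in.unlink()
```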
@@ -204,15 +210,12 @@ class FrigateApp:
self.detection_shms.append(shm_in)
self.detection_shms.append(shm_out)
for name, detector in self.config.detectors.items():
for name, detector_config in self.config.detectors.items():
self.detectors[name] = ObjectDetectProcess(
name,
self.detection_queue,
self.detection_out_events,
self.config.model,
detector.type,
detector.device,
detector.num_threads,
detector_config,
)
def start_detected_frames_processor(self) -> None:
@@ -9,7 +9,7 @@ from typing import Dict, List, Optional, Tuple, Union
import matplotlib.pyplot as plt
import numpy as np
import yaml
from pydantic import BaseModel, Extra, Field, validator
from pydantic import BaseModel, Extra, Field, validator, parse_obj_as
from pydantic.fields import PrivateAttr
from frigate.const import (
@@ -32,8 +32,15 @@ from frigate.ffmpeg_presets import (
parse_preset_output_record,
parse_preset_output_rtmp,
)
from frigate.detectors import (
PixelFormatEnum,
InputTensorEnum,
ModelConfig,
DetectorConfig,
)
from frigate.version import VERSION
logger = logging.getLogger(__name__)
# TODO: Identify what the default format to display timestamps is
@@ -52,18 +59,6 @@ class FrigateBaseModel(BaseModel):
extra = Extra.forbid
class DetectorTypeEnum(str, Enum):
edgetpu = "edgetpu"
openvino = "openvino"
cpu = "cpu"
class DetectorConfig(FrigateBaseModel):
type: DetectorTypeEnum = Field(default=DetectorTypeEnum.cpu, title="Detector Type")
device: str = Field(default="usb", title="Device Type")
num_threads: int = Field(default=3, title="Number of detection threads")
class UIConfig(FrigateBaseModel):
use_experimental: bool = Field(default=False, title="Experimental UI")
@@ -725,57 +720,6 @@ class DatabaseConfig(FrigateBaseModel):
)
class PixelFormatEnum(str, Enum):
rgb = "rgb"
bgr = "bgr"
yuv = "yuv"
class InputTensorEnum(str, Enum):
nchw = "nchw"
nhwc = "nhwc"
class ModelConfig(FrigateBaseModel):
path: Optional[str] = Field(title="Custom Object detection model path.")
labelmap_path: Optional[str] = Field(title="Label map for custom object detector.")
width: int = Field(default=320, title="Object detection model input width.")
height: int = Field(default=320, title="Object detection model input height.")
labelmap: Dict[int, str] = Field(
default_factory=dict, title="Labelmap customization."
)
input_tensor: InputTensorEnum = Field(
default=InputTensorEnum.nhwc, title="Model Input Tensor Shape"
)
input_pixel_format: PixelFormatEnum = Field(
default=PixelFormatEnum.rgb, title="Model Input Pixel Color Format"
)
_merged_labelmap: Optional[Dict[int, str]] = PrivateAttr()
_colormap: Dict[int, Tuple[int, int, int]] = PrivateAttr()
@property
def merged_labelmap(self) -> Dict[int, str]:
return self._merged_labelmap
@property
def colormap(self) -> Dict[int, Tuple[int, int, int]]:
return self._colormap
def __init__(self, **config):
super().__init__(**config)
self._merged_labelmap = {
**load_labels(config.get("labelmap_path", "/labelmap.txt")),
**config.get("labelmap", {}),
}
cmap = plt.cm.get_cmap("tab10", len(self._merged_labelmap.keys()))
self._colormap = {}
for key, val in self._merged_labelmap.items():
self._colormap[val] = tuple(int(round(255 * c)) for c in cmap(key)[:3])
class LogLevelEnum(str, Enum):
debug = "debug"
info = "info"
@@ -890,7 +834,7 @@ class FrigateConfig(FrigateBaseModel):
default_factory=ModelConfig, title="Detection model configuration."
)
detectors: Dict[str, DetectorConfig] = Field(
default={name: DetectorConfig(**d) for name, d in DEFAULT_DETECTORS.items()},
default=DEFAULT_DETECTORS,
title="Detector hardware configuration.",
)
logger: LoggerConfig = Field(
@@ -1032,6 +976,33 @@ class FrigateConfig(FrigateBaseModel):
# generate the ffmpeg commands
camera_config.create_ffmpeg_cmds()
config.cameras[name] = camera_config
for key, detector in config.detectors.items():
detector_config: DetectorConfig = parse_obj_as(DetectorConfig, detector)
if detector_config.model is None:
detector_config.model = config.model
else:
model = detector_config.model
schema = ModelConfig.schema()["properties"]
if (
model.width != schema["width"]["default"]
or model.height != schema["height"]["default"]
or model.labelmap_path is not None
or model.labelmap is not {}
or model.input_tensor != schema["input_tensor"]["default"]
or model.input_pixel_format
!= schema["input_pixel_format"]["default"]
):
logger.warning(
"Customizing more than a detector model path is unsupported."
)
merged_model = deep_merge(
detector_config.model.dict(exclude_unset=True),
config.model.dict(exclude_unset=True),
)
detector_config.model = ModelConfig.parse_obj(merged_model)
config.detectors[key] = detector_config
return config
@validator("cameras")
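The merge in the loop above is what lets a detector override only `model.path` while inheriting everything else from the global `model` section: `deep_merge` is called with the detector's model first, so keys the detector sets explicitly win and unset keys fall back to the global values. A small sketch of that precedence using a plain dict merge; the helper and the values are illustrative, not Frigate's own `deep_merge`:

```python
# Sketch (not Frigate code) of the per-detector model merge performed above.
def merge_preferring_first(primary: dict, fallback: dict) -> dict:
    merged = dict(fallback)
    merged.update(primary)  # detector-level keys override global ones
    return merged

global_model = {"path": "/default.tflite", "width": 512, "height": 512}
detector_model = {"path": "/edgetpu_model.tflite", "width": 160}  # set on one detector

print(merge_preferring_first(detector_model, global_model))
# {'path': '/edgetpu_model.tflite', 'width': 160, 'height': 512}
```

The resulting values mirror what `test_detector_custom_model_path` further down asserts: width 160 for the edgetpu detector, 512 everywhere the detector does not override it.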
@@ -0,0 +1,24 @@
import logging
from .detection_api import DetectionApi
from .detector_config import (
PixelFormatEnum,
InputTensorEnum,
ModelConfig,
)
from .detector_types import DetectorTypeEnum, api_types, DetectorConfig
logger = logging.getLogger(__name__)
def create_detector(detector_config):
if detector_config.type == DetectorTypeEnum.cpu:
logger.warning(
"CPU detectors are not recommended and should only be used for testing or for trial purposes."
)
api = api_types.get(detector_config.type)
if not api:
raise ValueError(detector_config.type)
return api(detector_config)
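For orientation, this is how a detector entry flows through the factory at runtime: the raw dict from the config file is parsed into the type-discriminated `DetectorConfig` union, then handed to `create_detector`, which looks the plugin class up in `api_types`. A usage sketch, assuming it runs inside a Frigate environment containing this commit (with `tflite_runtime` and the bundled `/cpu_model.tflite` available):

```python
# Usage sketch; only meaningful inside a Frigate checkout/container with this commit.
from pydantic import parse_obj_as

from frigate.detectors import DetectorConfig, create_detector

# One entry from the `detectors:` section of the config, as a raw dict.
raw = {"type": "cpu", "num_threads": 2, "model": {}}

# The discriminated union resolves "cpu" to CpuDetectorConfig.
detector_config = parse_obj_as(DetectorConfig, raw)

# The factory looks the type key up in api_types and instantiates the plugin class.
detector = create_detector(detector_config)
print(type(detector).__name__)  # CpuTfl
```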
@@ -1,15 +1,15 @@
import logging
from abc import ABC, abstractmethod
from typing import Dict
logger = logging.getLogger(__name__)
class DetectionApi(ABC):
type_key: str
@abstractmethod
def __init__(self, det_device=None, model_config=None):
def __init__(self, detector_config):
pass
@abstractmethod
frigate/detectors/detector_config.py (new file, 78 lines)
@@ -0,0 +1,78 @@
import logging
from enum import Enum
from typing import Dict, List, Optional, Tuple, Union, Literal
import matplotlib.pyplot as plt
from pydantic import BaseModel, Extra, Field, validator
from pydantic.fields import PrivateAttr
from frigate.util import load_labels
logger = logging.getLogger(__name__)
class PixelFormatEnum(str, Enum):
rgb = "rgb"
bgr = "bgr"
yuv = "yuv"
class InputTensorEnum(str, Enum):
nchw = "nchw"
nhwc = "nhwc"
class ModelConfig(BaseModel):
path: Optional[str] = Field(title="Custom Object detection model path.")
labelmap_path: Optional[str] = Field(title="Label map for custom object detector.")
width: int = Field(default=320, title="Object detection model input width.")
height: int = Field(default=320, title="Object detection model input height.")
labelmap: Dict[int, str] = Field(
default_factory=dict, title="Labelmap customization."
)
input_tensor: InputTensorEnum = Field(
default=InputTensorEnum.nhwc, title="Model Input Tensor Shape"
)
input_pixel_format: PixelFormatEnum = Field(
default=PixelFormatEnum.rgb, title="Model Input Pixel Color Format"
)
_merged_labelmap: Optional[Dict[int, str]] = PrivateAttr()
_colormap: Dict[int, Tuple[int, int, int]] = PrivateAttr()
@property
def merged_labelmap(self) -> Dict[int, str]:
return self._merged_labelmap
@property
def colormap(self) -> Dict[int, Tuple[int, int, int]]:
return self._colormap
def __init__(self, **config):
super().__init__(**config)
self._merged_labelmap = {
**load_labels(config.get("labelmap_path", "/labelmap.txt")),
**config.get("labelmap", {}),
}
cmap = plt.cm.get_cmap("tab10", len(self._merged_labelmap.keys()))
self._colormap = {}
for key, val in self._merged_labelmap.items():
self._colormap[val] = tuple(int(round(255 * c)) for c in cmap(key)[:3])
class Config:
extra = Extra.forbid
class BaseDetectorConfig(BaseModel):
# the type field must be defined in all subclasses
type: str = Field(default="cpu", title="Detector Type")
model: ModelConfig = Field(
default=None, title="Detector specific model configuration."
)
class Config:
extra = Extra.allow
arbitrary_types_allowed = True
frigate/detectors/detector_types.py (new file, 35 lines)
@@ -0,0 +1,35 @@
import logging
import importlib
import pkgutil
from typing import Union
from typing_extensions import Annotated
from enum import Enum
from pydantic import Field
from . import plugins
from .detection_api import DetectionApi
from .detector_config import BaseDetectorConfig
logger = logging.getLogger(__name__)
plugin_modules = [
    importlib.import_module(name)
    for finder, name, ispkg in pkgutil.iter_modules(
        plugins.__path__, plugins.__name__ + "."
    )
]
api_types = {det.type_key: det for det in DetectionApi.__subclasses__()}
class StrEnum(str, Enum):
    pass
DetectorTypeEnum = StrEnum("DetectorTypeEnum", {k: k for k in api_types})
DetectorConfig = Annotated[
    Union[tuple(BaseDetectorConfig.__subclasses__())],
    Field(discriminator="type"),
]
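Because `api_types` and the `DetectorConfig` union are built from `__subclasses__()` of classes imported out of `plugins`, adding a detector is a matter of dropping a module into `frigate/detectors/plugins/`. A hypothetical plugin sketch, not part of this commit: the `dummy` detector name, its extra config field, and its no-op inference are invented for illustration.

```python
# Hypothetical plugin module, e.g. frigate/detectors/plugins/dummy.py.
# The pkgutil loop above imports it, after which its subclasses are picked up
# automatically: the config class joins the DetectorConfig union, and the
# DetectionApi subclass is registered in api_types under its type_key.
import logging
from typing import Literal

import numpy as np
from pydantic import Field

from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig

logger = logging.getLogger(__name__)

DETECTOR_KEY = "dummy"


class DummyDetectorConfig(BaseDetectorConfig):
    # The Literal type is the discriminator that routes config parsing here.
    type: Literal[DETECTOR_KEY]
    fail_rate: float = Field(default=0.0, title="Illustrative extra option")


class DummyDetector(DetectionApi):
    # type_key registers this class in api_types for create_detector().
    type_key = DETECTOR_KEY

    def __init__(self, detector_config: DummyDetectorConfig):
        self.h = detector_config.model.height
        self.w = detector_config.model.width

    def detect_raw(self, tensor_input):
        # Return an empty (20, 6) detection array, matching the shape the
        # built-in tflite-based detectors produce.
        return np.zeros((20, 6), np.float32)
```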
frigate/detectors/plugins/__init__.py (new empty file)
@@ -2,15 +2,29 @@ import logging
import numpy as np
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig
from typing import Literal
from pydantic import Extra, Field
import tflite_runtime.interpreter as tflite
logger = logging.getLogger(__name__)
DETECTOR_KEY = "cpu"
class CpuDetectorConfig(BaseDetectorConfig):
type: Literal[DETECTOR_KEY]
num_threads: int = Field(default=3, title="Number of detection threads")
class CpuTfl(DetectionApi):
def __init__(self, det_device=None, model_config=None, num_threads=3):
type_key = DETECTOR_KEY
def __init__(self, detector_config: CpuDetectorConfig):
self.interpreter = tflite.Interpreter(
model_path=model_config.path or "/cpu_model.tflite", num_threads=num_threads
model_path=detector_config.model.path or "/cpu_model.tflite",
num_threads=detector_config.num_threads or 3,
)
self.interpreter.allocate_tensors()
@@ -2,17 +2,30 @@ import logging
import numpy as np
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig
from typing import Literal
from pydantic import Extra, Field
import tflite_runtime.interpreter as tflite
from tflite_runtime.interpreter import load_delegate
logger = logging.getLogger(__name__)
DETECTOR_KEY = "edgetpu"
class EdgeTpuDetectorConfig(BaseDetectorConfig):
type: Literal[DETECTOR_KEY]
device: str = Field(default=None, title="Device Type")
class EdgeTpuTfl(DetectionApi):
def __init__(self, det_device=None, model_config=None):
type_key = DETECTOR_KEY
def __init__(self, detector_config: EdgeTpuDetectorConfig):
device_config = {"device": "usb"}
if not det_device is None:
device_config = {"device": det_device}
if detector_config.device is not None:
device_config = {"device": detector_config.device}
edge_tpu_delegate = None
@@ -21,7 +34,7 @@ class EdgeTpuTfl(DetectionApi):
edge_tpu_delegate = load_delegate("libedgetpu.so.1.0", device_config)
logger.info("TPU found")
self.interpreter = tflite.Interpreter(
model_path=model_config.path or "/edgetpu_model.tflite",
model_path=detector_config.model.path or "/edgetpu_model.tflite",
experimental_delegates=[edge_tpu_delegate],
)
except ValueError:
@@ -3,18 +3,30 @@ import numpy as np
import openvino.runtime as ov
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig
from typing import Literal
from pydantic import Extra, Field
logger = logging.getLogger(__name__)
DETECTOR_KEY = "openvino"
class OvDetectorConfig(BaseDetectorConfig):
type: Literal[DETECTOR_KEY]
device: str = Field(default=None, title="Device Type")
class OvDetector(DetectionApi):
def __init__(self, det_device=None, model_config=None, num_threads=1):
type_key = DETECTOR_KEY
def __init__(self, detector_config: OvDetectorConfig):
self.ov_core = ov.Core()
self.ov_model = self.ov_core.read_model(model_config.path)
self.ov_model = self.ov_core.read_model(detector_config.model.path)
self.interpreter = self.ov_core.compile_model(
model=self.ov_model, device_name=det_device
model=self.ov_model, device_name=detector_config.device
)
logger.info(f"Model Input Shape: {self.interpreter.input(0).shape}")
self.output_indexes = 0
@@ -10,10 +10,8 @@ from abc import ABC, abstractmethod
import numpy as np
from setproctitle import setproctitle
from frigate.config import DetectorTypeEnum, InputTensorEnum
from frigate.detectors.edgetpu_tfl import EdgeTpuTfl
from frigate.detectors.openvino import OvDetector
from frigate.detectors.cpu_tfl import CpuTfl
from frigate.config import InputTensorEnum
from frigate.detectors import create_detector
from frigate.util import EventsPerSecond, SharedMemoryFrameManager, listen, load_labels
@@ -37,10 +35,7 @@ def tensor_transform(desired_shape):
class LocalObjectDetector(ObjectDetector):
def __init__(
self,
det_type=DetectorTypeEnum.cpu,
det_device=None,
model_config=None,
num_threads=3,
detector_config=None,
labels=None,
):
self.fps = EventsPerSecond()
@@ -49,24 +44,12 @@ class LocalObjectDetector(ObjectDetector):
else:
self.labels = load_labels(labels)
if model_config:
self.input_transform = tensor_transform(model_config.input_tensor)
if detector_config:
self.input_transform = tensor_transform(detector_config.model.input_tensor)
else:
self.input_transform = None
if det_type == DetectorTypeEnum.edgetpu:
self.detect_api = EdgeTpuTfl(
det_device=det_device, model_config=model_config
)
elif det_type == DetectorTypeEnum.openvino:
self.detect_api = OvDetector(
det_device=det_device, model_config=model_config
)
else:
logger.warning(
"CPU detectors are not recommended and should only be used for testing or for trial purposes."
)
self.detect_api = CpuTfl(model_config=model_config, num_threads=num_threads)
self.detect_api = create_detector(detector_config)
def detect(self, tensor_input, threshold=0.4):
detections = []
@@ -94,10 +77,7 @@ def run_detector(
out_events: dict[str, mp.Event],
avg_speed,
start,
model_config,
det_type,
det_device,
num_threads,
detector_config,
):
threading.current_thread().name = f"detector:{name}"
logger = logging.getLogger(f"detector.{name}")
@@ -114,12 +94,7 @@ def run_detector(
signal.signal(signal.SIGINT, receiveSignal)
frame_manager = SharedMemoryFrameManager()
object_detector = LocalObjectDetector(
det_type=det_type,
det_device=det_device,
model_config=model_config,
num_threads=num_threads,
)
object_detector = LocalObjectDetector(detector_config=detector_config)
outputs = {}
for name in out_events.keys():
@@ -133,7 +108,8 @@ def run_detector(
except queue.Empty:
continue
input_frame = frame_manager.get(
connection_id, (1, model_config.height, model_config.width, 3)
connection_id,
(1, detector_config.model.height, detector_config.model.width, 3),
)
if input_frame is None:
@@ -156,10 +132,7 @@ class ObjectDetectProcess:
name,
detection_queue,
out_events,
model_config,
det_type=None,
det_device=None,
num_threads=3,
detector_config,
):
self.name = name
self.out_events = out_events
@@ -167,10 +140,7 @@ class ObjectDetectProcess:
self.avg_inference_speed = mp.Value("d", 0.01)
self.detection_start = mp.Value("d", 0.0)
self.detect_process = None
self.model_config = model_config
self.det_type = det_type
self.det_device = det_device
self.num_threads = num_threads
self.detector_config = detector_config
self.start_or_restart()
def stop(self):
@@ -195,10 +165,7 @@ class ObjectDetectProcess:
self.out_events,
self.avg_inference_speed,
self.detection_start,
self.model_config,
self.det_type,
self.det_device,
self.num_threads,
self.detector_config,
),
)
self.detect_process.daemon = True
@@ -5,9 +5,9 @@ from pydantic import ValidationError
from frigate.config import (
BirdseyeModeEnum,
FrigateConfig,
DetectorTypeEnum,
)
from frigate.util import load_config_with_no_duplicates
from frigate.detectors import DetectorTypeEnum
from frigate.util import deep_merge, load_config_with_no_duplicates
class TestConfig(unittest.TestCase):
@@ -37,6 +37,50 @@ class TestConfig(unittest.TestCase):
runtime_config = frigate_config.runtime_config
assert "cpu" in runtime_config.detectors.keys()
assert runtime_config.detectors["cpu"].type == DetectorTypeEnum.cpu
assert runtime_config.detectors["cpu"].model.width == 320
def test_detector_custom_model_path(self):
config = {
"detectors": {
"cpu": {
"type": "cpu",
"model": {"path": "/cpu_model.tflite"},
},
"edgetpu": {
"type": "edgetpu",
"model": {"path": "/edgetpu_model.tflite", "width": 160},
},
"openvino": {
"type": "openvino",
},
},
"model": {"path": "/default.tflite", "width": 512},
}
frigate_config = FrigateConfig(**(deep_merge(config, self.minimal)))
runtime_config = frigate_config.runtime_config
assert "cpu" in runtime_config.detectors.keys()
assert "edgetpu" in runtime_config.detectors.keys()
assert "openvino" in runtime_config.detectors.keys()
assert runtime_config.detectors["cpu"].type == DetectorTypeEnum.cpu
assert runtime_config.detectors["edgetpu"].type == DetectorTypeEnum.edgetpu
assert runtime_config.detectors["openvino"].type == DetectorTypeEnum.openvino
assert runtime_config.detectors["cpu"].num_threads == 3
assert runtime_config.detectors["edgetpu"].device is None
assert runtime_config.detectors["openvino"].device is None
assert runtime_config.model.path == "/default.tflite"
assert runtime_config.detectors["cpu"].model.path == "/cpu_model.tflite"
assert runtime_config.detectors["edgetpu"].model.path == "/edgetpu_model.tflite"
assert runtime_config.detectors["openvino"].model.path == "/default.tflite"
assert runtime_config.model.width == 512
assert runtime_config.detectors["cpu"].model.width == 512
assert runtime_config.detectors["edgetpu"].model.width == 160
assert runtime_config.detectors["openvino"].model.width == 512
def test_invalid_mqtt_config(self):
config = {
@@ -1,53 +1,49 @@
import unittest
from unittest.mock import patch
from unittest.mock import Mock, patch
import numpy as np
from frigate.config import DetectorTypeEnum, InputTensorEnum, ModelConfig
from pydantic import parse_obj_as
from frigate.config import DetectorConfig, InputTensorEnum, ModelConfig
from frigate.detectors import DetectorTypeEnum
import frigate.detectors as detectors
import frigate.object_detection
class TestLocalObjectDetector(unittest.TestCase):
@patch("frigate.object_detection.EdgeTpuTfl")
@patch("frigate.object_detection.CpuTfl")
def test_localdetectorprocess_given_type_cpu_should_call_cputfl_init(
self, mock_cputfl, mock_edgetputfl
):
test_cfg = ModelConfig()
test_cfg.path = "/test/modelpath"
test_obj = frigate.object_detection.LocalObjectDetector(
det_type=DetectorTypeEnum.cpu, model_config=test_cfg, num_threads=6
)
def test_localdetectorprocess_should_only_create_specified_detector_type(self):
for det_type in detectors.api_types:
with self.subTest(det_type=det_type):
with patch.dict(
"frigate.detectors.api_types",
{det_type: Mock() for det_type in DetectorTypeEnum},
):
test_cfg = parse_obj_as(
DetectorConfig, ({"type": det_type, "model": {}})
)
test_cfg.model.path = "/test/modelpath"
test_obj = frigate.object_detection.LocalObjectDetector(
detector_config=test_cfg
)
assert test_obj is not None
mock_edgetputfl.assert_not_called()
mock_cputfl.assert_called_once_with(model_config=test_cfg, num_threads=6)
assert test_obj is not None
for api_key, mock_detector in detectors.api_types.items():
if test_cfg.type == api_key:
mock_detector.assert_called_once_with(test_cfg)
else:
mock_detector.assert_not_called()
@patch("frigate.object_detection.EdgeTpuTfl")
@patch("frigate.object_detection.CpuTfl")
def test_localdetectorprocess_given_type_edgtpu_should_call_edgtpu_init(
self, mock_cputfl, mock_edgetputfl
):
test_cfg = ModelConfig()
test_cfg.path = "/test/modelpath"
@patch.dict(
"frigate.detectors.api_types",
{det_type: Mock() for det_type in DetectorTypeEnum},
)
def test_detect_raw_given_tensor_input_should_return_api_detect_raw_result(self):
mock_cputfl = detectors.api_types[DetectorTypeEnum.cpu]
test_obj = frigate.object_detection.LocalObjectDetector(
det_type=DetectorTypeEnum.edgetpu,
det_device="usb",
model_config=test_cfg,
)
assert test_obj is not None
mock_cputfl.assert_not_called()
mock_edgetputfl.assert_called_once_with(det_device="usb", model_config=test_cfg)
@patch("frigate.object_detection.CpuTfl")
def test_detect_raw_given_tensor_input_should_return_api_detect_raw_result(
self, mock_cputfl
):
TEST_DATA = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
TEST_DETECT_RESULT = np.ndarray([1, 2, 4, 8, 16, 32])
test_obj_detect = frigate.object_detection.LocalObjectDetector(
det_device=DetectorTypeEnum.cpu
detector_config=parse_obj_as(DetectorConfig, {"type": "cpu", "model": {}})
)
mock_det_api = mock_cputfl.return_value
@@ -58,18 +54,23 @@ class TestLocalObjectDetector(unittest.TestCase):
mock_det_api.detect_raw.assert_called_once_with(tensor_input=TEST_DATA)
assert test_result is mock_det_api.detect_raw.return_value
@patch("frigate.object_detection.CpuTfl")
@patch.dict(
"frigate.detectors.api_types",
{det_type: Mock() for det_type in DetectorTypeEnum},
)
def test_detect_raw_given_tensor_input_should_call_api_detect_raw_with_transposed_tensor(
self, mock_cputfl
self,
):
mock_cputfl = detectors.api_types[DetectorTypeEnum.cpu]
TEST_DATA = np.zeros((1, 32, 32, 3), np.uint8)
TEST_DETECT_RESULT = np.ndarray([1, 2, 4, 8, 16, 32])
test_cfg = ModelConfig()
test_cfg.input_tensor = InputTensorEnum.nchw
test_cfg = parse_obj_as(DetectorConfig, {"type": "cpu", "model": {}})
test_cfg.model.input_tensor = InputTensorEnum.nchw
test_obj_detect = frigate.object_detection.LocalObjectDetector(
det_device=DetectorTypeEnum.cpu, model_config=test_cfg
detector_config=test_cfg
)
mock_det_api = mock_cputfl.return_value
@@ -85,11 +86,16 @@ class TestLocalObjectDetector(unittest.TestCase):
assert test_result is mock_det_api.detect_raw.return_value
@patch("frigate.object_detection.CpuTfl")
@patch.dict(
"frigate.detectors.api_types",
{det_type: Mock() for det_type in DetectorTypeEnum},
)
@patch("frigate.object_detection.load_labels")
def test_detect_given_tensor_input_should_return_lfiltered_detections(
self, mock_load_labels, mock_cputfl
self, mock_load_labels
):
mock_cputfl = detectors.api_types[DetectorTypeEnum.cpu]
TEST_DATA = np.zeros((1, 32, 32, 3), np.uint8)
TEST_DETECT_RAW = [
[2, 0.9, 5, 4, 3, 2],
@@ -109,9 +115,10 @@ class TestLocalObjectDetector(unittest.TestCase):
"label-5",
]
test_cfg = parse_obj_as(DetectorConfig, {"type": "cpu", "model": {}})
test_cfg.model = ModelConfig()
test_obj_detect = frigate.object_detection.LocalObjectDetector(
det_device=DetectorTypeEnum.cpu,
model_config=ModelConfig(),
detector_config=test_cfg,
labels=TEST_LABEL_FILE,
)