Added degirum plugin, updated documentation for degirum detector usage, updated requirements with degirum_headless
This commit is contained in:
parent
19342c8768
commit
e8c88b4386
@@ -71,3 +71,5 @@ prometheus-client == 0.21.*
# TFLite
tflite_runtime @ https://github.com/frigate-nvr/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_x86_64.whl; platform_machine == 'x86_64'
tflite_runtime @ https://github.com/feranick/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_aarch64.whl; platform_machine == 'aarch64'
# DeGirum detector
degirum_headless == 0.15.*
@@ -13,6 +13,7 @@ Frigate supports multiple different detectors that work on different types of ha
- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration modules are available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
- [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models are provided in the cloud on [their website](https://hub.degirum.com).

**AMD**
@@ -140,10 +141,10 @@ See the [installation docs](../frigate/installation.md#hailo-8l) for information
### Configuration

When configuring the Hailo detector, you have two options to specify the model: a local **path** or a **URL**.
If both are provided, the detector will first check for the model at the given local path. If the file is not found, it will download the model from the specified URL. The model file is cached under `/config/model_cache/hailo`.

#### YOLO

Use this configuration for YOLO-based models. When no custom model path or URL is provided, the detector automatically downloads the default model based on the detected hardware:

- **Hailo-8 hardware:** Uses **YOLOv6n** (default: `yolov6n.hef`)
@@ -226,15 +227,74 @@ model:
```
For additional ready-to-use models, please visit: https://github.com/hailo-ai/hailo_model_zoo

Hailo8 supports all models in the Hailo Model Zoo that include HailoRT post-processing. You're welcome to choose any of these pre-configured models for your implementation.

> **Note:**
> The `config.path` parameter can accept either a local file path or a URL ending with `.hef`. When provided, the detector first checks whether the path is a local file; if the file exists locally, it is used directly. If the file is not found locally, or if a URL was provided, the detector attempts to download the model from the specified URL.
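
As an illustration only (a hedged sketch: the keys follow the `model` examples earlier on this page, and `/config/model_cache/hailo` and `yolov6n.hef` come from the text above; the URL shown is hypothetical), the path can point at the cached file or at a URL ending in `.hef`:

```yaml
model:
  # local file path; downloaded models are cached under /config/model_cache/hailo
  path: /config/model_cache/hailo/yolov6n.hef
  # or a URL ending in .hef (hypothetical example), downloaded on first use:
  # path: https://example.com/models/yolov6n.hef
```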
---
## DeGirum
DeGirum is a detector that can use any type of hardware listed on [their website](https://hub.degirum.com). You can connect directly to DeGirum's cloud platform to run inference with just an internet connection after signing up, or use DeGirum with local hardware through an [AI server](#ai-server-inference). You can view the official docs page for their cloud platform [here](https://docs.degirum.com/ai-hub/quickstart).

### Configuration
#### AI Hub Cloud Inference
DeGirum is designed to support easy cloud inference. To set it up, you need to:

1. Sign up at [DeGirum's AI Hub](https://hub.degirum.com).
2. Get an access token.
3. Create a DeGirum detector in your config.yml file.

```yaml
degirum_detector:
  type: degirum
  location: "@cloud" # For accessing AI Hub devices and models
  zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in the format "team_name/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start.
  token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the AI Hub (https://hub.degirum.com).
```
Once `degirum_detector` is set up, you can choose a model through the `model` section in the config.yml file.
```yaml
model:
  path: mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1
  width: 300 # width is the first number in the "int"x"int" section of the model name
  height: 300 # height is the second number in the "int"x"int" section of the model name
```
#### AI Server Inference
Before starting with the config file for this section, you must first launch an AI server. DeGirum provides a ready-to-use AI server as a Docker container. Add this to your docker-compose.yml to get started:
```yaml
degirum_detector:
  container_name: degirum
  image: degirum/aiserver:latest
  privileged: true
  ports:
    - "8778:8778"
```
All supported hardware will automatically be found on your AI server host as long as the relevant runtimes and drivers are properly installed on your machine. Refer to [DeGirum's docs site](https://docs.degirum.com/pysdk/runtimes-and-drivers) if you have any trouble.

Once the AI server is running, updating the config.yml file is much the same as the process for cloud inference.
```yaml
degirum_detector:
  type: degirum
  location: degirum # Set to the service name (degirum_detector), container_name (degirum), or a host:port (192.168.29.4:8778)
  zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in the format "team_name/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start. If you aren't pulling a model from the AI Hub, leave this and 'token' blank.
  token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the AI Hub (https://hub.degirum.com). Leave blank if you aren't going to pull a model from the AI Hub.
```
Setting up a model in the config.yml for AI server inference is similar to the cloud setup.
You can set the model `path` to:

- A model listed on the [AI Hub](https://hub.degirum.com), given that the correct zoo name is listed in your detector.
  - If this is what you choose to do, the correct model will be downloaded onto your machine before running.
- A local directory acting as a zoo. See DeGirum's docs site [for more information](https://docs.degirum.com/pysdk/user-guide-pysdk/organizing-models#model-zoo-directory-structure).
- A path to a specific model.json file.

```yaml
model:
  path: ./mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1 # directory containing the model .json and model file
  width: 300 # width is the first number in the "int"x"int" section of the model name
  height: 300 # height is the second number in the "int"x"int" section of the model name
```
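
If you prefer to pull a model from the AI Hub through the AI server rather than use a local zoo directory, the `model` section looks the same as in the cloud example above, and the model is downloaded onto your machine before running (a sketch that assumes `zoo` and `token` are filled in on the detector as shown earlier):

```yaml
model:
  path: mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1 # AI Hub model name from the configured zoo
  width: 300
  height: 300
```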
## OpenVINO Detector
The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
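
A minimal sketch of such a configuration (the detector name `ov` and the `device` value are illustrative assumptions, not taken from this page):

```yaml
detectors:
  ov:
    type: openvino
    device: GPU # assumption: CPU, GPU, or AUTO depending on your hardware
```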
frigate/detectors/plugins/degirum.py (new file, 118 lines)
@@ -0,0 +1,118 @@
import logging
import queue

import degirum as dg
import numpy as np
from pydantic import Field
from typing_extensions import Literal

from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig

logger = logging.getLogger(__name__)
DETECTOR_KEY = "degirum"


### STREAM CLASS FROM DG TOOLS ###
class Stream(queue.Queue):
    """Queue-based iterable class with optional item drop"""

    # minimum queue size to avoid deadlocks:
    # one for stray result, one for poison pill in request_stop(),
    # and one for poison pill gizmo_run()
    min_queue_size = 1

    def __init__(self, maxsize=0, allow_drop: bool = False):
        """Constructor

        - maxsize: maximum stream depth; 0 for unlimited depth
        - allow_drop: allow dropping elements on put() when stream is full
        """

        if maxsize < self.min_queue_size and maxsize != 0:
            raise Exception(
                f"Incorrect stream depth: {maxsize}. Should be 0 (unlimited) or at least {self.min_queue_size}"
            )

        super().__init__(maxsize)
        self.allow_drop = allow_drop
        self.dropped_cnt = 0  # number of dropped items

    _poison = None

    def put(self, item, block: bool = True, timeout=None) -> None:
        """Put an item into the stream

        - item: item to put
        If there is no space left, and allow_drop flag is set, then oldest item will
        be popped to free space
        """
        if self.allow_drop:
            while True:
                try:
                    super().put(item, False)
                    break
                except queue.Full:
                    self.dropped_cnt += 1
                    try:
                        self.get_nowait()
                    except queue.Empty:
                        pass  # queue was already drained by the consumer
        else:
            super().put(item, block, timeout)

    def __iter__(self):
        """Iterator method"""
        return iter(self.get, self._poison)

    def close(self):
        """Close stream: put poison pill"""
        self.put(self._poison)


### DETECTOR CONFIG ###
class DGDetectorConfig(BaseDetectorConfig):
    type: Literal[DETECTOR_KEY]
    location: str = Field(default=None, title="Inference Location")
    zoo: str = Field(default=None, title="Model Zoo")
    token: str = Field(default=None, title="DeGirum Cloud Token")


### ACTUAL DETECTOR ###
class DGDetector(DetectionApi):
    type_key = DETECTOR_KEY

    def __init__(self, detector_config: DGDetectorConfig):
        self._queue = Stream(5, allow_drop=True)
        self._zoo = dg.connect(
            detector_config.location, detector_config.zoo, detector_config.token
        )
        self.dg_model = self._zoo.load_model(
            detector_config.model.path, non_blocking_batch_predict=True
        )
        self.model_height = detector_config.model.height
        self.model_width = detector_config.model.width
        self.predict_batch = self.dg_model.predict_batch(self._queue)

    def detect_raw(self, tensor_input):
        # add tensor_input to input queue
        truncated_input = tensor_input.reshape(tensor_input.shape[1:])  # drop the leading batch dimension
        self._queue.put((truncated_input, ""))

        # define empty detection result
        detections = np.zeros((20, 6), np.float32)
        res = next(self.predict_batch)
        if res is not None:
            # populate detection result with corresponding inference result information
            i = 0
            for result in res.results:
                if i >= detections.shape[0]:
                    break  # output array holds at most 20 detections
                detections[i] = [
                    result["category_id"],  # Label ID
                    float(result["score"]),  # Confidence
                    result["bbox"][1] / self.model_height,  # y_min
                    result["bbox"][0] / self.model_width,  # x_min
                    result["bbox"][3] / self.model_height,  # y_max
                    result["bbox"][2] / self.model_width,  # x_max
                ]
                i += 1
        return detections