Moved DeGirum section to 'Community' detectors, fixed formatting of headers to be more consistent with the rest of the page, and removed the unneeded 'models' folder
This commit is contained in: parent e78cb01a65, commit cccab4c7a9
@@ -141,11 +141,9 @@ See the [installation docs](../frigate/installation.md#hailo-8l) for information

### Configuration

When configuring the Hailo detector, you have two options to specify the model: a local **path** or a **URL**.

If both are provided, the detector will first check for the model at the given local path. If the file is not found, it will download the model from the specified URL. The model file is cached under `/config/model_cache/hailo`.
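
For example, a minimal sketch of what this can look like in `config.yml`. The detector name, the `hailo8l` type, the `path`/`url` key names, and the model filename are illustrative assumptions, not taken from this page:

```yaml
detectors:
  hailo:
    type: hailo8l # assumed detector type for Hailo hardware
model:
  # Hypothetical key names for illustration: the local path is checked first,
  # and the URL is only downloaded (and cached under /config/model_cache/hailo)
  # if the file is missing.
  path: /config/model_cache/hailo/custom-model.hef
  url: https://example.com/custom-model.hef
```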

#### YOLO

Use this configuration for YOLO-based models. When no custom model path or URL is provided, the detector automatically downloads the default model based on the detected hardware:
@@ -240,93 +238,6 @@ Hailo8 supports all models in the Hailo Model Zoo that include HailoRT post-proc

---

## OpenVINO Detector

The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
@@ -1162,3 +1073,98 @@ wget -O yolov9-t.pt "https://github.com/WongKinYiu/yolov9/releases/download/v0.1
sed -i "s/ckpt = torch.load(attempt_download(w), map_location='cpu')/ckpt = torch.load(attempt_download(w), map_location='cpu', weights_only=False)/g" ./models/experimental.py
bin/python3 export.py --weights ./yolov9-t.pt --imgsz 320 --simplify --include onnx
```

## DeGirum

DeGirum is a detector that can use any hardware listed on [their website](https://hub.degirum.com). It can run on local hardware through a DeGirum AI Server or through `@local`, and it can also connect directly to DeGirum's AI Hub to run inferences. **Please note:** this detector *cannot* be used for commercial purposes.

### Configuration

#### AI Server Inference

Before editing the config file for this section, you must first launch an AI server. DeGirum provides a ready-to-use AI server as a Docker container. Add this to your `docker-compose.yml` to get started:

```yaml
degirum_detector:
  container_name: degirum
  image: degirum/aiserver:latest
  privileged: true
  ports:
    - "8778:8778"
```

All supported hardware will automatically be found on your AI server host, as long as the relevant runtimes and drivers are properly installed on your machine. Refer to [DeGirum's docs site](https://docs.degirum.com/pysdk/runtimes-and-drivers) if you have any trouble.

Once the AI server is running, updating the `config.yml` file is simple.

```yaml
detectors:
  degirum_detector:
    type: degirum
    location: degirum # Set to the compose service name (degirum_detector), the container_name (degirum), or a host:port (192.168.29.4:8778)
    zoo: degirum/public # DeGirum's public model zoo. The zoo name should be in the format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start. If you aren't pulling a model from the AI Hub, leave this and 'token' blank.
    token: dg_example_token # For authentication with the AI Hub. Get this token from the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com). This can be left blank if you're pulling a model from the public zoo and running inferences on your local hardware using @local or a local DeGirum AI Server.
```

Setting up a model in the `config.yml` is similarly straightforward. You can set the model path to:

- A model listed on the [AI Hub](https://hub.degirum.com), given that the correct zoo name is set in your detector.
  - If you choose this option, the correct model will be downloaded onto your machine before running.
- A local directory acting as a zoo. See DeGirum's docs site [for more information](https://docs.degirum.com/pysdk/user-guide-pysdk/organizing-models#model-zoo-directory-structure).
- A path to a model.json file.

```yaml
model:
  path: ./mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1 # directory containing the model .json and model file
  width: 300 # the width is the first number in the "300x300" part of the model name
  height: 300 # the height is the second number in the "300x300" part of the model name
  input_pixel_format: rgb/bgr # choose rgb or bgr; check the model.json to see which one applies
```

#### Local Inference

It is also possible to eliminate the need for an AI server and run on the hardware directly. The benefit of this approach is that you eliminate any bottlenecks that occur when transferring prediction results from the AI server Docker container to the Frigate one. However, the method of implementing local inference is different for every device and hardware combination, so it's usually more trouble than it's worth. A general guideline to achieve this:

1. Ensure that the Frigate Docker container has the runtime you want to use. For instance, running `@local` for Hailo means making sure the container you're using has the Hailo runtime installed (see the compose sketch after the configuration example below).
2. To double-check that the runtime is detected by the DeGirum detector, make sure the `degirum sys-info` command lists the runtimes you intend to use.
3. Create a DeGirum detector in your `config.yml` file:

```yaml
detectors:
  degirum_detector:
    type: degirum
    location: "@local" # Run inference directly on hardware attached to the Frigate container
    zoo: degirum/public # DeGirum's public model zoo. The zoo name should be in the format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start.
    token: dg_example_token # For authentication with the AI Hub. This can be left blank if you're pulling a model from the public zoo and running inferences on your local hardware using @local or a local DeGirum AI Server.
```
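
For step 1, the Frigate container also needs access to the accelerator device itself. A minimal `docker-compose.yml` sketch, assuming a Hailo PCIe device exposed at `/dev/hailo0` and a Frigate image that already includes the Hailo runtime:

```yaml
frigate:
  # ...the rest of your existing Frigate service configuration...
  devices:
    - /dev/hailo0:/dev/hailo0 # pass the accelerator through so @local inference can find it
```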

Once `degirum_detector` is set up, you can choose a model through the `model` section of the `config.yml` file:

```yaml
model:
  path: mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1
  width: 300 # the width is the first number in the "300x300" part of the model name
  height: 300 # the height is the second number in the "300x300" part of the model name
  input_pixel_format: rgb/bgr # choose rgb or bgr; check the model.json to see which one applies
```

#### AI Hub Cloud Inference

If you do not have the hardware you want to run on, there is also the option to run inferences in the cloud. Note that your detection fps might need to be lowered, as network latency significantly slows down this method of detection (see the sketch at the end of this section). For use with Frigate, we highly recommend using a local AI server as described above. To set up cloud inference:

1. Sign up at [DeGirum's AI Hub](https://hub.degirum.com).
2. Get an access token.
3. Create a DeGirum detector in your `config.yml` file.

```yaml
detectors:
  degirum_detector:
    type: degirum
    location: "@cloud" # For accessing AI Hub devices and models
    zoo: degirum/public # DeGirum's public model zoo. The zoo name should be in the format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start.
    token: dg_example_token # For authentication with the AI Hub. Get this token from the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com).
```

Once `degirum_detector` is set up, you can choose a model through the `model` section of the `config.yml` file:

```yaml
model:
  path: mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1
  width: 300 # the width is the first number in the "300x300" part of the model name
  height: 300 # the height is the second number in the "300x300" part of the model name
  input_pixel_format: rgb/bgr # choose rgb or bgr; check the model.json to see which one applies
```
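
Because every cloud inference makes a round trip to the AI Hub, you may also want to lower the detection frame rate to match. A minimal sketch (the camera name is illustrative; Frigate's default detect fps is 5):

```yaml
cameras:
  front_door:
    detect:
      fps: 3 # reduced from the default of 5 to absorb AI Hub network latency
```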
@@ -1,95 +0,0 @@
{
  "0": "__background__",
  "1": "person",
  "2": "bicycle",
  "3": "car",
  "4": "motorcycle",
  "5": "airplane",
  "6": "bus",
  "7": "train",
  "8": "car",
  "9": "boat",
  "10": "traffic light",
  "11": "fire hydrant",
  "12": "street sign",
  "13": "stop sign",
  "14": "parking meter",
  "15": "bench",
  "16": "bird",
  "17": "cat",
  "18": "dog",
  "19": "horse",
  "20": "sheep",
  "21": "cow",
  "22": "elephant",
  "23": "bear",
  "24": "zebra",
  "25": "giraffe",
  "26": "hat",
  "27": "backpack",
  "28": "umbrella",
  "29": "shoe",
  "30": "eye glasses",
  "31": "handbag",
  "32": "tie",
  "33": "suitcase",
  "34": "frisbee",
  "35": "skis",
  "36": "snowboard",
  "37": "sports ball",
  "38": "kite",
  "39": "baseball bat",
  "40": "baseball glove",
  "41": "skateboard",
  "42": "surfboard",
  "43": "tennis racket",
  "44": "bottle",
  "45": "plate",
  "46": "wine glass",
  "47": "cup",
  "48": "fork",
  "49": "knife",
  "50": "spoon",
  "51": "bowl",
  "52": "banana",
  "53": "apple",
  "54": "sandwich",
  "55": "orange",
  "56": "broccoli",
  "57": "carrot",
  "58": "hot dog",
  "59": "pizza",
  "60": "donut",
  "61": "cake",
  "62": "chair",
  "63": "couch",
  "64": "potted plant",
  "65": "bed",
  "66": "mirror",
  "67": "dining table",
  "68": "window",
  "69": "desk",
  "70": "toilet",
  "71": "door",
  "72": "tv",
  "73": "laptop",
  "74": "mouse",
  "75": "remote",
  "76": "keyboard",
  "77": "cell phone",
  "78": "microwave",
  "79": "oven",
  "80": "toaster",
  "81": "sink",
  "82": "refrigerator",
  "83": "blender",
  "84": "book",
  "85": "clock",
  "86": "vase",
  "87": "scissors",
  "88": "teddy bear",
  "89": "hair drier",
  "90": "toothbrush",
  "91": "hair brush"
}
Binary file not shown.
@@ -1,53 +0,0 @@
{
  "ConfigVersion": 6,
  "Checksum": "0ebce8b115214756bd37cfb5b4c3b547d557c6c58e828a8b9f725214afe49600",
  "DEVICE": [
    {
      "RuntimeAgent": "OPENVINO",
      "DeviceType": "CPU",
      "SupportedDeviceTypes": "OPENVINO/CPU"
    }
  ],
  "MODEL_PARAMETERS": [
    {
      "ModelPath": "ssdlite_mobilenet_v2.xml"
    }
  ],
  "PRE_PROCESS": [
    {
      "InputImgFmt": "JPEG",
      "InputImgNormEn": false,
      "InputN": 1,
      "InputType": "Image",
      "InputResizeMethod": "bilinear",
      "InputPadMethod": "letterbox",
      "ImageBackend": "auto",
      "InputH": 300,
      "InputW": 300,
      "InputC": 3,
      "InputQuantEn": true,
      "InputQuantOffset": 0,
      "InputQuantScale": 1,
      "InputTensorLayout": "NCHW",
      "InputImgSliceType": "None"
    }
  ],
  "POST_PROCESS": [
    {
      "PostProcessorInputs": [3, 1, 2],
      "OutputPostprocessType": "Detection",
      "LabelsPath": "labels.json",
      "OutputConfThreshold": 0.3,
      "MaxDetections": 20,
      "OutputNMSThreshold": 0.6,
      "MaxDetectionsPerClass": 100,
      "MaxClassesPerDetection": 1,
      "UseRegularNMS": false,
      "OutputNumClasses": 90,
      "XScale": 10,
      "YScale": 10,
      "HScale": 5,
      "WScale": 5
    }
  ]
}
File diff suppressed because it is too large