blakeblackshear.frigate/frigate/detectors/plugins/onnx.py
harakas 44d8cdbba1
AMD GPU support with the rocm detector and YOLOv8 pretrained model download (#9762)
* ROCm AMD/GPU based build and detector, WIP

* detectors/rocm: separate yolov8 postprocessing into own function; fix box scaling; use cv2.dnn.blobFromImage for preprocessing; assert on required model parameters

* AMD/ROCm: add a couple more ultralytics models; comments

* docker/rocm: make imported model files readable by all

* docker/rocm: readme about running on AMD GPUs

* docker/rocm: updated README

* docker/rocm: updated README

* docker/rocm: updated README

* detectors/rocm: separated preprocessing functions into yolo_utils.py

* detector/plugins: added onnx cpu plugin

* docker/rocm: updated container with limited label sets

* example detectors view

* docker/rocm: updated README.md

* docker/rocm: update README.md

* docker/rocm: do not set HSA_OVERRIDE_GFX_VERSION at all for the general version as the empty value broke rocm

* detectors: simplified/optimized yolov8_postprocess

* detector/yolo_utils: indentation, remove unused variable

* detectors/rocm: default option to conserve cpu usage at the expense of latency

* detectors/yolo_utils: use nms to prefilter overlapping boxes if too many detected

* detectors/edgetpu_tfl: add support for yolov8

* util/download_models: script to download yolov8 model files

* docker/main: add download-models overlay into s6 startup

* detectors/rocm: assume models are in /config/model_cache/yolov8/

* docker/rocm: compile onnx files into mxr files at startup

* switch model download into bash script

* detectors/rocm: automatically override HSA_OVERRIDE_GFX_VERSION for a couple of known chipsets

* docs: rocm detector first notes

* typos

* describe builds (harakas temporary)

* docker/rocm: also build a version for gfx1100

* docker/rocm: use cp instead of tar

* docker.rocm: remove README as it is now in detector config

* frigate/detectors: renamed yolov8_preprocess->preprocess, pass input tensor element type

* docker/main: use newer openvino (2023.3.0)

* detectors: implement class aggregation

* update yolov8 model

* add openvino/yolov8 support for label aggregation

* docker: remove pointless s6/timeout-up files

* Revert "detectors: implement class aggregation"

This reverts commit dcfe6bbf6f.

* detectors/openvino: remove class aggregation

* detectors: increase yolov8 postprocessing score threshold to 0.5 (a rough sketch of the postprocessing follows this commit log)

* docker/rocm: separate rocm distributed files into its own build stage

* Update object_detectors.md

* updated CODEOWNERS file for rocm

* updated build names for documentation

* Revert "docker/main: use newer openvino (2023.3.0)"

This reverts commit dee95de908.

* reverted openvino detector

* reverted edgetpu detector

* removed any mention of edgetpu or openvino from the rocm docs

* Update docs/docs/configuration/object_detectors.md

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* renamed frigate.detectors.yolo_utils.py -> frigate.detectors.util.py

* clarified rocm example performance

* Improved wording and clarified text

* Mentioned rocm detector for AMD GPUs

* applied ruff formatting

* applied ruff suggested fixes

* docker/rocm: fix missing argument resulting in larger docker image sizes

* docs/configuration/object_detectors: fix links to yolov8 release files

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2024-02-10 06:41:46 -06:00
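
The yolov8 postprocessing changes noted above (simplified postprocess, NMS prefiltering, the 0.5 score threshold) boil down to picking the best class per anchor, dropping low scores, and suppressing overlapping boxes. Below is a minimal sketch of that pipeline; it is not the frigate.detectors.util implementation, and the (1, 4 + num_classes, num_anchors) output layout and the 0.4 NMS threshold are assumptions (only the 0.5 score threshold comes from the commit notes).

# Rough sketch of yolov8-style postprocessing with NMS prefiltering.
# NOT the frigate.detectors.util code; output layout and NMS threshold are assumptions.
import cv2
import numpy as np


def sketch_yolov8_postprocess(tensor_output, score_threshold=0.5, nms_threshold=0.4):
    preds = tensor_output[0].T  # (num_anchors, 4 + num_classes)
    boxes_cxcywh = preds[:, :4]  # center-x, center-y, width, height in input pixels
    class_scores = preds[:, 4:]

    # Best class and its confidence for every anchor
    class_ids = np.argmax(class_scores, axis=1)
    confidences = class_scores[np.arange(len(class_ids)), class_ids]

    # Drop everything below the score threshold before running NMS
    keep = confidences >= score_threshold
    boxes_cxcywh = boxes_cxcywh[keep]
    class_ids = class_ids[keep]
    confidences = confidences[keep]
    if len(confidences) == 0:
        return []

    # cv2.dnn.NMSBoxes expects [x, y, w, h] boxes with a top-left origin
    boxes_tlwh = boxes_cxcywh.copy()
    boxes_tlwh[:, 0] -= boxes_cxcywh[:, 2] / 2
    boxes_tlwh[:, 1] -= boxes_cxcywh[:, 3] / 2

    indices = cv2.dnn.NMSBoxes(
        boxes_tlwh.tolist(), confidences.tolist(), score_threshold, nms_threshold
    )
    return [
        (int(class_ids[i]), float(confidences[i]), boxes_cxcywh[i])
        for i in np.array(indices).flatten()
    ]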

66 lines
2.3 KiB
Python

import glob
import logging

import numpy as np
from typing_extensions import Literal

from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig
from frigate.detectors.util import preprocess, yolov8_postprocess

logger = logging.getLogger(__name__)

DETECTOR_KEY = "onnx"


class ONNXDetectorConfig(BaseDetectorConfig):
    type: Literal[DETECTOR_KEY]


class ONNXDetector(DetectionApi):
    type_key = DETECTOR_KEY

    def __init__(self, detector_config: ONNXDetectorConfig):
        # onnxruntime is an optional dependency; fail with a clear hint if it is missing.
        try:
            import onnxruntime

            logger.info("ONNX: loaded onnxruntime module")
        except ModuleNotFoundError:
            logger.error(
                "ONNX: module loading failed, need 'pip install onnxruntime'?!?"
            )
            raise

        # This plugin only handles yolov8 models fed as NHWC tensors.
        assert (
            detector_config.model.model_type == "yolov8"
        ), "ONNX: detector_config.model.model_type: only yolov8 supported"
        assert (
            detector_config.model.input_tensor == "nhwc"
        ), "ONNX: detector_config.model.input_tensor: only nhwc supported"
        if detector_config.model.input_pixel_format != "rgb":
            logger.warning(
                f"ONNX: detector_config.model.input_pixel_format: should be 'rgb' for yolov8, but '{detector_config.model.input_pixel_format}' specified!"
            )

        # Suggest any model files already present in the model cache if no path is set.
        assert detector_config.model.path is not None, (
            "ONNX: No model.path configured, please configure model.path and model.labelmap_path; some suggestions: "
            + ", ".join(glob.glob("/config/model_cache/yolov8/*.onnx"))
            + " and "
            + ", ".join(glob.glob("/config/model_cache/yolov8/*_labels.txt"))
        )

        path = detector_config.model.path
        logger.info(f"ONNX: loading {detector_config.model.path}")
        self.model = onnxruntime.InferenceSession(path)
        logger.info(f"ONNX: {path} loaded")

    def detect_raw(self, tensor_input):
        model_input_name = self.model.get_inputs()[0].name
        model_input_shape = self.model.get_inputs()[0].shape

        # Convert the NHWC uint8 frame into the layout and dtype the model expects.
        tensor_input = preprocess(tensor_input, model_input_shape, np.float32)

        tensor_output = self.model.run(None, {model_input_name: tensor_input})[0]

        # Decode the raw yolov8 output into Frigate's detection format.
        return yolov8_postprocess(model_input_shape, tensor_output)
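
For a quick sanity check outside Frigate, the preprocess -> run -> postprocess flow that detect_raw() performs can be approximated with onnxruntime directly. The snippet below is a minimal sketch, not Frigate code: the model filename, the 320x320 input size, and the manual NHWC uint8 -> NCHW float32 conversion are assumptions that only approximate what frigate.detectors.util.preprocess does.

# Standalone sketch of the flow detect_raw() wraps; not Frigate code.
# Assumes a yolov8n model exported for 320x320 input sits in the model cache.
import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession("/config/model_cache/yolov8/yolov8n_320x320.onnx")
input_name = session.get_inputs()[0].name
input_shape = session.get_inputs()[0].shape  # e.g. [1, 3, 320, 320]

# Fake NHWC uint8 frame in the layout Frigate hands to detect_raw()
frame = np.random.randint(0, 255, (1, 320, 320, 3), dtype=np.uint8)

# Rough stand-in for preprocess(): NHWC uint8 -> NCHW float32 scaled to [0, 1]
blob = frame.astype(np.float32).transpose(0, 3, 1, 2) / 255.0

output = session.run(None, {input_name: blob})[0]
print(output.shape)  # raw yolov8 head output, e.g. (1, 84, 2100) for a 320x320 COCO model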