Jetson onnxruntime (#14688)

Repository: https://github.com/blakeblackshear/frigate.git
Commit: c7a4220d65 (parent: 03dd9b2d42)

* Add support for using onnx runtime with jetson
* Update docs
* Clarify
docker/tensorrt/Dockerfile.arm64:

@@ -10,8 +10,8 @@ ARG DEBIAN_FRONTEND
 # Use a separate container to build wheels to prevent build dependencies in final image
 RUN apt-get -qq update \
     && apt-get -qq install -y --no-install-recommends \
     python3.9 python3.9-dev \
     wget build-essential cmake git \
     && rm -rf /var/lib/apt/lists/*

 # Ensure python3 defaults to python3.9
@@ -41,7 +41,8 @@ RUN --mount=type=bind,source=docker/tensorrt/detector/build_python_tensorrt.sh,t
     && TENSORRT_VER=$(cat /etc/TENSORRT_VER) /deps/build_python_tensorrt.sh

 COPY docker/tensorrt/requirements-arm64.txt /requirements-tensorrt.txt
-RUN pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt
+RUN pip3 uninstall -y onnxruntime \
+    && pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt

 FROM build-wheels AS trt-model-wheels
 ARG DEBIAN_FRONTEND
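The added `pip3 uninstall -y onnxruntime` step drops any generic onnxruntime build already present in the build stage, so that the Jetson-specific wheel pinned in requirements-arm64.txt (next hunk) is the one that ends up in /trt-wheels. A minimal sanity check, assuming a container built from this Dockerfile, is to ask the installed package which execution providers it was compiled with:

```python
# Quick check inside the built image (assumption: onnxruntime here comes
# from the Jetson wheel in requirements-arm64.txt, not the PyPI build).
import onnxruntime as ort

print(ort.__version__)
# The generic CPU wheel reports only CPUExecutionProvider; a Jetson/CUDA
# build should additionally list CUDA and TensorRT providers.
print(ort.get_available_providers())
```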
docker/tensorrt/requirements-arm64.txt:

@@ -1 +1,2 @@
 cuda-python == 11.7; platform_machine == 'aarch64'
+onnxruntime @ https://nvidia.box.com/shared/static/9aemm4grzbbkfaesg5l7fplgjtmswhj8.whl; platform_machine == 'aarch64'
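Both lines use PEP 508 syntax: the new one is a direct URL reference (`name @ URL`), and the part after the semicolon is an environment marker that restricts installation to aarch64 machines such as Jetson boards. A minimal sketch using the `packaging` library (the marker implementation pip builds on) shows how that marker evaluates; the override dictionaries are illustrative, not part of this commit:

```python
from packaging.markers import Marker

marker = Marker("platform_machine == 'aarch64'")

# Evaluates against the current interpreter's environment:
# True on a Jetson (aarch64), False on x86_64, where pip skips the line.
print(marker.evaluate())

# Override the environment to see both outcomes explicitly:
print(marker.evaluate({"platform_machine": "aarch64"}))  # True
print(marker.evaluate({"platform_machine": "x86_64"}))   # False
```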
docs/docs/configuration/object_detectors.md:

@@ -22,8 +22,8 @@ Frigate supports multiple different detectors that work on different types of ha
 - [ONNX](#onnx): OpenVINO will automatically be detected and used as a detector in the default Frigate image when a supported ONNX model is configured.

 **Nvidia**
-- [TensorRT](#nvidia-tensorrt-detector): TensorRT can run on Nvidia GPUs, using one of many default models.
-- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.
+- [TensorRT](#nvidia-tensorrt-detector): TensorRT can run on Nvidia GPUs and Jetson devices, using one of many default models.
+- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` or `-tensorrt-jp(4/5)` Frigate images when a supported ONNX model is configured.

 **Rockchip**
 - [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.
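The "automatically be detected" wording in these bullets maps onto onnxruntime execution providers: the runtime reports which providers its build supports, and a TensorRT-capable build is what enables GPU/Jetson inference for ONNX models. A minimal sketch of that mechanism, assuming an installed Jetson/CUDA onnxruntime build and a placeholder model path (Frigate resolves the real path from its model configuration, not a hard-coded file):

```python
import onnxruntime as ort

# Providers compiled into this build; the Jetson wheel should include
# TensorrtExecutionProvider and CUDAExecutionProvider.
print(ort.get_available_providers())

# Hypothetical model path, for illustration only.
session = ort.InferenceSession(
    "/config/model_cache/yolov7-320.onnx",
    providers=[
        "TensorrtExecutionProvider",  # preferred on Jetson / Nvidia GPUs
        "CUDAExecutionProvider",      # GPU fallback
        "CPUExecutionProvider",       # always available
    ],
)
print([i.name for i in session.get_inputs()])
```

onnxruntime silently falls back down the provider list, so the same session code works on CPU-only hosts as well.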
@@ -68,6 +68,7 @@ If the correct build is used for your GPU and the `large` model is configured, t

 **Nvidia**
 - Nvidia GPUs will automatically be detected and used as a detector in the `-tensorrt` Frigate image.
+- Jetson devices will automatically be detected and used as a detector in the `-tensorrt-jp(4/5)` Frigate image.

 :::
