Mirror of https://github.com/blakeblackshear/frigate.git, synced 2024-12-23 19:11:14 +01:00
dd02958f7c
* Update to latest tensorrt (8.6.1) release
* Build trt libyolo_layer.so in container
* Update tensorrt_models script to convert models from the frigate container
* Fix typo in model script
* Fix paths to yolo lib and models folder
* Add S6 scripts to test and convert specified TensorRT models at startup. Rearrange tensorrt files into a docker support folder.
* Update TensorRT documentation to reflect the new model conversion process and minimum HW support.
* Fix model_cache path to live in config directory
* Move tensorrt s6 files to the correct directory
* Fix issues in model generation script
* Disable global timeout for s6 services
* Add version folder to tensorrt model_cache path
* Include TensorRT version 8.5.3
* Add numpy requirement prior to removal of np.bool
* This TRT version uses a mixture of cuda dependencies
* Redirect stdout from noisy model conversion
12 lines
613 B
Plaintext
# NVidia TensorRT Support (amd64 only)
--extra-index-url 'https://pypi.nvidia.com'
numpy < 1.24; platform_machine == 'x86_64'
tensorrt == 8.5.3.*; platform_machine == 'x86_64'
cuda-python == 11.8; platform_machine == 'x86_64'
cython == 0.29.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu12 == 12.1.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu11 == 11.8.*; platform_machine == 'x86_64'
nvidia-cublas-cu11 == 11.11.3.6; platform_machine == 'x86_64'
nvidia-cudnn-cu11 == 8.6.0.*; platform_machine == 'x86_64'
onnx==1.14.0; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'
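Each `; platform_machine == 'x86_64'` suffix above is a PEP 508 environment marker: pip evaluates it against the running interpreter at install time and silently skips the requirement on non-matching machines, which is how this file stays amd64-only without a separate arm64 variant. A minimal sketch of the equivalent check, using only the stdlib `platform` module (the helper name is illustrative, not part of Frigate or pip):

```python
import platform

def marker_matches(required_machine: str) -> bool:
    """Sketch of how pip resolves the platform_machine environment marker.

    pip compares the marker's right-hand side against the value of
    platform.machine() on the installing host; only on a match is the
    pinned package installed.
    """
    return platform.machine() == required_machine

# On an amd64 host every pinned package above is installed;
# on any other architecture the whole file is effectively a no-op.
tensorrt_wanted = marker_matches("x86_64")
```

On an arm64 build (e.g. Jetson), `marker_matches("x86_64")` is false for every line, so pip resolves this requirements file to an empty set rather than failing.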