Add support for selecting a specific GPU to use when converting TRT models (#7857)

Nate Meyer 2023-09-21 06:23:51 -04:00 committed by GitHub
parent 5d30944d6e
commit 0d6bb6714a
2 changed files with 26 additions and 0 deletions


@@ -43,6 +43,15 @@ if [[ -z ${MODEL_CONVERT} ]]; then
exit 0
fi
# Setup ENV to select GPU for conversion
if [ ! -z ${TRT_MODEL_PREP_DEVICE+x} ]; then
if [ ! -z ${CUDA_VISIBLE_DEVICES+x} ]; then
PREVIOUS_CVD="$CUDA_VISIBLE_DEVICES"
unset CUDA_VISIBLE_DEVICES
fi
export CUDA_VISIBLE_DEVICES="$TRT_MODEL_PREP_DEVICE"
fi
# On Jetpack 4.6, the nvidia container runtime will mount several host nvidia libraries into the
# container which should not be present in the image - if they are, TRT model generation will
# fail or produce invalid models. Thus we must request the user to install them on the host in
@@ -87,5 +96,14 @@ do
echo "Generated ${model}.trt in $(($(date +%s)-start)) seconds"
done
# Restore ENV after conversion
if [ ! -z ${TRT_MODEL_PREP_DEVICE+x} ]; then
unset CUDA_VISIBLE_DEVICES
if [ ! -z ${PREVIOUS_CVD+x} ]; then
export CUDA_VISIBLE_DEVICES="$PREVIOUS_CVD"
fi
fi
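The `TRT_MODEL_PREP_DEVICE+x` tests in the hunks above rely on the shell's `${VAR+x}` parameter expansion, which distinguishes an unset variable from one set to the empty string. A minimal sketch of the idiom (the `FOO` variable name is illustrative, not from the patch):

```shell
#!/bin/sh
# ${VAR+x} expands to "x" when VAR is set (even to an empty string)
# and to nothing when VAR is unset, so it reliably distinguishes
# "unset" from "set but empty".
unset FOO
if [ -n "${FOO+x}" ]; then state_unset="set"; else state_unset="unset"; fi

FOO=""
if [ -n "${FOO+x}" ]; then state_empty="set"; else state_empty="unset"; fi

echo "unset FOO -> $state_unset"
echo "FOO=\"\"   -> $state_empty"
```

This is why the restore logic can tell whether `CUDA_VISIBLE_DEVICES` was previously set at all, rather than just whether it was non-empty.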
# Print which models exist in output folder
echo "Available tensorrt models:"
cd ${OUTPUT_FOLDER} && ls *.trt;


@@ -239,6 +239,14 @@ frigate:
- USE_FP16=false
```
If you have multiple GPUs passed through to Frigate, you can specify which one to use for model conversion. The conversion script uses the first visible GPU by default; however, on systems with mixed GPU models you may not want to use that default index. Set the `TRT_MODEL_PREP_DEVICE` environment variable to select a specific GPU.
```yml
frigate:
environment:
- TRT_MODEL_PREP_DEVICE=0 # Optionally, select which GPU is used for model optimization
```
### Configuration Parameters

The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpu) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
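As a sketch, a detector entry selecting GPU index 1 might look like the following (the detector key name is illustrative; `type` and `device` are the parameters described above):

```yml
detectors:
  tensorrt:
    type: tensorrt
    device: 1 # GPU index as shown by nvidia-smi inside the container
```

Note that `device` selects the GPU used for inference at runtime, while `TRT_MODEL_PREP_DEVICE` from this commit selects the GPU used only during model conversion; the two may point at different GPUs.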