Add nvidia detector inference times from survey (#5456)

* Add nvidia detector inference times from survey

* Fix typo

* Update hardware.md
Nicolas Mowen 2023-02-11 18:59:36 -07:00 committed by GitHub
parent 27a31e731f
commit c74c9ff161


@@ -80,10 +80,15 @@ The TensorRT detector is able to run on x86 hosts that have an Nvidia GPU which
 Inference speeds will vary greatly depending on the GPU and the model used.
 `tiny` variants are faster than the equivalent non-tiny model, some known examples are below:
 
-| Name     | Model           | Inference Speed |
-| -------- | --------------- | --------------- |
-| RTX 3050 | yolov4-tiny-416 | ~ 5 ms          |
-| RTX 3050 | yolov7-tiny-416 | ~ 6 ms          |
+| Name            | Inference Speed |
+| --------------- | --------------- |
+| GTX 1060 6GB    | ~ 7 ms          |
+| GTX 1070        | ~ 6 ms          |
+| GTX 1660 SUPER  | ~ 4 ms          |
+| RTX 3050        | 5 - 7 ms        |
+| RTX 3070 Mobile | ~ 5 ms          |
+| Quadro P400 2GB | 20 - 25 ms      |
+| Quadro P2000    | ~ 12 ms         |
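
For context (not part of this diff), the speeds above come from running one of the listed models through Frigate's TensorRT detector. A minimal config sketch is below; the model path and dimensions are illustrative and assume a `yolov7-tiny-416` TensorRT engine has already been generated:

```yaml
# Hypothetical example config; adjust paths/model to your setup.
detectors:
  tensorrt:
    type: tensorrt
    device: 0 # GPU index when multiple Nvidia GPUs are present

model:
  # Assumed location of a pre-generated TensorRT engine file
  path: /trt-models/yolov7-tiny-416.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 416
  height: 416
```
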
 ## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)