Mirror of https://github.com/blakeblackshear/frigate.git (synced 2024-12-23 19:11:14 +01:00)
Update recommended hardware page to reflect multiple detectors (#4746)
* Update recommended hardware page to reflect multiple detectors are supported
* Shift numbers around slightly
* Update with specific range
* Update with new observed range
* Add i5 example
* Should support arm32 as well
* Add more detail to supported platforms
* Fix typo
* Format table
* Fix table header
* Add info about tensorrt detectors and link to docs

Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>
This commit is contained in: commit 9c6193b9b5 (parent e4a79b12eb)
@@ -23,7 +23,7 @@ I may earn a small commission for my endorsement, recommendation, testimonial, o
My current favorite is the Minisforum GK41 because of the dual NICs that allow you to set up a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU and capable of running Debian should work fine. As a bonus, you may want to look for devices with an M.2 or PCIe slot that is compatible with the Google Coral. I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
| Name | Coral Inference Speed | Coral Compatibility | Notes |
| ---- | --------------------- | ------------------- | ----- |
| Odyssey X86 Blue J4125 (<a href="https://amzn.to/3oH4BKi" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) (<a href="https://www.seeedstudio.com/Frigate-NVR-with-Odyssey-Blue-and-Coral-USB-Accelerator.html?utm_source=Frigate" target="_blank" rel="nofollow noopener sponsored">SeeedStudio</a>) | 9-10ms | M.2 B+M, USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
| Minisforum GK41 (<a href="https://amzn.to/3ptnb8D" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 9-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
@@ -33,9 +33,13 @@ My current favorite is the Minisforum GK41 because of the dual NICs that allow y
| Atomic Pi (<a href="https://amzn.to/2YjpY9m" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 16ms | USB | Good option for a dedicated low power board with a small number of cameras. Can leverage Intel QuickSync for stream decoding. |
| Raspberry Pi 4 (64bit) (<a href="https://amzn.to/2YhSGHH" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 10-15ms | USB | Can handle a small number of cameras. |
## Detectors
A detector is a device which is optimized for running inferences efficiently to detect objects. Using a recommended detector means there will be less latency between detections and more detections can be run per second. Frigate is designed around the expectation that a detector is used to achieve very low inference speeds. Offloading TensorFlow to a detector is an order of magnitude faster and will reduce your CPU load dramatically. As of 0.12, Frigate supports a handful of different detector types with varying inference speeds and performance.
### Google Coral TPU
It is strongly recommended to use a Google Coral. A $60 device will outperform a $2000 CPU. Frigate should work with any supported Coral device from https://coral.ai.
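As a point of reference, a single USB Coral is configured as an `edgetpu` detector. The sketch below assumes the key names shown in the detector docs; double-check them against the configuration reference for your version.

```yaml
# Minimal sketch: one USB Coral as the only detector.
detectors:
  coral:
    type: edgetpu
    device: usb
```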
The USB version is compatible with the widest variety of hardware and does not require a driver on the host machine. However, it does lack the automatic throttling features of the other versions.
@@ -43,7 +47,36 @@ The PCIe and M.2 versions require installation of a driver on the host. Follow t
A single Coral can handle many cameras and will be sufficient for the majority of users. You can calculate the maximum performance of your Coral based on the inference speed reported by Frigate. With an inference speed of 10 ms, your Coral will top out at `1000/10=100`, or 100 frames per second. If your detection fps is regularly getting close to that, you should first consider tuning motion masks. If those are already properly configured, a second Coral may be needed.
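If you do reach that point, each Coral is added as its own named detector. Here is a sketch assuming two USB Corals; the `usb:0`/`usb:1` indexing follows the detector docs, so verify it for your setup.

```yaml
# Sketch: two USB Corals sharing the detection load.
detectors:
  coral1:
    type: edgetpu
    device: usb:0
  coral2:
    type: edgetpu
    device: usb:1
```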
### OpenVINO
The OpenVINO detector type is able to run on:

- 6th Gen Intel Platforms and newer that have an iGPU
- x86 & Arm32/64 hosts with VPU Hardware (ex: Intel NCS2)
More information is available [in the detector docs](/configuration/detectors#openvino-detector).
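As a rough sketch of what enabling it can look like (the `AUTO` device value and the bundled model path below are assumptions taken from the detector docs; adjust them for your install):

```yaml
# Sketch: OpenVINO detector using the model bundled with the Frigate image (assumed path).
detectors:
  ov:
    type: openvino
    device: AUTO
    model:
      path: /openvino-model/ssdlite_mobilenet_v2.xml
```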
Inference speeds vary greatly depending on the CPU, GPU, or VPU used; some known examples are below:
| Name                | Inference Speed | Notes                                                                  |
| ------------------- | --------------- | ---------------------------------------------------------------------- |
| Intel Celeron J4105 | ~ 25 ms         | Inference speeds on CPU were ~ 150 ms                                  |
| Intel Celeron N4020 | 50 - 200 ms     | Inference speeds on CPU were ~ 800 ms, greatly depends on other loads  |
| Intel NCS2 VPU      | 60 - 65 ms      | May vary based on host device                                          |
| Intel i5 1135G7     | 10 - 15 ms      |                                                                        |
### TensorRT
The TensorRT detector is able to run on x86 hosts that have an Nvidia GPU which supports the 11.x series of CUDA libraries. The minimum driver version on the host system must be `>=450.80.02`. The GPU must also support a Compute Capability of `5.0` or greater, which generally correlates to a Maxwell-era GPU or newer; check the [TensorRT docs for more info](/configuration/detectors#nvidia-tensorrt-detector).
Inference speeds will vary greatly depending on the GPU and the model used.
`tiny` variants are faster than the equivalent non-tiny model; some known examples are below:
| Name     | Model           | Inference Speed |
| -------- | --------------- | --------------- |
| RTX 3050 | yolov4-tiny-416 | ~ 5 ms          |
| RTX 3050 | yolov7-tiny-416 | ~ 6 ms          |
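For illustration, a TensorRT detector entry might look like the sketch below. The GPU index, the pre-generated `yolov7-tiny-416` model path, and the model dimensions are assumptions based on the detector docs; confirm them there before use.

```yaml
# Sketch: TensorRT detector on the first GPU, using an assumed pre-generated model path.
detectors:
  tensorrt:
    type: tensorrt
    device: 0

model:
  path: /trt-models/yolov7-tiny-416.trt
  width: 416
  height: 416
```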
## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)
This is taken from a [user question on reddit](https://www.reddit.com/r/homeassistant/comments/q8mgau/comment/hgqbxh5/?utm_source=share&utm_medium=web2x&context=3). Modified slightly for clarity.
@@ -59,7 +92,7 @@ However we realize that there is a problem. There is still booby poop all over t
Basically, when you increase the resolution and/or the frame rate of the stream, there is significantly more data for the CPU to parse. That takes additional computing power. The Google Coral is really good at doing object detection, but it doesn't have time to look everywhere all the time (especially when there are many windows to check). To balance it, Frigate uses the CPU to look for movement, then sends those frames to the Coral to do object detection. This keeps the Coral available to a large number of cameras without overloading it.
## Do hwaccel args help if I am using a Coral?
YES! The Coral does not help with decoding video streams.
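If your hardware supports it, decoding can be offloaded with hwaccel args. Below is a minimal sketch assuming an Intel iGPU and the `preset-vaapi` preset name from the hardware acceleration docs; verify the preset for your hardware and Frigate version.

```yaml
# Sketch: offload video decoding to an Intel iGPU via VAAPI (preset name assumed).
ffmpeg:
  hwaccel_args: preset-vaapi
```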