# Frigate - Realtime Object Detection for RTSP Cameras
**Note:** This version requires the use of a [Google Coral USB Accelerator](https://coral.withgoogle.com/products/accelerator/)
Uses OpenCV and TensorFlow to perform realtime object detection locally for RTSP cameras. Designed for integration with HomeAssistant or others via MQTT.
- Leverages multiprocessing and threads heavily with an emphasis on realtime over processing every frame
- Allows you to define specific regions (squares) in the image to look for objects
- No motion detection (for now)
- Object detection with TensorFlow runs in a separate thread
- Object info is published over MQTT for integration into HomeAssistant as a binary sensor
- An endpoint is available to view an MJPEG stream for debugging
![Diagram](diagram.png)
## Example video (from older version)
You see multiple bounding boxes because boxes are drawn from every frame in the past second where a person was detected; not all of them are from the current frame.
[![](http://img.youtube.com/vi/nqHbCtyo4dY/0.jpg)](http://www.youtube.com/watch?v=nqHbCtyo4dY "Frigate")
## Getting Started
Build the container with
```
docker build -t frigate .
```
The `mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite` model is included and used by default. You can use your own model and labels by mounting files in the container at `/frozen_inference_graph.pb` and `/label_map.pbtext`. Models must be compiled for the Coral Edge TPU; see the [Coral models page](https://coral.withgoogle.com/models/) for compatible models.
Run the container with
```
docker run --rm \
--privileged \
-v /dev/bus/usb:/dev/bus/usb \
-v <path_to_config_dir>:/config:ro \
-p 5000:5000 \
-e RTSP_PASSWORD='password' \
frigate:latest
```
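If you use your own model and labels, the same run command can be extended with read-only bind mounts. This is only a sketch: the host-side filenames `my_model_edgetpu.tflite` and `my_labels.pbtext` are placeholders, not files shipped with the project.
```
# host filenames are placeholders; the container-side paths are the ones frigate reads
docker run --rm \
--privileged \
-v /dev/bus/usb:/dev/bus/usb \
-v <path_to>/my_model_edgetpu.tflite:/frozen_inference_graph.pb:ro \
-v <path_to>/my_labels.pbtext:/label_map.pbtext:ro \
-v <path_to_config_dir>:/config:ro \
-p 5000:5000 \
-e RTSP_PASSWORD='password' \
frigate:latest
```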
Example docker-compose:
```
frigate:
container_name: frigate
restart: unless-stopped
privileged: true
image: frigate:latest
volumes:
- /dev/bus/usb:/dev/bus/usb
- <path_to_config>:/config
ports:
- "5000:5000"
environment:
RTSP_PASSWORD: "password"
```
A `config.yml` file must exist in the `config` directory. See example [here](config/config.yml).
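As an illustrative sketch only (the key names below are assumptions; the linked example file is the authoritative reference), a config pairs MQTT connection details with per-camera RTSP settings and detection regions:
```
# Illustrative sketch; see config/config.yml for the real schema
mqtt:
  host: mqtt.server.com
  topic_prefix: frigate

cameras:
  back:
    rtsp:
      user: admin
      host: 192.168.1.10
      port: 554
      # substituted from the RTSP_PASSWORD environment variable
      password: $RTSP_PASSWORD
      path: /cam/realmonitor
    regions:
      - size: 300
        x_offset: 0
        y_offset: 300
```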
Access the mjpeg stream at `http://localhost:5000/<camera_name>` and the best person snapshot at `http://localhost:5000/<camera_name>/best_person.jpg`.
## Integration with HomeAssistant
```
camera:
- name: Camera Last Person
platform: generic
still_image_url: http://<ip>:5000/<camera_name>/best_person.jpg
binary_sensor:
- name: Camera Person
platform: mqtt
state_topic: "frigate/<camera_name>/objects"
value_template: '{{ value_json.person }}'
device_class: motion
availability_topic: "frigate/available"
```
## Tips
- Lower the framerate of the RTSP feed on the camera to reduce the CPU usage for capturing the feed
## Future improvements
- [x] Remove motion detection for now
- [x] Try running object detection in a thread rather than a process
- [x] Implement min person size again
- [x] Switch to a config file
- [x] Handle multiple cameras in the same container
- [ ] Attempt to figure out coral symlinking
- [ ] Add object list to config with min scores for mqtt
- [ ] Move mjpeg encoding to a separate process
- [ ] Simplify motion detection (check entire image against mask, resize instead of gaussian blur)
- [ ] See if motion detection is even worth running
- [ ] Scan for people across entire image rather than specific regions
- [ ] Dynamically resize detection area and follow people
- [ ] Add ability to turn detection on and off via MQTT
- [ ] Output movie clips of people for notifications, etc.
- [ ] Integrate with homeassistant push camera
- [ ] Merge bounding boxes that span multiple regions
- [ ] Implement mode to save labeled objects for training
- [ ] Try and reduce CPU usage by simplifying the tensorflow model to just include the objects we care about
- [ ] Look into GPU accelerated decoding of RTSP stream
- [ ] Send video over a socket and use JSMPEG
- [x] Look into neural compute stick