
Frigate - Realtime Object Detection for IP Cameras

Note: This version requires the use of a Google Coral USB Accelerator

Uses OpenCV and TensorFlow to perform realtime object detection locally for IP cameras. Designed for integration with HomeAssistant or other systems via MQTT.

  • Leverages multiprocessing and threads heavily with an emphasis on realtime over processing every frame
  • Allows you to define specific regions (squares) in the image to look for objects
  • No motion detection (for now)
  • Object detection with TensorFlow runs in a separate thread
  • Object info is published over MQTT for integration into HomeAssistant as a binary sensor
  • An endpoint is available to view an MJPEG stream for debugging
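Detection state is published on topics of the form frigate/<camera_name>/<object_name> (with snapshots on a trailing /snapshot topic and availability on frigate/available, as shown in the HomeAssistant examples below). A minimal Python sketch of parsing these topics on the consuming side — the function names, sample camera name, and the assumption of ON/OFF payloads are illustrative, not part of Frigate:

```python
def parse_topic(topic):
    """Split a frigate/<camera_name>/<object_name> topic into (camera, object).

    Returns None for topics that don't match this shape, such as the
    frigate/available availability topic or the four-part
    frigate/<camera_name>/<object_name>/snapshot topics.
    """
    parts = topic.split("/")
    if len(parts) != 3 or parts[0] != "frigate":
        return None
    return parts[1], parts[2]


def is_detection_on(topic, payload, wanted_object="person"):
    """True when the payload signals an active detection for wanted_object.

    Assumes ON/OFF payloads, matching HomeAssistant's binary_sensor defaults.
    """
    parsed = parse_topic(topic)
    return parsed is not None and parsed[1] == wanted_object and payload == "ON"
```

For example, is_detection_on("frigate/back_yard/person", "ON") returns True, while the same topic with an "OFF" payload returns False.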

Diagram

Example video (from older version)

You see multiple bounding boxes because boxes are drawn from every frame in the past 1 second where a person was detected; not all of them come from the current frame.
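The trailing-second behavior described above can be sketched with a timestamped buffer. This is an illustration of the idea, not Frigate's actual implementation; the class name is made up:

```python
import time
from collections import deque


class RecentBoxes:
    """Keep detected bounding boxes and report all from the last `window` seconds."""

    def __init__(self, window=1.0):
        self.window = window
        self._boxes = deque()  # (timestamp, box) pairs, oldest first

    def add(self, box, ts=None):
        """Record a detection; ts defaults to now."""
        self._boxes.append((time.time() if ts is None else ts, box))

    def current(self, now=None):
        """Boxes seen within the last `window` seconds (all of these get drawn)."""
        now = time.time() if now is None else now
        while self._boxes and now - self._boxes[0][0] > self.window:
            self._boxes.popleft()
        return [box for _, box in self._boxes]
```

On each rendered frame you would draw every box returned by current(), so a person detected half a second ago still shows a box even if the current frame missed them.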

Getting Started

Build the container with

docker build -t frigate .

The mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite model is included and used by default. You can use your own model and labels by mounting them into the container at /frozen_inference_graph.pb and /label_map.pbtext. Models must be compiled for the Coral Edge TPU.

Run the container with

docker run --rm \
--privileged \
-v /dev/bus/usb:/dev/bus/usb \
-v <path_to_config_dir>:/config:ro \
-v /etc/localtime:/etc/localtime:ro \
-p 5000:5000 \
-e FRIGATE_RTSP_PASSWORD='password' \
frigate:latest

Example docker-compose:

  frigate:
    container_name: frigate
    restart: unless-stopped
    privileged: true
    image: frigate:latest
    volumes:
      - /dev/bus/usb:/dev/bus/usb
      - /etc/localtime:/etc/localtime:ro
      - <path_to_config>:/config
    ports:
      - "5000:5000"
    environment:
      FRIGATE_RTSP_PASSWORD: "password"

A config.yml file must exist in the config directory. See the example in the config directory of this repo; device-specific info can be found in the docs directory.

Access the MJPEG stream at http://localhost:5000/<camera_name> and the best snapshot for any object type at http://localhost:5000/<camera_name>/<object_name>/best.jpg
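These endpoints are easy to script against. A small sketch that builds the two URLs and downloads a snapshot with the standard library — the host, camera name, and helper names are placeholders, not part of Frigate:

```python
from urllib.request import urlopen


def stream_url(host, camera, port=5000):
    """URL of the debug MJPEG stream for a camera."""
    return f"http://{host}:{port}/{camera}"


def best_snapshot_url(host, camera, obj, port=5000):
    """URL of the best snapshot for an object type on a camera."""
    return f"http://{host}:{port}/{camera}/{obj}/best.jpg"


def save_best_snapshot(host, camera, obj, path):
    """Download the current best snapshot to a local file."""
    with urlopen(best_snapshot_url(host, camera, obj)) as resp, open(path, "wb") as f:
        f.write(resp.read())
```

For example, save_best_snapshot("localhost", "back_yard", "person", "person.jpg") would fetch the latest best person snapshot from a locally running container.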

Integration with HomeAssistant

camera:
  - name: Camera Last Person
    platform: mqtt
    topic: frigate/<camera_name>/person/snapshot
  - name: Camera Last Car
    platform: mqtt
    topic: frigate/<camera_name>/car/snapshot

binary_sensor:
  - name: Camera Person
    platform: mqtt
    state_topic: "frigate/<camera_name>/person"
    device_class: motion
    availability_topic: "frigate/available"

automation:
  - alias: Alert me if a person is detected while armed away
    trigger: 
      platform: state
      entity_id: binary_sensor.camera_person
      from: 'off'
      to: 'on'
    condition:
      - condition: state
        entity_id: alarm_control_panel.home_alarm
        state: armed_away
    action:
      - service: notify.user_telegram
        data:
          message: "A person was detected."
          data:
            photo:
              - url: http://<ip>:5000/<camera_name>/person/best.jpg
                caption: A person was detected.
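The automation's gate (a person appears while the alarm is armed away) boils down to a small predicate. A hypothetical Python sketch of that logic, not HomeAssistant code:

```python
def should_alert(old_state, new_state, alarm_state):
    """Mirror the automation above: fire only on an off -> on transition
    of the person sensor while the alarm is in armed_away."""
    return old_state == "off" and new_state == "on" and alarm_state == "armed_away"
```

A repeated "on" report or a detection while disarmed does not fire, which is exactly what the from/to trigger plus the condition block express in YAML.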

Tips

  • Lower the framerate of the video feed on the camera to reduce the CPU usage for capturing the feed

Future improvements

  • Remove motion detection for now
  • Try running object detection in a thread rather than a process
  • Implement min person size again
  • Switch to a config file
  • Handle multiple cameras in the same container
  • Attempt to figure out coral symlinking
  • Add object list to config with min scores for mqtt
  • Move mjpeg encoding to a separate process
  • Simplify motion detection (check entire image against mask, resize instead of gaussian blur)
  • See if motion detection is even worth running
  • Scan for people across entire image rather than specific regions
  • Dynamically resize detection area and follow people
  • Add ability to turn detection on and off via MQTT
  • Output movie clips of people for notifications, etc.
  • Integrate with homeassistant push camera
  • Merge bounding boxes that span multiple regions
  • Implement mode to save labeled objects for training
  • Try to reduce CPU usage by simplifying the TensorFlow model to just include the objects we care about
  • Look into GPU accelerated decoding of RTSP stream
  • Send video over a socket and use JSMPEG
  • Look into neural compute stick