* Update readme features
* Remove RTMP setup from setup guide
* Update integration for RTSP
* Remove rtmp from faq
* Remove RTMP stream from guide
* Remove rtmp from install
* Remove rtmp from dev config
* Add in_progress parameter to /api/events to filter the results.
* Change in_progress to default to no filtering; 0 returns only completed events, 1 returns only in-progress events.
* Fix code format with black.
* Clear blank line.
The link to the Home Assistant integration documentation was missing the leading slash, which caused the path to be appended to the `/frigate` path of this page.
* Add missing labels to default labelmap. Fill any holes with "unknown". Remove unique labelmap for tensorrt.
* Replace "truck" with "car" on Openvino labelmap
* Add tables for ffmpeg presets and how to use them
* Make it clear that ffmpeg processes may not show when nvidia-smi is run inside the container
* Add specific example of mixed input arg presets
* Update docs/docs/configuration/ffmpeg_presets.md
Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>
* typos
Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Remove branch from URL to tensorrt_models.sh
* Reword to make TensorRT model singular
* Add note about installing nvidia docker runtime and compatible drivers
* add information about frigate plus to docs
* Update docs/docs/integrations/plus.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add video codec to restream config
* Add handling of encode engine and video codec
* Add test for video encoding
* Set in main configuration docs as well
* Add example to restream docs
* Put back patch
* Update recommended hardware page to reflect that multiple detectors are supported
* Shift numbers around slightly
* Update with specific range
* Update with new observed range
* Add i5 example
Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>
* Should support arm32 as well
Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>
* Add more detail to supported platforms
* Fix typo
* Format table
* Fix table header
* Add info about tensorrt detectors and link to docs
* Add info about tensorrt detectors and link to docs
Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>
* Try using RTSP for restream
* Add ability to get snapshot of birdseye when birdseye restream is enabled
* Write to pipe instead of encoding mpeg1
* Write to cache instead
* Use const for location
* Formatting
* Add hardware encoding for birdseye based on ffmpeg preset
* Provide framerate
* Adjust args
* Fix order
* Delete pipe file if it exists
* Cleanup spacing
* Fix spacing
* Initial WIP dockerfile and scripts to add tensorrt support
* Add tensorRT detector
* WIP attempt to install TensorRT 8.5
* Updates to detector for cuda python library
* TensorRT Cuda library rework WIP
Does not run
* Fixes from rebase to detector factory
* Fix parsing output memory pointer
* Handle TensorRT logs with the python logger
* Use non-async interface and convert input data to float32. Detection runs without error.
* Make TensorRT a separate build from the base Frigate image.
* Add script and documentation for generating TRT Models
* Add support for TensorRT devcontainer
* Add labelmap to trt model script and docs. Cleanup of old scripts.
* Update detect to normalize input tensor using model input type
* Add config for selecting GPU. Fix Async inference. Update documentation.
* Update some CUDA libraries to clean up version warning
* Add CI stage to build TensorRT tag
* Add note in docs for image tag and model support
* Initial work for adding OpenVino detector. Not functional
* Load model and submit for inference.
Successfully loads the model and initializes the OpenVINO engine with either CPU or GPU as the device.
Does not parse results for objects.
* Detection working with ssdlite_mobilenetv2 FP16 model
* Add OpenVINO support and model to docker image
* Add documentation for OpenVino detector configuration
* Adds support for ARM32/ARM64 and the Myriad X hardware
- Use custom-built openvino wheel for all platforms
- Add libusb build without udev for NCS2 support
* Add documentation around Intel CPU requirements and NCS2 setup
* Print all available output tensors
* Update documentation for config parameters
* Add hwaccel presets
* Use hwaccel presets
* Add input arg presets
* Use input arg presets
* Make util to clean up redundant code
* Add support for output arg presets
* Add tests
* Update camera specific to use presets
* Update hwaccel to use presets
* Format files and fix tests
* Rewrite tests to test record correctly
* Move presets from string to list to avoid manually separating into a list
* Add mjpeg cuvid decoder preset
* Fix tests
* Fix comment
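A minimal sketch of how these presets might be used in a camera config; the preset names are taken from the ffmpeg presets documentation and should be verified there, and the camera name and stream URL are placeholders:

```yaml
ffmpeg:
  # global hardware acceleration preset (example: VAAPI)
  hwaccel_args: preset-vaapi

cameras:
  front:
    ffmpeg:
      inputs:
        - path: rtsp://10.0.0.10:554/stream
          # input arg preset instead of a manually maintained list of args
          input_args: preset-rtsp-generic
          roles:
            - detect
```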
* Add option for mqtt config
* Setup communication layer
* Have a dispatcher which is responsible for handling and sending messages
* Move mqtt to communication
* Separate ws communications module
* Make ws client conform to communicator
* Cleanup imports
* Migrate to new dispatcher
* Clean up
* Need to set topic prefix
* Remove references to mqtt in dispatcher
* Don't start mqtt until dispatcher is subscribed
* Cleanup
* Shorten package
* Formatting
* Remove unused
* Cleanup
* Rename mqtt to ws on web
* Fix ws mypy
* Fix mypy
* Reformat
* Cleanup if/else chain
* Catch bad set commands
* Start restream before detection
* Add docs explaining how to reduce connections to the camera
* Fix typos for consistency
* Add link to other part of doc for readability
* Update go2rtc to rc3
* Simplify ffmpeg / audio conversions
* Set ffmpeg bin location
* Manually set video as copied
* Run go2rtc with env vars
* Remove manual ffmpeg declaration
* Enable force_audio by default
* Fix test
* Move each camera to a separate card and show per process info
* Install top
* Add support for cpu usage stats
* Use cpu usage stats in debug
* Increase number of runs to ensure good results
* Add ffprobe endpoint
* Get ffprobe for multiple inputs
* Copy ffprobe in output
* Add fps to camera metrics
* Fix lint errors
* Update stats config
* Add ffmpeg pid
* Use grid display so more cameras can take less vertical space
* Fix hanging characters
* Only show the current detector
* Fix bad if statement
* Return full output of ffprobe process
* Return full output of ffprobe process
* Don't specify rtsp_transport
* Make ffprobe button show dialog with output and option to copy
* Adjust ffprobe api to take paths directly
* Add docs for ffprobe api
* Refactor EdgeTPU and CPU model handling to detector submodules.
* Fix selecting the correct detection device type from the config
* Remove detector type check when creating ObjectDetectProcess
* Fixes after rebasing to 0.11
* Add init file to detector folder
* Rename to detect_api
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add unit test for LocalObjectDetector class
* Add configuration for model inputs
Support transforming detection regions to RGB or BGR.
Support specifying the input tensor shape. The tensor shape has a standard format ["BHWC"] when handed to the detector, but can be transformed in the detector to match the model shape using the model input_tensor config.
* Add documentation for new model config parameters
* Add input tensor transpose to LocalObjectDetector
* Change the model input tensor config to use an enumeration
* Updates for model config documentation
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
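As a rough illustration, the model input options described above might be configured along these lines; the option names are a best guess at the keys described in the model config docs, and the path and dimensions are placeholders:

```yaml
model:
  path: /config/model/ssdlite_mobilenet_v2.xml
  width: 300
  height: 300
  # layout of the input tensor expected by the model
  input_tensor: nhwc
  # color order the detection region should be converted to
  input_pixel_format: bgr
```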
* ffmpeg stopped working without correct PNG size
When I used a custom size for my image, ffmpeg stopped working. Frigate only runs normally with a size of 180x180.
* Adjusting language
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Consts for regex
* Add regex for camera username and password
* Redact user:pass from ffmpeg logs
* Redact ffmpeg commands
* Move common function to util
* Add tests
* Formatting
* Remove unused imports
* Fix test
* Add port to test
* Support special characters in passwords
* Add tests for special character handling
* Remove docs about not supporting special characters
* add option enabled for each camera in config
* Simplified If-block and removed wrong Optional
* Update Docs enabling/disabling camera in config
* correct format for option
* Disabling Camera for processes, no config changes
* Describe effects of disabled cam in documentation
* change if-logic, obsolete copy, info disabled cam
* changed color to white, added top padding in disabled camera info
* changed indentation
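A sketch of the per-camera enable flag described above (camera name and stream URL are placeholders):

```yaml
cameras:
  garage:
    # camera stays defined but is not processed until re-enabled
    enabled: false
    ffmpeg:
      inputs:
        - path: rtsp://10.0.0.20:554/stream
          roles:
            - detect
```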
* Pull go2rtc dependency
* Add go2rtc to local services and add to s6
* Add relay controller for go2rtc
* Add restream role
* Add restream role
* Add restream to nginx
* Add camera live source config
* Disable RTMP by default and use restream
* Use go2rtc for camera config
* Fix go2rtc move
* Start restream on frigate start
* Send restream to camera level
* Fix restream
* Make sure jsmpeg works as expected
* Make view respect live size config
* Tweak player options to fit live view
* Adjust VideoPlayer to accept live option which disables irrelevant controls
* Add multiple options from restream live view
* Add base for webrtc option
* Setup specific restream modules
* Make mp4 the default streaming for now
* Expose 8554 for rtsp relay from go2rtc
* Formatting
* Update docs to suggest new restream method.
* Update docs to reflect restream role
* Update docs to reflect restream role
* Add webrtc player
* Improvements to webRTC
* Support webrtc
* Cleanup
* Adjust rtmp test and add restream test
* Fix tests
* Add restream tests
* Add live view docs and show different options
* Small docs tweak
* Support all stream types
* Update to beta 9 of go2rtc
* Formatting
* Make jsmpeg the default
* Support wss if made from https
* Support wss if made from https
* Use onEffect
* Set url outside onEffect
* Fix passed deps
* Update docs about required host mode
* Try memo instead
* Close websocket on changing camera
* Formatting
* Close pc connection
* Set video source to null on cleanup
* Use full path since go2rtc can't see PATH var
* Adjust audio codec to enable browser audio by default
* Cleanup stream creation
* Add restream tests
* Format tests
* Mock requests
* Adjust paths
* Move stream configs to restream
* Remove live source
* Remove live config
* Use live persistence for which view to use on each camera
* Fix live sizes
* Only use jsmpeg sizes for jsmpeg live
* Set max live size
* Remove access of live config
* Add selector for live view source in web view
* Remove RTMP from default list of roles
* Update docs
* Fix tests
* Fix docs for live view modes
* make default undefined to avoid race condition
* Wait until camera source is loaded to avoid race condition
* Fix tests
* Add config to go2rtc
* Work with config
* Set full path for config
* Set to use stun
* Check for mounted file
* Look for frigate-go2rtc
* Update docs to reflect webRTC configuration.
* Add link to go2rtc config
* Update docs to be more clear
* Update docs to be more clear
* Update format
Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
* Update live docs
* Improve bash startup script
* Add option to force audio compatibility
* Formatting
* Fix mapping
* Fix broken link
* Update go2rtc version
* Get go2rtc webui working
* Add support for mse
* Remove mp4 option
* Undo changes to video player
* Update docs for new live view options
* Make separate path for mse
* Remove unused
* Remove mp4 path
* Try to get go2rtc proxy working
* Try to get go2rtc proxy working
* Remove unused callback
* Allow websocket on restream dashboard
* Make mse default stream option
* Fix mse sizing
* don't assume roles is defined
* Remove nginx mapping to go2rtc ui
Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Add field and migration for segment size
* Store the segment size in db
* Add comment
* Add default
* Fix size parsing
* Include segment size in recordings endpoint
* Start adding storage maintainer
* Add storage maintainer and calculate average sizes
* Update comment
* Store segment and hour avg sizes per camera
* Formatting
* Keep track of total segment and hour averages
* Remove unused files
* Cleanup 2 hours of recordings at a time
* Formatting
* Fix bug
* Round segment size
* Cleanup some comments
* Handle case where segments are not deleted on the initial run or only retained segments remain
* Improve cleanup log
* Formatting
* Fix typo and improve logging
* Catch case where no recordings exist for camera
* Specifically define sort
* Handle edge case for cameras that only record part time
* Increase definition of part time recorder
* Remove warning about not supported storage based retention
* Add note about storage based retention to recording docs
* Add tests for storage maintenance calculation and cleanup
* Format tests
* Don't run for a camera with no recording segments
* Get size of file from cache
* Rework camera stats to be more efficient
* Remove total and other inefficiencies
* Rewrite storage cleanup logic to be much more efficient
* Fix existing tests
* Fix bugs from tests
* Add another test
* Improve logging
* Formatting
* Set back correct loop time
* Update name
* Update comment
* Only include segments that have a nonzero size
* Catch case where camera has 0 nonzero segment durations
* Add test to cover zero bandwidth migration case
* Fix test
* Incorrect boolean logic
* Formatting
* Explicitly re-define iterator
* Restructured camera specific documentation
* Make room for manufacture specific docs
* Added initial (more or less) working setup for Annke C800 camera
* Update docs/docs/configuration/camera_specific.md
remove tracking settings from example
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Moved UniFi and Blue Iris camera examples
* headline cleanup
* removed doubled headline in advanced options
* changed headline level for camera specific setup to make headlines show up in TOC
* removed specific optimizations not related to cam
* more generic phrasing
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Indentation, device ID
Indentation issue: "deploy" needs to be at the same level as "image".
Device ID: use "device_ids" instead of "count: 1", cf. https://docs.docker.com/compose/gpu-support/ (see the sketch below).
* Update docs/docs/configuration/hardware_acceleration.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
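A hedged sketch of the corrected compose snippet; the image tag and device id are examples, see the Docker compose GPU support docs for details:

```yaml
services:
  frigate:
    image: blakeblackshear/frigate:stable
    # "deploy" must sit at the same level as "image"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]
              capabilities: [gpu]
```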
* Set sub label on object data if event is in progress
* Include sub_label in dict
* Don't need to set and passively get
* Formatting
* Don't expect event to be valid
* Update docs to reflect that sub label is included
* Adding configuration example for retain modes
Reading the documentation on its own didn't help me, but when I found https://github.com/blakeblackshear/frigate/discussions/2447 I was able to understand how to add this to my configuration. I've added the example given in that discussion to help future readers of the page.
* Update record.md
Added the suggested changes and also added wording beneath the example mentioning that the configuration can be added on a per-camera basis.
Have also built on the example to add object-specific retention timings - not sure if it would be preferred to have it all within one example to keep things simple?
Let me know your thoughts
* Update record.md
Created Object Specific Retention header
* Typo
Co-authored-by: Blake Blackshear <blake@frigate.video>
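A sketch of the kind of retain-mode configuration described, with object-specific retention; the same block can also be set under a specific camera to override the global values, and the retention values here are illustrative, not recommendations:

```yaml
record:
  enabled: true
  retain:
    # keep 3 days of continuous recordings, trimmed to segments with motion
    days: 3
    mode: motion
  events:
    retain:
      # keep event recordings for 10 days by default, people for 30
      default: 10
      mode: active_objects
      objects:
        person: 30
```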
* Send mqtt message when motion is detected
* Use object processing instead of passing mqtt client around
* Cleanup
* Formatting
* add comment
* Make off delay configurable.
* Handle updating each camera based on config off delay
* Formatting
* Update docker-compose.yml
* Fix processing issue
* Update mqtt docs
* Update main config docs
* Make sure multiple True values aren't published for the same motion
* Make sure multiple True values aren't published for the same motion
* Update payload to fit existing HA standard values
* Update docs to fit new values
* Update docs
* Update motion topic
* Use datetime.datetime and remove unused imports
* Cast to int
* Clarify motion detector behavior in docs
* Fix typo
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
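A minimal sketch of the configurable off delay, assuming the option is `mqtt_off_delay` under `motion` (verify the exact key name in the config reference):

```yaml
motion:
  # seconds to wait after the last motion before publishing OFF to the motion topic
  mqtt_off_delay: 30
```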
* Starting working on adding motion toggle
* Add all info to mqtt command
* Send motion to correct funs
* Update mqtt docs
* Fixes for contingencies
* format
* mypy
* Tweak behavior
* Fix motion breaking frames
* Fix bad logic in detect set
* Always set value for motion boxes
* Toggle improve_contrast for cameras via MQTT
* Process parameter to mqtt toggle improve_contrast
* Update mqtt docs for improve_contrast topic
* Spacing
* Add class variable and update in process_frames
* Pass to constructor
* pass by reference mistake
* remove parameter
* remove parameter
* Add options for reordering and hiding cameras selectively
* Add newline at end of camera file
* Make each camera for birdseye togglable as well
* Update names to be less ambiguous
* Update defaults
* Include sidebar change
* Remove birdseye toggle (will be added in separate PR)
* Remove birdseye toggle (will be added in separate PR)
* Remove birdseye toggle (will be added in separate PR)
* Update sidebar to only sort cameras once
* Simplify sorting logic
* Add camera level processing for birdseye
* Add camera level birdseye configuration
* Propagate birdseye from global
* Update docs to show that birdseye is overridable
* Fix incorrect default factory
* Update note to indicate values that can be overridden
* Cleanup config accessing
* Add tests for birdseye config behavior
* Fix mistake on test format
* Update tests
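A sketch of the per-camera birdseye override described above (camera name and values are placeholders):

```yaml
birdseye:
  enabled: true
  mode: objects

cameras:
  back_yard:
    birdseye:
      # override the global mode for just this camera
      mode: continuous
```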
* Add docs to elaborate more on stationary tracking
* Add link to guide on avoiding stationary objects in driveway scenario
* Update wording in reference config
* Small cleanups
* Update with PR comments
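For reference, stationary object tracking is tuned under `detect.stationary`; a hedged sketch with illustrative values:

```yaml
detect:
  stationary:
    # how often (in frames) to run detection on objects considered stationary
    interval: 50
    # number of unchanged frames before an object is considered stationary
    threshold: 50
```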
* Add object ratio config parameters
Issue: #2948
* Add config test for object filter ratios
Issue: #2948
* Address review comments
- Accept `ratio` default
- Rename `bounds` to `box` for consistency
- Add migration for new field
Issue: #2948
* Fix logical errors
- field migrations require default values
- `clipped` referenced the wrong index for region, since it shifted
- missed an inclusion of `ratio` for detections in `process_frames`
- revert naming `o[2]` as `box` since it is out of scope!
This has now been test-run against a video, so I believe the kinks are
worked out.
Issue: #2948
* Update contributing notes for `make`
Issue: #2948
* Fix migration
- Ensure that defaults match between Event and migration script
- Deconflict migration script number (from rebase)
Issue: #2948
* Filter objects out of ratio bounds
Issue: #2948
* Update migration file to 009
Issue: #2948
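A sketch of the new ratio filters on an object filter (threshold values are placeholders):

```yaml
objects:
  filters:
    person:
      # width/height ratio of the bounding box must fall within this range
      min_ratio: 0.5
      max_ratio: 2.0
```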
* Add sub label to model and set / delete funs
* Add migrations for sub label
* Tweaks to API and model
* Show sublabel if available
* Cleanups
* Update docs
* Show person in UI title
* Fix typo and don't fail on no json
* Transfer sub labels for in progress events
* Remove sublabel reset
* Remove person only check
* Make default null
* Update docs and formatting
* Make default null
* Make nullable in migration
* Undo null
* Update model to accept null
* Update migration to accept null
* Don't set to default values
* Remove redundant defaults and update http logic
* Only need a single route
* Enforce 20 character limit in http
* Update docs to mention 20 character limit
* Cleanup
* Separate insert and update to make sure updated values are retained when event ends
* Use insert instead of replace
* Remove redundant if and have should_update_db include clip or snapshot requirement.
* Added object thumbnail def and made camera tracked objects use it.
* Add object snapshot def
* Remove documentation for best.jpg
* Update docs for label thumbnail and snapshot defs
* Improve audio convert guide
* Mention faq in RTMP configuration
* Add example for audio conversion tip
* Change comma to period
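The audio conversion tip boils down to transcoding audio to AAC in the record output args so recordings are browser-playable; a hedged sketch:

```yaml
ffmpeg:
  output_args:
    # copy video as-is, transcode audio to AAC
    record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -c:a aac
```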
* Explain why this is needed
Based on issue #1976 - specify explicitly that these fields can include environment variables to avoid interpretation that environment variables could be used anywhere.
I am participating in #hacktoberfest, so I would appreciate if you could add the 'hacktoberfest-accepted' label (or add #hacktoberfest topic to your repo). Thanks!
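For illustration, the fields in question can reference environment variables like this; the `FRIGATE_`-prefixed variable names and addresses are placeholders:

```yaml
mqtt:
  user: mqtt_user
  # value substituted from the FRIGATE_MQTT_PASSWORD environment variable
  password: "{FRIGATE_MQTT_PASSWORD}"

cameras:
  front:
    ffmpeg:
      inputs:
        - path: "rtsp://admin:{FRIGATE_RTSP_PASSWORD}@10.0.0.10:554/stream"
          roles:
            - detect
```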
Hoped to investigate this with my dev board at some point. In the meantime, added a warning for others who may experience it when upgrading to the new stable release.
* 📝✅🔧 - Make RTMP config global
Fixes #1671
* 📝✅🔧 - Make timestamp style config global
Fixes #1656
* fix test function names
* formatter
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
In the Home Assistant app, the notification timestamp is generated when the push message is received by the app. Delays caused by servers, device load, or network latency/availability will delay those pushes - so in the following case:
1:00 - A dog is detected in the front
1:02 - It stops moving around or leaves view, last notification push sent
1:05 - The phone connects to the network
The user, seeing the alert at 1:05, will see that the notification occurred "a few seconds ago", since the timestamp the app sends to the OS was at 1:05. By adding the `when` parameter, it will instead correctly show that the event was triggered at 1:00.
This is exacerbated by the fact that the default behavior of Android pushes won't wake the device from deep sleep - in order to receive it as a high-priority notification, the additional parameters
```
data:
priority: high
ttl: 0
```
have to be added.
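A hedged sketch of a companion-app notification that sets `when` from the event start time; the notify service name and the trigger payload path are assumptions for illustration:

```yaml
- service: notify.mobile_app_pixel
  data:
    message: "A dog was detected in the front yard"
    data:
      # timestamp of the actual detection, not the delivery time
      when: "{{ trigger.payload_json['after']['start_time'] | int }}"
      # wake the device and deliver immediately
      priority: high
      ttl: 0
```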
* Add FAQ section
Add FAQ section and verbiage about a finding with camera motion sensors in HomeKit.
* Changes made based on inputs
* Fix markdown
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
Refs #1440
Indicate that width and height are only used for the detect role, so streams with other roles are passed through and their resolution does not need to be specified.
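A sketch illustrating that only the detect stream needs a configured resolution (paths and dimensions are placeholders):

```yaml
cameras:
  front:
    ffmpeg:
      inputs:
        - path: rtsp://10.0.0.10:554/sub
          roles:
            - detect
        - path: rtsp://10.0.0.10:554/main
          # passed through as-is; no resolution needed
          roles:
            - record
    detect:
      # resolution of the detect stream only
      width: 1280
      height: 720
```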
It is not possible (unless I'm totally overlooking something) to define the add-on's configuration in the GUI. The user must define the configuration in a frigate.yml file,