* Use zmq for inter process communication
* Use localhost for reply and request
* Use pyobj instead of json; separate requestors are needed for each audio listener
* Cleanup port defining
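A minimal sketch of the ZMQ request/reply pattern the bullets above describe, assuming the pyzmq package; the port number and message contents are placeholders, not the project's actual values. Objects go over the wire with `send_pyobj`/`recv_pyobj` instead of JSON, and each listener would own its own REQ socket.

```python
import zmq

# Hypothetical port; the real port definition lives in the application.
PORT = 5555

# Replier (server side): binds on localhost and answers one request at a time.
context = zmq.Context()
rep = context.socket(zmq.REP)
rep.bind(f"tcp://127.0.0.1:{PORT}")

# Requestor (client side): each audio listener creates its own REQ socket.
req = context.socket(zmq.REQ)
req.connect(f"tcp://127.0.0.1:{PORT}")

# Python objects are pickled on the wire rather than serialized to JSON.
req.send_pyobj({"camera": "front_door", "label": "speech", "score": 0.8})
request = rep.recv_pyobj()
rep.send_pyobj({"ok": True, "received": request})
print(req.recv_pyobj())
```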
* Generate low res low fps previews for recordings viewer
* Make sure previews end on the hour
* Fix durations and decrease keyframe interval to ensure smooth scrubbing
* Ensure minimized resolution is compatible with yuv
* Add ability to configure preview quality
* Fix
* Clean up previews more efficiently
* Use iterator
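The preview bullets above imply an ffmpeg invocation along these lines: low fps, low resolution, a short keyframe interval for smooth scrubbing, and even dimensions so the output stays yuv420p-compatible. The flag values, filenames, and quality numbers below are assumptions for illustration, not the project's actual command.

```python
import subprocess

# Assumed values for illustration only.
preview_fps = 5          # low fps keeps preview files small
keyframe_interval = 5    # one keyframe per second at 5 fps -> smooth scrubbing
height = 180             # low resolution; width is derived below

ffmpeg_cmd = [
    "ffmpeg", "-hide_banner", "-loglevel", "error",
    "-i", "recording_segment.mp4",   # hypothetical input segment
    "-r", str(preview_fps),
    # -2 forces an even width so the result stays compatible with yuv420p
    "-vf", f"scale=-2:{height}",
    "-g", str(keyframe_interval),
    "-pix_fmt", "yuv420p",
    "preview.mp4",
]
subprocess.run(ffmpeg_cmd, check=True)
```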
* optimize motion and velocity estimation
* change recommended fps and fix config validate
* remove unneeded var
* process at most 3 objects per second
* fix test
* zoom in/out in search for lost objects
* predicted box should not be empty
* clean up and update zoom logic
* only zoom if enabled
* more cleanup
* check for valid velocity when zooming
* only try absolute zoom in if obj area has changed
* zoom logic
* don't enqueue lost object zoom if already at limit
* don't disable motion boxes during ptz moves
* velocity threshold based on move coefficients
* fix area zoom logic
* disable debug zoom
* don't process objects if ptz moving
* recalc with exponent
* change exponent
* remove lost object zooming
* increase distance threshold for stationary object
* increase distance threshold constant
* only zoom out if nonzero
* camera name in all debug logging
* add camera name to debug logging
* camera variable name consistency
* update calibration behavior and docs
* docs and better zooming
* more sensible target values
* docs wording
* fix velocity threshold variable
* zooming tweaks and remove iou for current objects
* debug and docs
* get valid velocity
* include zero
* additional debug statements
* add zoom hysteresis
* zoom on initial move if relative
* only update target box if we actually zoom
* merge dev
* use getattr instead of get
* increase distance threshold
* reverse logic
* get_camera_status after preset move to store zoom
* final tweaks and docs
* use constants and catch possible debug exception
* adjust zoom factor exponent
* don't run motion estimation when calling preset
* adjust dimension threshold
* use numpy for velocity estimate calcs
* more numpy conversion
* fix numpy shapes
* numpy zeros dimension
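A minimal sketch of the numpy-based velocity estimate referenced above: velocity is taken as the mean frame-to-frame displacement of box centroids. The function name, history length, and box layout are illustrative assumptions, not the project's implementation.

```python
import numpy as np

def estimate_velocity(box_history: np.ndarray) -> np.ndarray:
    """Estimate per-frame velocity from an (N, 4) array of [x1, y1, x2, y2] boxes."""
    if len(box_history) < 2:
        return np.zeros(2)
    # centroids of each box: shape (N, 2)
    centroids = np.column_stack((
        (box_history[:, 0] + box_history[:, 2]) / 2,
        (box_history[:, 1] + box_history[:, 3]) / 2,
    ))
    # mean frame-to-frame displacement in pixels
    return np.diff(centroids, axis=0).mean(axis=0)

history = np.array([[100, 100, 150, 200], [110, 102, 160, 202], [121, 104, 171, 204]])
print(estimate_velocity(history))  # roughly [10.5, 2.0]
```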
* more zoom out conditions
* fix velocity bug
* ensure init has been called in debug view
* ensure onvif init if enabling by mqtt
* change default hysteresis values
* recalc relative zoom value
* zoom out value
* try to zoom when object isn't moving
* try zoom when tracked object is not moving
* don't try to zoom every time
* negate zoom out condition when needed
* hysteresis constants for absolute zooming
* update zoom conditions
* don't recalc target box on zoom only
* zoom out if above area threshold
* don't print zooming debug for stationary obj
* revamp zooming to use area moving average
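A rough sketch of the moving-average-plus-hysteresis idea behind the zooming bullets above: zoom decisions key off a smoothed object area, and the in/out thresholds are separated so the camera does not oscillate. All constants and the returned action names are assumptions for illustration only.

```python
# Illustrative constants, not the project's actual values.
ZOOM_IN_HYSTERESIS = 0.1    # object must fall well below target before zooming in
ZOOM_OUT_HYSTERESIS = 0.2   # and grow well above it before zooming out
TARGET_AREA_RATIO = 0.35    # desired object area as a fraction of the frame

def update_zoom(area_history: list[float], frame_area: float) -> str:
    # moving average over the last few measurements smooths out jitter
    avg_area = sum(area_history[-10:]) / min(len(area_history), 10)
    ratio = avg_area / frame_area
    if ratio < TARGET_AREA_RATIO * (1 - ZOOM_IN_HYSTERESIS):
        return "zoom_in"
    if ratio > TARGET_AREA_RATIO * (1 + ZOOM_OUT_HYSTERESIS):
        return "zoom_out"
    return "hold"
```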
* zooming tweaks and expose property
* limit zoom with max target box
* use calibration to determine zoom levels
* zoom logic fix
* docs
* add tapo c200 camera
* fix initial absolute zoom
* small zoom logic fix
* better invalid velocity checks
* fix test
* really fix test this time
* don't zoom if camera doesn't support it
* basic zooming
* make zooming configurable
* zooming docs
* optional zooming in camera status
* Use absolute instead of relative zooming
* increase edge threshold
* zoom considering object area
* bugfixes
* catch onvif zooming errors
* relative zooming option for dahua/amcrest cams
* docs
* docs
* don't make small movements
* remove old logger statement
* fix small movements
* use enum in config for zooming
* fix formatting
* empty move queue first
* clear tracked object before waiting for stop
* use velocity estimation for movements
* docs updates
* add tests
* typos
* recalc every 50 moves
* adjust zoom based on estimate box if calibrated
* tweaks for fast objects and large movements
* use real time for calibration and add info logging
* docs updates
* remove area scale
* Add example video to docs
* make zooming header font size the same as the others
* log an error if a ptz doesn't report a MoveStatus
* debug logging for onvif service capabilities
* ensure camera supports ONVIF MoveStatus
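A sketch of the MoveStatus capability check described above, assuming the python-onvif-zeep client; the connection details are hypothetical and the exact capability handling here is an assumption rather than the project's code.

```python
from onvif import ONVIFCamera  # assumes the python-onvif-zeep package

# Hypothetical connection details for illustration.
camera = ONVIFCamera("192.168.1.10", 80, "user", "password")
ptz = camera.create_ptz_service()

# Some PTZ cameras never report MoveStatus; autotracking needs it to know
# when a move has finished, so log an error and bail out if it is missing.
capabilities = ptz.GetServiceCapabilities()
if not getattr(capabilities, "MoveStatus", False):
    print("PTZ does not report MoveStatus; disabling autotracking")
```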
* Add args to ignore audio and only process keyframes
* Add timelapse args to config
* Update docs
* Formatting
* Fix spacing
* Fix formatting
* add example of math for pts
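For context, the pts math for a timelapse boils down to scaling each frame's presentation timestamp by the inverse of the desired speed-up. The numbers below are only an illustration, not the documented defaults.

```python
# One hour of recording played back in one minute is a 60x speed-up,
# so each frame's presentation timestamp is multiplied by 1/60.
real_duration = 3600          # seconds of source footage
target_duration = 60          # seconds the timelapse should last
speed_factor = real_duration / target_duration        # 60.0
setpts_factor = 1 / speed_factor                       # ~0.0167
timelapse_filter = f"setpts={setpts_factor:.4f}*PTS"   # -> "setpts=0.0167*PTS"
print(timelapse_filter)
```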
* add note about network bandwidth permissions
* Update default network interfaces
* Set default network interfaces to empty
* Don't read interfaces if none are set
* Formatting
* Add stderr output
* Run ffmpeg sub process & video_properties as async
* Run recording cleanup in the main process
* More cleanup
* Use inter process communication to write recordings into the DB
* Formatting
* Non-Jetson changes
Required for later commits:
- Allow base image to be overridden (and don't assume its WORKDIR)
- Ensure python3.9
- Map hwaccel decode presets as strings instead of lists
Not required:
- Fix existing documentation
- Simplify hwaccel scale logic
* Prepare for multi-arch tensorrt build
* Add tensorrt images for Jetson boards
* Add Jetson ffmpeg hwaccel
* Update docs
* Add CODEOWNERS
* CI
* Change default model from yolov7-tiny-416 to yolov7-320
In my experience the tiny models perform markedly worse without being
much faster
* fixup! Update docs
* Add auto configuration for height, width and fps in detect role
* Add auto-configuration for detect width, height, and fps for input roles with detect in the CameraConfig class in config.py
* Refactor code to retrieve video properties from input stream in CameraConfig class and add optional parameter to retrieve video duration in get_video_properties function
* format
* Set default detect dimensions to 1280x720 and update DetectConfig to use the defaults
* Revert "Set default detect dimensions to 1280x720 and update DetectConfig to use the defaults"
This reverts commit a1aed0414d.
* Add default detect dimensions if autoconfiguration failed and log a warning message
* fix warn message spelling on frigate/config.py
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Ensure detect height and width are not None before using them in camera configuration
* docs: initial commit
* rename streamInfo to stream_info
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs
* handle case when get_video_properties returns a 0x0 dimension
* Set detect resolution based on stream properties if available, else apply default values
* Update FrigateConfig to set default values for stream_info if resolution detection fails
* Update camera detection dimensions based on stream information if available
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
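The detect auto-configuration described above probes the stream and falls back to defaults when the probe fails or reports 0x0. Here is a sketch of that flow using ffprobe; the helper name, stream URL, and fallback values are assumptions, not the project's actual helpers.

```python
import json
import subprocess

DEFAULT_DETECT = {"width": 1280, "height": 720, "fps": 5}  # assumed fallback values

def probe_stream(url: str) -> dict:
    """Return width/height/fps for the first video stream, or {} on failure."""
    cmd = [
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-show_streams", "-select_streams", "v:0", url,
    ]
    try:
        output = subprocess.run(cmd, capture_output=True, check=True).stdout
        stream = json.loads(output)["streams"][0]
        num, _, den = stream.get("avg_frame_rate", "0/1").partition("/")
        fps = int(num) / int(den) if int(den) else 0
        return {"width": stream.get("width", 0), "height": stream.get("height", 0), "fps": fps}
    except Exception:
        return {}

info = probe_stream("rtsp://camera/stream")  # hypothetical stream URL
if not info or info["width"] == 0 or info["height"] == 0:
    # probe failed or reported 0x0: warn and apply the defaults instead
    print("could not detect stream resolution, using defaults")
    info = DEFAULT_DETECT
```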
* Basic functionality
* Threaded motion estimator
* Revert "Threaded motion estimator"
This reverts commit 3171801607.
* Don't detect motion when ptz is moving
* fix motion logic
* fix mypy error
* Add threaded queue for movement for slower ptzs
* Move queues per camera
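A minimal sketch of the per-camera threaded move queue mentioned above: one queue and one worker per camera so a slow PTZ never blocks the others. Camera names, the move tuple format, and the worker behavior are illustrative assumptions.

```python
import queue
import threading

move_queues: dict[str, queue.Queue] = {}

def move_worker(camera_name: str, q: queue.Queue, stop_event: threading.Event):
    while not stop_event.is_set():
        try:
            pan, tilt, zoom = q.get(timeout=1)
        except queue.Empty:
            continue
        # issue the (blocking) ONVIF move here, then wait for the PTZ to stop
        print(f"{camera_name}: moving pan={pan} tilt={tilt} zoom={zoom}")
        q.task_done()

stop_event = threading.Event()
for name in ("front_door", "backyard"):   # hypothetical camera names
    move_queues[name] = queue.Queue()
    threading.Thread(
        target=move_worker, args=(name, move_queues[name], stop_event),
        name=f"ptz_move:{name}", daemon=True,
    ).start()

move_queues["front_door"].put((0.2, -0.1, 0.0))
move_queues["front_door"].join()  # wait for the queued move to be handled
```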
* Move autotracker start to app.py
* iou value for tracked object
* mqtt callback
* tracked object should be initially motionless
* only draw thicker box if autotracking is enabled
* Init if enabled when initially disabled in config
* Fix init
* Thread names
* Always use motion estimator
* docs
* clarify fov support
* remove size ratio
* use mp event instead of value for ptz status
* update autotrack at half fps
* fix merge conflict
* fix event type for mypy
* clean up
* Clean up
* remove unused code
* merge conflict fix
* docs: update link to object_detectors page
* Update docs/docs/configuration/autotracking.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* clarify wording
* pass actual instances directly
* default return preset
* fix type
* Error message when onvif init fails
* disable autotracking if onvif init fails
* disable autotracking if onvif init fails
* ptz module
* verify required_zones in config
* update util after dev merge
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Initial audio classification model implementation
* fix mypy
* Keep audio labelmap local
* Cleanup
* Start adding config for audio
* Add the detector
* Add audio detection process keypoints
* Build out base config
* Load labelmap correctly
* Fix config bugs
* Start audio process
* Fix startup issues
* Try to cleanup restarting
* Add ffmpeg input args
* Get audio detection working
* Save event to db
* End events if not heard for 30 seconds
* Use not heard config
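A sketch of the "not heard" timeout above: an audio event ends once its label has not been detected for the configured number of seconds. The 30-second value and the function names are illustrative; the real code would also close the corresponding database event.

```python
import time

not_heard_seconds = 30              # assumed config value
last_heard: dict[str, float] = {}   # label -> timestamp of last detection

def on_detection(label: str) -> None:
    last_heard[label] = time.time()

def expire_events(now: float) -> list[str]:
    ended = [
        label for label, ts in last_heard.items()
        if now - ts > not_heard_seconds
    ]
    for label in ended:
        del last_heard[label]  # the real code would also end the DB event here
    return ended

on_detection("speech")
print(expire_events(time.time() + 31))  # -> ['speech']
```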
* Stop ffmpeg when shutting down
* Fixes
* End events correctly
* Use api instead of event queue to save audio events
* Get events working
* Close threads when stop event is sent
* remove unused
* Only start audio process if at least one camera is enabled
* Add const for float
* Cleanup labelmap
* Add audio icon in frontend
* Add ability to toggle audio with mqtt
* Set initial audio value
* Fix audio enabling
* Close logpipe
* Isort
* Formatting
* Fix web tests
* Fix web tests
* Handle cases where args are a string
* Remove log
* Cleanup process close
* Use correct field
* Simplify if statement
* Use var for localhost
* Add audio detectors docs
* Add restream docs to mention audio detection
* Add full config docs
* Fix links to other docs
---------
Co-authored-by: Jason Hunter <hunterjm@gmail.com>
* use a different method for blur and contrast to reduce CPU
* blur with radius instead
* use faster interpolation for motion
* improve contrast based on averages
* increase default threshold to 30
* ensure mask is applied after contrast improvement
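An illustrative OpenCV sketch of the cheaper blur-and-contrast approach described above: a small-radius Gaussian blur followed by a contrast stretch around the frame's own average, with the motion mask applied only afterwards. The kernel size and scaling constants are assumptions, not the project's exact method.

```python
import cv2
import numpy as np

def preprocess(gray: np.ndarray, blur_radius: int = 3) -> np.ndarray:
    """Cheap blur + contrast stretch for motion detection (illustrative only)."""
    # a small blur radius is much cheaper than large-kernel filtering
    blurred = cv2.GaussianBlur(gray, (blur_radius, blur_radius), 0)
    # stretch contrast around the frame's own average intensity
    mean = blurred.mean()
    return cv2.convertScaleAbs(blurred, alpha=1.5, beta=-0.5 * mean)

frame = np.random.randint(0, 255, (180, 320), dtype=np.uint8)
processed = preprocess(frame)
# the motion mask must be applied to `processed`, i.e. after contrast improvement
```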
* update opencv
* update benchmark script
* configurable ffmpeg timeout
* configurable ffmpeg healthcheck interval
rename timeout to healthcheck_interval
only grab config value once
* configurable ffmpeg retry interval
rename healthcheck_interval to retry_interval
* add retry_interval to docs
- update retry_interval text in config.py
* refactor existing motion detector
* implement and use cnt bgsub
* pass fps to motion detector
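A sketch of the CNT background subtractor usage implied above, which needs the detection fps to size its stability parameters. This requires opencv-contrib-python, and the parameter values and learning rate are assumptions for illustration.

```python
import cv2  # requires opencv-contrib-python for the bgsegm module

fps = 5  # detection fps passed into the motion detector

# Assumed parameterization: a pixel must be stable for roughly a second before
# joining the background, and can persist in the model for up to 15 seconds.
bgsub = cv2.bgsegm.createBackgroundSubtractorCNT(
    minPixelStability=fps,
    useHistory=True,
    maxPixelStability=15 * fps,
    isParallel=True,
)

# per-frame usage: a small learning rate keeps the background adapting slowly
# fg_mask = bgsub.apply(gray_frame, learningRate=0.02)
```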
* create a simplified motion detector
* lightning detection
* update default motion config
* lint imports
* use estimated boxes for regions
* use improved motion detector
* update test
* use a different strategy for clustering motion and object boxes
* increase alpha during calibration
* simplify object consolidation
* add some reasonable constraints to the estimated box
* adjust cluster boundary to 10%
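A simplified sketch of the clustering strategy above: each motion/object box is expanded by a 10% boundary, and boxes whose expanded footprints overlap are greedily merged into one region. The greedy single pass is an illustrative simplification, not the project's algorithm.

```python
def boxes_overlap(a, b) -> bool:
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def cluster_boxes(boxes: list[tuple], boundary: float = 0.10) -> list[list]:
    """Greedily merge boxes whose 10%-expanded boundaries overlap (illustrative)."""
    expanded = []
    for x1, y1, x2, y2 in boxes:
        dx, dy = (x2 - x1) * boundary, (y2 - y1) * boundary
        expanded.append([x1 - dx, y1 - dy, x2 + dx, y2 + dy])
    clusters: list[list] = []
    for box in expanded:
        for cluster in clusters:
            if boxes_overlap(box, cluster):
                cluster[0] = min(cluster[0], box[0])
                cluster[1] = min(cluster[1], box[1])
                cluster[2] = max(cluster[2], box[2])
                cluster[3] = max(cluster[3], box[3])
                break
        else:
            clusters.append(list(box))
    return clusters

print(cluster_boxes([(0, 0, 100, 100), (105, 0, 200, 100), (400, 400, 450, 450)]))
# -> two clusters: the first pair merges, the far box stays separate
```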
* refactor
* add disabled debug code
* fix variable scope
* Add option for network bandwidth and only calculate if enabled
* Don't show network bandwidth in system stats page if not enabled
* Formatting
* Hide other rows as well
* Add docs
* Add config options for AMD and Intel GPU stats
* Fix stats access
* Update docs
* Use correct bool syntax
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
---------
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Objects need to be in zones multiple times to be considered present in the zone
* Add a field to configure inertia per zone
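A minimal sketch of the zone inertia behavior above: an object only counts as present in a zone once it has been inside for the configured number of consecutive frames, which filters out boxes that momentarily clip the zone. The default value and class shape are assumptions for illustration.

```python
ZONE_INERTIA_DEFAULT = 3  # assumed default number of consecutive frames

class ZonePresence:
    def __init__(self, inertia: int = ZONE_INERTIA_DEFAULT):
        self.inertia = inertia
        self.counters: dict[str, int] = {}  # object id -> consecutive frames inside

    def update(self, object_id: str, inside: bool) -> bool:
        if inside:
            self.counters[object_id] = self.counters.get(object_id, 0) + 1
        else:
            self.counters[object_id] = 0
        return self.counters[object_id] >= self.inertia

zone = ZonePresence()
print([zone.update("car-1", True) for _ in range(3)])  # [False, False, True]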
* Formatting
* Use correct default method
* Clarify zone presence behavior
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
---------
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Add function to get physical interfaces for bandwidth calculation in get_bandwidth_stats() function
* Add telemetry configuration option for enabled network interfaces, with default values for monitoring bandwidth stats for camera ffmpeg processes, go2rtc, and object detectors. Also add support for FrigateConfig in set_bandwidth_stats function to get bandwidth stats for specified network interfaces
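A sketch of the interface-filtering side of the bandwidth telemetry above: physical interfaces are distinguished from virtual ones via sysfs, and per-interface counters are read for the enabled set. This uses psutil as an assumed dependency and only illustrates per-interface totals; the project's actual stats are gathered per process (ffmpeg, go2rtc, detectors).

```python
import os
import psutil  # assumed dependency for the per-interface counters

def get_physical_interfaces() -> list[str]:
    """List interfaces whose sysfs entry is not under the virtual device tree (Linux)."""
    return [
        name for name in os.listdir("/sys/class/net")
        if "/virtual/" not in os.path.realpath(f"/sys/class/net/{name}")
    ]

def get_bandwidth_stats(enabled_interfaces: list[str]) -> dict[str, dict[str, int]]:
    counters = psutil.net_io_counters(pernic=True)
    return {
        name: {"bytes_sent": c.bytes_sent, "bytes_recv": c.bytes_recv}
        for name, c in counters.items()
        if name in enabled_interfaces
    }

print(get_bandwidth_stats(get_physical_interfaces()))
```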
* Add isort and ruff linter
Both linters are pretty common among modern python code bases.
The isort tool provides stable sorting and grouping, as well as pruning
of unused imports.
Ruff is a modern linter, that is very fast due to being written in rust.
It can detect many common issues in a python codebase.
Removes the pylint dev requirement, since ruff replaces it.
* treewide: fix issues detected by ruff
* treewide: fix bare except clauses
* .devcontainer: Set up isort
* treewide: optimize imports
* treewide: apply black
* treewide: make regex patterns raw strings
This is necessary for escape sequences to be properly recognized.
* Add option to sort cameras inside Birdseye
* Make default order to be sorted alphabetically
* Add docs for sorting cameras
* Update index.md for cameras
* Remove irrelevant comments
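A small sketch of the Birdseye ordering described above: cameras sort by an explicit per-camera order value, falling back to alphabetical order by name. The `order` field name, default value, and config shape are assumptions for illustration.

```python
cameras = {
    "porch": {"birdseye": {"order": 2}},
    "driveway": {"birdseye": {}},
    "garage": {"birdseye": {"order": 1}},
}

def birdseye_sort_key(item):
    name, config = item
    # cameras without an explicit order default to 0 and then sort alphabetically
    return (config.get("birdseye", {}).get("order", 0), name)

print([name for name, _ in sorted(cameras.items(), key=birdseye_sort_key)])
# -> ['driveway', 'garage', 'porch']
```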