* Add camera group config
* Add saving of camera group selection
* Implement camera groups in config and live view
* Fix warnings
* Add tooltips to camera group items on desktop
* Add camera groups to the filters for events
* Fix tooltips and group selection
* Cleanup
* Implement context menu for batch operations and implement apis
* reduce preview calculations on rerenders
* Add button to mark above items as reviewed
* Use context menu for mark as reviewed
* Cleanup
Customizing only the detector model path results in the warning
"Customizing more than a detector model path is unsupported." because
`{} is not {}` evaluates to True
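A minimal illustration of why that identity check misfires; the variable names are made up for the example:

```python
# Two freshly constructed dicts are never the same object, so an identity
# comparison reports a "customization" even when the configs are equal.
merged_model_config = {}   # stand-in for the merged model config
default_model_config = {}  # stand-in for the defaults

print(merged_model_config is not default_model_config)  # True  -> warning fires
print(merged_model_config != default_model_config)      # False -> correct check
```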
* Use a rolling average of iou to determine if an object is no longer stationary
* Use different box variation to designate when an object is stationary on debug
* In progress
* Use average of boxes instead of average of iou
* Update frigate/track/norfair_tracker.py
Co-authored-by: Blake Blackshear <blake@frigate.video>
---------
Co-authored-by: Blake Blackshear <blake@frigate.video>
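A rough sketch of the box-averaging idea from the commits above, assuming (x1, y1, x2, y2) boxes; the IoU helper and threshold are illustrative:

```python
import numpy as np

def intersection_over_union(a, b) -> float:
    """IoU of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def appears_stationary(recent_boxes, current_box, iou_threshold=0.6) -> bool:
    """Compare the current box to the average of recent boxes instead of a
    single previous frame, which smooths out per-frame jitter."""
    avg_box = np.mean(np.array(recent_boxes), axis=0)
    return intersection_over_union(avg_box, current_box) > iou_threshold
```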
* Use zoom space id in Onvif RelativeMove setup (#9859)
* use zoom space id in onvif relativemove setup
* better handle cases when zooming is disabled
* Fix birdseye camera comparison (#9887)
* Format (#9889)
* use first onvif profile with ptz config
* Use zoom space id in Onvif RelativeMove setup (#9859)
* use zoom space id in onvif relativemove setup
* better handle cases when zooming is disabled
* use first onvif profile with ptz config
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add ui for events
* Display data for review items
* Use preview thumbnails
* Implement paging
* Show icons for what was detected
* Show progress bar on preview thumbnail
* Hide the overlays on hover and update reviewed status
* Dim previews that have been reviewed
* Show audio icons
* Cleanup preview thumb player
* initial implementation of review timeline
* Use timeout for hover playback
* Break apart mobile and desktop views
* Show icons for sub labels
* autoplay first video on mobile
* Only show the last 24 hours by default
* Rework scrolling to fix nested scrolling
* Final scroll cleanups
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Add review to database
* Create main manager for review segments
* Upsert and maintain review segments
* Update logic for adding new segments
* Add api
* Support deleting review segments on recording cleanup
* Add field for alert labels
* Formatting
* Logic fixes
* Save 16:9 thumbnail for review segment
* Ensure that crop is 16:9
* Fix non detected objects being added
* Only include true positives
* Add sub labels to data
* Add config pub / sub pattern
* remove recording from feature metrics
* remove audio and feature metrics
* Check for updates from all cameras
* remove birdseye from camera metrics
* remove motion and detection camera metrics
* Ensure that all processes are stopped
* Stop communicators
* Detections
* Cleanup video output queue
* Use select for time sensitive polls
* Use ipc instead of tcp
Adds a direct dependency on markupsafe, instead of relying on the
implicit dependency via Flask.
This is in preparation for Flask 3.0 support, which will drop compat for
importing escape indirectly.
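Concretely, the import moves from Flask's re-export to MarkupSafe itself:

```python
# Before (works on older Flask only because Flask re-exported it):
# from flask import escape

# After (direct dependency):
from markupsafe import escape

print(escape("<b>unsafe</b>"))  # &lt;b&gt;unsafe&lt;/b&gt;
```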
* Use zmq for inter process communication
* Use localhost for reply and request
* Use pyobj instead of json; use separate requestors for each audio listener
* Cleanup port defining
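A minimal sketch of the request/reply pattern these commits describe, using pyzmq's pyobj helpers over a localhost endpoint; the port and payload shape are placeholders:

```python
import zmq

def run_replier():
    """One process binds a REP socket and answers requests."""
    sock = zmq.Context().socket(zmq.REP)
    sock.bind("tcp://127.0.0.1:5555")  # placeholder port
    while True:
        request = sock.recv_pyobj()      # pickled python objects, not json
        sock.send_pyobj({"echo": request})

def make_request(payload):
    """Each listener uses its own REQ socket (requestors aren't shared)."""
    sock = zmq.Context().socket(zmq.REQ)
    sock.connect("tcp://127.0.0.1:5555")
    sock.send_pyobj(payload)
    return sock.recv_pyobj()
```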
* Don't show gif until event is over and fix aspect
* Be more efficient about updating events
* ensure previews are sorted
* Don't show live view when window is not visible
* Move debug camera to separate view
* Improve jpg loading
* Ensure still is updated on player live finish
* Don't reload when window not visible
* Only disconnect instead of full remove
* Use start time instead of event over to determine gif
* ROCm AMD/GPU based build and detector, WIP
* detectors/rocm: separate yolov8 postprocessing into own function; fix box scaling; use cv2.dnn.blobForImage for preprocessing; assert on required model parameters
* AMD/ROCm: add a couple more ultralytics models; comments
* docker/rocm: make imported model files readable by all
* docker/rocm: readme about running on AMD GPUs
* docker/rocm: updated README
* docker/rocm: updated README
* docker/rocm: updated README
* detectors/rocm: separated preprocessing functions into yolo_utils.py
* detector/plugins: added onnx cpu plugin
* docker/rocm: updated container with limited label sets
* example detectors view
* docker/rocm: updated README.md
* docker/rocm: update README.md
* docker/rocm: do not set HSA_OVERRIDE_GFX_VERSION at all for the general version as the empty value broke rocm
* detectors: simplified/optimized yolov8_postprocess
* detector/yolo_utils: indentation, remove unused variable
* detectors/rocm: default option to conserve cpu usage at the expense of latency
* detectors/yolo_utils: use nms to prefilter overlapping boxes if too many detected
* detectors/edgetpu_tfl: add support for yolov8
* util/download_models: script to download yolov8 model files
* docker/main: add download-models overlay into s6 startup
* detectors/rocm: assume models are in /config/model_cache/yolov8/
* docker/rocm: compile onnx files into mxr files at startup
* switch model download into bash script
* detectors/rocm: automatically override HSA_OVERRIDE_GFX_VERSION for a couple of known chipsets
* docs: rocm detector first notes
* typos
* describe builds (harakas temporary)
* docker/rocm: also build a version for gfx1100
* docker/rocm: use cp instead of tar
* docker/rocm: remove README as it is now in detector config
* frigate/detectors: renamed yolov8_preprocess->preprocess, pass input tensor element type
* docker/main: use newer openvino (2023.3.0)
* detectors: implement class aggregation
* update yolov8 model
* add openvino/yolov8 support for label aggregation
* docker: remove pointless s6/timeout-up files
* Revert "detectors: implement class aggregation"
This reverts commit dcfe6bbf6f.
* detectors/openvino: remove class aggregation
* detectors: increase yolov8 postprocessing score threshold to 0.5
* docker/rocm: separate rocm distributed files into its own build stage
* Update object_detectors.md
* updated CODEOWNERS file for rocm
* updated build names for documentation
* Revert "docker/main: use newer openvino (2023.3.0)"
This reverts commit dee95de908.
* reverted openvino detector
* reverted edgetpu detector
* scratched rocm docs from any mention of edgetpu or openvino
* Update docs/docs/configuration/object_detectors.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* renamed frigate.detectors.yolo_utils.py -> frigate.detectors.util.py
* clarified rocm example performance
* Improved wording and clarified text
* Mentioned rocm detector for AMD GPUs
* applied ruff formatting
* applied ruff suggested fixes
* docker/rocm: fix missing argument resulting in larger docker image sizes
* docs/configuration/object_detectors: fix links to yolov8 release files
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Break out live page
* Improving layouts and add chip component
* Improve default camera player sizing
* Improve live updating
* Cleanup and fit figma
* Use fixed height
* Masonry layout
* Fix stuff
* Don't force heights
* Adjust scaling
* Cleanup
* remove sidebar (#9731)
* remove sidebar
* keep sidebar on mobile for now and add icons
* Fix revalidation
* Cleanup
* Cleanup width
* Add chips for activity on cameras
* Remove dashboard from header
* Use Inter font (#9735)
* Show still image when no activity is occurring
* remove unused search params
* add playing check for webrtc
* Don't use grid at all for single column
* Fix height on mobile
* a few style updates to better match figma (#9745)
* Remove active objects when they become stationary
* Move to sidebar only and make settings separate component
* Fix layout
* Animate visibility of chips
* Sidebar is full screen
* Fix tall aspect ratio cameras
* Fix complicated aspect logic
* remove
* Adjust thumbnail aspect and add text
* margin on single column layout
* Smaller event thumb text
* Simplify basic image view
* Only show the red dot when camera is recording
* Improve typing for camera toggles
* animate chips with react-transition-group (#9763)
* don't flash when going to still image
* revalidate
* tooltips and active tracking outline (#9766)
* tooltips
* fix tooltip provider and add active tracking outline
* remove unused icon
* remove figma comment
* Get live mode working for jsmpeg
* add small gradient below timeago on event thumbnails (#9767)
* Create live mode hook and make sure jsmpeg can be used
* Enforce env var
* Use print
* Remove unstable
* Add tooltips to thumbnails
* Put back vite
* Format
* Update web/src/components/player/JSMpegPlayer.tsx
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
Co-authored-by: Blake Blackshear <blake@frigate.video>
* Ensure viewport is always full screen
* Protect against hour with no cards and ensure data is consistent
* Reduce grouped up image refreshes
* Include current hour and fix scrubbing bugginess
* Scroll initially selected timeline in to view
* Expand timeline class type
* Use poster image for preview on video player instead of using separate image view
* Fix available streaming modes
* Increase timing for grouping timeline items
* Fix audio activity listener
* Fix player not switching views correctly
* Use player time to convert to timeline time
* Update sub labels for previous timeline items
* Show mini timeline bar for non selected items
* Rewrite desktop timeline to use separate dynamic video player component
* Extend improvements to mobile as well
* Improve time formatting
* Fix scroll
* Fix no preview case
* Mobile fixes
* Audio toggle fixes
* More fixes for mobile
* Improve scaling of graph motion activity
* Add keyboard shortcut hook and support shortcuts for playback page
* Fix sizing of dialog
* Improve height scaling of dialog
* simplify and fix layout system for timeline
* Fix timeline items not working
* Implement basic Frigate+ submitting from timeline
* Add timeline graph component
* Add more custom colors and improve graph
* Add api and data
* Fix data sorting
* Add graph to timeline
* Only show timeline for selected hour
* Make data full range
* Add filter popover
* Add api filter hook and use UI with filtering
* Get history filtering working for cameras and labels
* Allow filtering on detail level
* Save timeline entries for api events
* reset
* fix width
* Add support for review grid
* Cleanup reloading on focus
* Adjust timeline api to include metadata and before
* Be more efficient about getting info
* Adjust to new data format
* Cleanup types
* Cleanup text
* Transition to history
* Cleanup
* remove old web implementations
* Cleanup
* Generate low res low fps previews for recordings viewer
* Make sure previews end on the hour
* Fix durations and decrease keyframe interval to ensure smooth scrubbing
* Ensure minimized resolution is compatible with yuv
* Add ability to configure preview quality
* Fix
* Clean up previews more efficiently
* Use iterator
* support for other yolov models and config checks
* apply code formatting
* Information about core mask and inference speed
* update rknn postprocess and remove params
* update model selection
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* support rknn on all socs
* apply changes from review and fix post process bug
* apply code formatting
* update tip in object_detectors docs
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* support for other yolov models and config checks
* apply code formatting
* Information about core mask and inference speed
* update rknn postprocess and remove params
* update model selection
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add endpoint to restart Frigate
The only means of restarting Frigate remotely is to issue
a restart topic on the server's websocket. It's
convenient to also expose this capability via an HTTP endpoint.
* Add new section to API docs
* Remove extra line
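A hypothetical sketch of what such an endpoint could look like; the route name, blueprint, and restart helper are illustrative, not the project's actual API:

```python
from flask import Blueprint, jsonify

bp = Blueprint("api", __name__)

def restart_frigate():
    # Placeholder for whatever the existing websocket "restart" topic triggers.
    raise NotImplementedError

@bp.route("/restart", methods=["POST"])
def restart():
    try:
        restart_frigate()
    except Exception as e:
        return jsonify({"success": False, "message": f"Unable to restart: {e}"}), 500
    return jsonify({"success": True, "message": "Restart command sent."}), 200
```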
* prevent estimate clipping when autotracking
* use unclipped estimate in distance function only
* remove autotracking velocity changes
* publish on init
* Initial draft for filtering Frigate+ submits in frontend
* Hide filter when Frigate+ is not enabled
* Update http.py
* Revert "Update http.py"
This reverts commit fa292682d6.
* optimize motion and velocity estimation
* change recommended fps and fix config validate
* remove unneeded var
* process at most 3 objects per second
* fix test
* add `--validate-config` option for CI config validation
Signed-off-by: Russell Troxel <russell.troxel@segment.com>
* Fix Lint
Signed-off-by: Russell Troxel <russell.troxel@segment.com>
* Add docs & test live
Signed-off-by: Russell Troxel <russell.troxel@segment.com>
* Update docs/docs/configuration/advanced.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Fix Lint
Signed-off-by: Russell Troxel <russell@troxel.io>
---------
Signed-off-by: Russell Troxel <russell.troxel@segment.com>
Signed-off-by: Russell Troxel <russell@troxel.io>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* zoom in/out in search for lost objects
* predicted box should not be empty
* clean up and update zoom logic
* only zoom if enabled
* more cleanup
* check for valid velocity when zooming
* only try absolute zoom in if obj area has changed
* zoom logic
* don't enqueue lost object zoom if already at limit
* don't disable motion boxes during ptz moves
* velocity threshold based on move coefficients
* fix area zoom logic
* disable debug zoom
* don't process objects if ptz moving
* recalc with exponent
* change exponent
* remove lost object zooming
* increase distance threshold for stationary object
* increase distance threshold constant
* only zoom out if nonzero
* camera name in all debug logging
* add camera name to debug logging
* camera variable name consistency
* update calibration behavior and docs
* docs and better zooming
* more sensible target values
* docs wording
* fix velocity threshold variable
* zooming tweaks and remove iou for current objects
* debug and docs
* get valid velocity
* include zero
* additional debug statements
* add zoom hysteresis
* zoom on initial move if relative
* only update target box if we actually zoom
* merge dev
* use getattr instead of get
* increase distance threshold
* reverse logic
* get_camera_status after preset move to store zoom
* final tweaks and docs
* use constants and catch possible debug exception
* adjust zoom factor exponent
* don't run motion estimation when calling preset
* adjust dimension threshold
* use numpy for velocity estimate calcs
* more numpy conversion
* fix numpy shapes
* numpy zeros dimension
* more zoom out conditions
* fix velocity bug
* ensure init has been called in debug view
* ensure onvif init if enabling by mqtt
* change default hysteresis values
* recalc relative zoom value
* zoom out value
* try to zoom when object isn't moving
* try zoom when tracked object is not moving
* don't try to zoom every time
* negate zoom out condition when needed
* hysteresis constants for absolute zooming
* update zoom conditions
* don't recalc target box on zoom only
* zoom out if above area threshold
* don't print zooming debug for stationary obj
* revamp zooming to use area moving average
* zooming tweaks and expose property
* limit zoom with max target box
* use calibration to determine zoom levels
* zoom logic fix
* docs
* add tapo c200 camera
* fix initial absolute zoom
* small zoom logic fix
* better invalid velocity checks
* fix test
* really fix test this time
* [CHANGE] More resilient and slightly faster PTZ
* Make "Check Black" happy.
* Make "check black" happier
* Remove unused named exception
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Smarter Regions
* Formatting
* Cleanup
* Fix motion region checking logic
* Add database table and migration for regions
* Update region grid on startup
* Revert init delay change
* Fix mypy
* Move object related functions to util
* Remove unused
* Fix tests
* Remove log
* Update the region daily at 2
* Fix logic
* Formatting
* Initialize grid before starting processing frames
* Move back to creating grid in main process
* Formatting
* Fixes
* Formatting
* Fix region check
* Accept all but true
* Use regions grid for startup scan
* Add clarifying comment
* Fix new grid requests
* Add tests
* Delete stale region grids from DB
* fix queues not emptying fully by changing gets to a blocking call with short timeout
* add extra error/warning messages when there's a possibility of missing recording segments
* Update api.md
* Update api.md
* Added filter option for min/max event score to the /events API endpoint
* Added filter for submitted events
* Update http.py
* Add attribute item to timeline
* Add face icon
* Add support for other icons
* Cleanup
* Ensure attributes are only updated once
* don't show _ in attributes
* don't zoom if camera doesn't support it
* basic zooming
* make zooming configurable
* zooming docs
* optional zooming in camera status
* Use absolute instead of relative zooming
* increase edge threshold
* zoom considering object area
* bugfixes
* catch onvif zooming errors
* relative zooming option for dahua/amcrest cams
* docs
* docs
* don't make small movements
* remove old logger statement
* fix small movements
* use enum in config for zooming
* fix formatting
* empty move queue first
* clear tracked object before waiting for stop
* use velocity estimation for movements
* docs updates
* add tests
* typos
* recalc every 50 moves
* adjust zoom based on estimate box if calibrated
* tweaks for fast objects and large movements
* use real time for calibration and add info logging
* docs updates
* remove area scale
* Add example video to docs
* zooming font header size the same as the others
* log an error if a ptz doesn't report a MoveStatus
* debug logging for onvif service capabilities
* ensure camera supports ONVIF MoveStatus
* Add args to ignore audio and only process keyframes
* Add timelapse args to config
* Update docs
* Formatting
* Fix spacing
* Fix formatting
* add example of math for pts
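The pts math for a timelapse works out as a simple scaling of timestamps; the factor here is only an example:

```python
# setpts=0.04*PTS compresses presentation timestamps, so playback runs at
# 1 / 0.04 = 25x real time.
factor = 0.04                       # example value, not a recommendation
input_seconds = 60 * 60             # one hour of recording
output_seconds = input_seconds * factor
print(output_seconds)               # 144.0 seconds (~2.4 minutes of timelapse)
```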
* add note about network bandwidth permissions
* Update default net int
* Set default network interfaces to empty
* Don't read interfaces if none are set
* Formatting
* Add stderr output
* Reduce database queries to necessary labels
* Set columns for other queries
* skip creating model instances
---------
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
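Loosely, limiting the label query to the needed column and skipping model construction looks like this in peewee; the Event model is the project's existing model and is assumed here:

```python
# Pull only the label column and return bare tuples rather than building a
# full model instance per row.
distinct_labels = Event.select(Event.label).distinct().tuples()
labels = sorted({label for (label,) in distinct_labels})
```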
* Run ffmpeg sub process & video_properties as async
* Run recording cleanup in the main process
* More cleanup
* Use inter process communication to write recordings into the DB
* Formatting
* Non-Jetson changes
Required for later commits:
- Allow base image to be overridden (and don't assume its WORKDIR)
- Ensure python3.9
- Map hwaccel decode presets as strings instead of lists
Not required:
- Fix existing documentation
- Simplify hwaccel scale logic
* Prepare for multi-arch tensorrt build
* Add tensorrt images for Jetson boards
* Add Jetson ffmpeg hwaccel
* Update docs
* Add CODEOWNERS
* CI
* Change default model from yolov7-tiny-416 to yolov7-320
In my experience the tiny models perform markedly worse without being
much faster
* fixup! Update docs
* Store camera labels in dict and other optimizations
* Add max on timeout so it is at least 60
* Ensure db timeout is at least 60
* Update list once a day to ensure new labels are cleaned up
* Formatting
* Insert recordings as bulk instead of individually.
* Fix
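Bulk insertion with peewee is roughly the following; the Recordings model and row dicts are assumed:

```python
from peewee import chunked

def bulk_insert_recordings(rows, batch_size=50):
    # rows: list of dicts whose keys match the Recordings columns (assumed).
    for batch in chunked(rows, batch_size):
        Recordings.insert_many(batch).execute()
```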
* Refactor event and timeline cleanup
* Remove unused
* Send mqtt message when audio is detected
* Fix value
* Add audio topics to mqtt docs and add mqtt headers
* Use existing standard for values
* Update mqtt.md
* Add auto configuration for height, width and fps in detect role
* Add auto-configuration for detect width, height, and fps for input roles with detect in the CameraConfig class in config.py
* Refactor code to retrieve video properties from input stream in CameraConfig class and add optional parameter to retrieve video duration in get_video_properties function
* format
* Set default detect dimensions to 1280x720 and update DetectConfig to use the defaults
* Revert "Set default detect dimensions to 1280x720 and update DetectConfig to use the defaults"
This reverts commit a1aed0414d.
* Add default detect dimensions if autoconfiguration failed and log a warning message
* fix warning message spelling in frigate/config.py
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Ensure detect height and width are not None before using them in camera configuration
* docs: initial commit
* rename streamInfo to stream_info
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs
* handle case when get_video_properties returns 0x0 dimension
* Set detect resolution based on stream properties if available, else apply default values
* Update FrigateConfig to set default values for stream_info if resolution detection fails
* Update camera detection dimensions based on stream information if available
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
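A simplified sketch of the probe-then-fallback behavior these commits describe; the ffprobe invocation and the default values are illustrative:

```python
import json
import subprocess

DEFAULT_DETECT = {"width": 1280, "height": 720, "fps": 5}  # assumed defaults

def get_stream_resolution(url: str) -> dict:
    """Best-effort width/height probe; falls back to defaults on failure or 0x0."""
    cmd = [
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-show_streams", "-select_streams", "v:0", url,
    ]
    try:
        result = subprocess.run(cmd, capture_output=True, timeout=20)
        stream = json.loads(result.stdout)["streams"][0]
        width, height = int(stream["width"]), int(stream["height"])
    except Exception:
        return dict(DEFAULT_DETECT)
    if width == 0 or height == 0:  # probe "succeeded" without usable data
        return dict(DEFAULT_DETECT)
    return {"width": width, "height": height, "fps": DEFAULT_DETECT["fps"]}
```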
* Add limit to event query for fetching latest event with specified label and camera name
* Refactor the label_thumbnail function in frigate/http.py to simplify the event_query logic and improve code readability
* Support setting sub label scores via API
* Update docs
* Update docs
* Formatting
* Throw error when score is outside expected bounds
* Fix / cleanup
* much improved motion estimation and tracking
* docs updates
* move ptz specific mp values to ptz_metrics dict
* only check if moving at frame time
* pass full dict instead of individual values
* only check camera status if autotracking enabled
* a more sensible sleep time
* Only check camera status if preparing for a move
* only update tracked obj position when ptz stopped
* both pantilt *and* zoom should be idle
* check more often after moving
* No need to move pan and tilt separately
* Basic functionality
* Threaded motion estimator
* Revert "Threaded motion estimator"
This reverts commit 3171801607.
* Don't detect motion when ptz is moving
* fix motion logic
* fix mypy error
* Add threaded queue for movement for slower ptzs
* Move queues per camera
* Move autotracker start to app.py
* iou value for tracked object
* mqtt callback
* tracked object should be initially motionless
* only draw thicker box if autotracking is enabled
* Init if enabled when initially disabled in config
* Fix init
* Thread names
* Always use motion estimator
* docs
* clarify fov support
* remove size ratio
* use mp event instead of value for ptz status
* update autotrack at half fps
* fix merge conflict
* fix event type for mypy
* clean up
* Clean up
* remove unused code
* merge conflict fix
* docs: update link to object_detectors page
* Update docs/docs/configuration/autotracking.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* clarify wording
* pass actual instances directly
* default return preset
* fix type
* Error message when onvif init fails
* disable autotracking if onvif init fails
* disable autotracking if onvif init fails
* ptz module
* verify required_zones in config
* update util after dev merge
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update to latest tensorrt (8.6.1) release
* Build trt libyolo_layer.so in container
* Update tensorrt_models script to convert models from the frigate container
* Fix typo in model script
* Fix paths to yolo lib and models folder
* Add S6 scripts to test and convert specified TensortRT models at startup.
Rearrange tensorrt files into a docker support folder.
* Update TensorRT documentation to reflect the new model conversion process and minimum HW support.
* Fix model_cache path to live in config directory
* Move tensorrt s6 files to the correct directory
* Fix issues in model generation script
* Disable global timeout for s6 services
* Add version folder to tensorrt model_cache path
* Include TensorRT version 8.5.3
* Add numpy requirement prior to removal of np.bool
* This TRT version uses a mixture of cuda dependencies
* Redirect stdout from noisy model conversion
* Add functionality to update YAML config file with PUT request in HTTP endpoint
* Refactor copying of text to clipboard with Clipboard API and fallback to document.execCommand('copy') in CameraMap.jsx file
* Update YAML file from URL query parameters in frigate/http.py and add functionality to save motion masks, zones, and object masks in CameraMap.jsx
* formatting
* fix merging fuckup
* Refactor camera zone coordinate saving to use single query parameter per zone in CameraMap.jsx
* remove unnecessary print statements in util.py
* Refactor update_yaml_file function to separate the logic for updating YAML data into a new function update_yaml().
* fix merge errors
* Refactor code to improve error handling and add dependencies to useEffect hooks
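A minimal sketch of the separated update_yaml() helper described above: walk a key path into the loaded config and set the value. PyYAML is used here for brevity; the real code may use a comment-preserving loader, and the tuple key path is an assumption:

```python
import yaml

def update_yaml(data: dict, key_path, new_value) -> dict:
    """Set data[k1][k2]...[kn] = new_value, creating intermediate dicts."""
    node = data
    for key in key_path[:-1]:
        node = node.setdefault(key, {})
    node[key_path[-1]] = new_value
    return data

def update_yaml_file(path: str, key_path, new_value) -> None:
    with open(path) as f:
        data = yaml.safe_load(f) or {}
    update_yaml(data, key_path, new_value)
    with open(path, "w") as f:
        yaml.safe_dump(data, f)
```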
* reduce contention on frame_queue
don't check if the queue is full, just attempt to add the frame
in a non-blocking manner, and then if it fails, skip it
* don't check if the frame queue is empty, just try and get from it
* Update frigate/video.py
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
---------
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
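The non-blocking pattern described in this change, sketched with the standard-library queue exceptions (multiprocessing queues raise the same ones):

```python
import queue

def try_send_frame(frame_queue, frame) -> bool:
    # Don't ask "is it full?" first; just try, and drop the frame on failure.
    try:
        frame_queue.put(frame, block=False)
        return True
    except queue.Full:
        return False  # consumer is behind; skip this frame

def try_get_frame(frame_queue):
    # Likewise, don't poll empty(); attempt the get and handle the miss.
    try:
        return frame_queue.get(block=False)
    except queue.Empty:
        return None
```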
* Refactored queues to use faster_fifo instead of mp.Queue
* Refactored LimitedQueue to include a counter for the number of items in the queue and updated put and get methods to use the counter
* Refactor app.py and util.py to use a custom Queue implementation called LQueue instead of the existing Queue
* Refactor put and get methods in LimitedQueue to handle queue size and blocking behavior more efficiently
* code format
* remove code from other branch (merging fuckup)
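A rough sketch, under assumptions, of a count-limited wrapper around faster_fifo's Queue with a shared size counter, mirroring what the commit messages describe rather than the actual implementation:

```python
import multiprocessing as mp
import queue

from faster_fifo import Queue  # faster drop-in style replacement for mp.Queue

class LimitedQueue:
    """Illustrative: bound the queue by item count using a shared counter."""

    def __init__(self, maxsize: int, max_size_bytes: int = 1_000_000):
        self.maxsize = maxsize
        self.size = mp.Value("i", 0)  # shared across producer/consumer processes
        self.queue = Queue(max_size_bytes=max_size_bytes)

    def put(self, item, block=True, timeout=10.0):
        with self.size.get_lock():
            if 0 < self.maxsize <= self.size.value:
                raise queue.Full
            self.size.value += 1
        try:
            self.queue.put(item, block=block, timeout=timeout)
        except Exception:
            with self.size.get_lock():
                self.size.value -= 1
            raise

    def get(self, block=True, timeout=10.0):
        item = self.queue.get(block=block, timeout=timeout)
        with self.size.get_lock():
            self.size.value -= 1
        return item
```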
* Check ffmpeg version instead of checking for presence of BTBN_PATH
* Query ffmpeg version in s6 run script instead of subprocessing in every import
* Define LIBAVFORMAT_VERSION_MAJOR in devcontainer too
* Formatting
* Default ffmpeg version to current btbn version so unit tests pass
* Reduce framerate before downscaling
It is cheaper to drop frames and downscale those that remain than it is
to downscale all frames and then drop some of them. This is achieved
with the filter chain `-vf fps=FPS,scale=W:H`, and perhaps was the
original intention. The plain `-r` and `-s` flags do not execute in
order though - they each put themselves at the *end* of the filterchain,
so `-r FPS -s WxH` actually applies the scale filter first, and then the
rate filter.
This fix can halve the CPU used by the detect ffmpeg process.
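Expressed as ffmpeg argument lists (roughly how the detect command is assembled), the ordering difference is something like this; the values are placeholders:

```python
fps, width, height = 5, 1280, 720  # placeholder detect settings

# Old: -r and -s effectively append to the END of the filterchain,
# so every frame is scaled and only then dropped.
old_args = ["-r", str(fps), "-s", f"{width}x{height}"]

# New: an explicit filtergraph drops frames first, then scales the survivors.
new_args = ["-vf", f"fps={fps},scale={width}:{height}"]
```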
* Bring back hard rate limits
* Force birdseye cameras into standard aspect ratios
* Clarify comment
* Formatting
* Actually use the calculated aspect ratio when building the layout
* Fix Y aspect
* Force canvas into known aspect ratio as well
* Save canvas size and don't recalculate
* Cache coefficients that are used for different size layouts
* Further optimize calculations to not be done multiple times
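One way to express the "standard aspect ratio" normalization these layout commits describe, with an assumed candidate list:

```python
# Candidate ratios are an assumption; the real list may differ.
KNOWN_ASPECTS = [(16, 9), (9, 16), (4, 3), (3, 4), (1, 1), (32, 9)]

def snap_to_known_aspect(width: int, height: int) -> tuple:
    """Return the known aspect ratio closest to this camera's frame."""
    actual = width / height
    return min(KNOWN_ASPECTS, key=lambda wh: abs(actual - wh[0] / wh[1]))

print(snap_to_known_aspect(2560, 1920))  # (4, 3)
```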
* Add audio process PID to the list of processes and log the start of the audio process
* Update audio process PID key in processes dictionary to "audioDetector" instead of "audio".
* Scale layout up to max size after it has been calculated
* Limit portrait cameras to taking up 2 rows
* Fix bug
* Fix birdseye not removing cameras once objects are no longer visible
* Fix lint
* Initial audio classification model implementation
* fix mypy
* Keep audio labelmap local
* Cleanup
* Start adding config for audio
* Add the detector
* Add audio detection process keypoints
* Build out base config
* Load labelmap correctly
* Fix config bugs
* Start audio process
* Fix startup issues
* Try to cleanup restarting
* Add ffmpeg input args
* Get audio detection working
* Save event to db
* End events if not heard for 30 seconds
* Use not heard config
* Stop ffmpeg when shutting down
* Fixes
* End events correctly
* Use api instead of event queue to save audio events
* Get events working
* Close threads when stop event is sent
* remove unused
* Only start audio process if at least one camera is enabled
* Add const for float
* Cleanup labelmap
* Add audio icon in frontend
* Add ability to toggle audio with mqtt
* Set initial audio value
* Fix audio enabling
* Close logpipe
* Isort
* Formatting
* Fix web tests
* Fix web tests
* Handle cases where args are a string
* Remove log
* Cleanup process close
* Use correct field
* Simplify if statement
* Use var for localhost
* Add audio detectors docs
* Add restream docs to mention audio detection
* Add full config docs
* Fix links to other docs
---------
Co-authored-by: Jason Hunter <hunterjm@gmail.com>
* use a different method for blur and contrast to reduce CPU
* blur with radius instead
* use faster interpolation for motion
* improve contrast based on averages
* increase default threshold to 30
* ensure mask is applied after contrast improvement
* update opencv
* update benchmark script
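A loose sketch of the cheaper preprocessing described in these commits, assuming grayscale frames as numpy arrays; the kernel size, interpolation choice, and average-based contrast stretch are illustrative:

```python
import cv2
import numpy as np

def preprocess_for_motion(gray: np.ndarray, scale: float = 0.25) -> np.ndarray:
    # Downscale with nearest-neighbor interpolation, the cheapest option.
    small = cv2.resize(gray, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_NEAREST)

    # Blur by radius (sigma) with a small Gaussian instead of a heavier filter.
    blurred = cv2.GaussianBlur(small, (0, 0), sigmaX=2)

    # Stretch contrast relative to the frame's average brightness rather than
    # running a full histogram equalization every frame.
    mean = blurred.mean()
    return cv2.convertScaleAbs(blurred, alpha=1.2, beta=-0.2 * mean)
```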
* configurable ffmpeg timeout
* configurable ffmpeg healthcheck interval
rename timeout to healthcheck_interval
only grab config value once
* configurable ffmpeg retry interval
rename healthcheck_interval to retry_interval
* add retry_interval to docs
- update retry_interval text in config.py
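As a config field this might look like the following pydantic sketch; the default shown is an assumption:

```python
from pydantic import BaseModel, Field

class FfmpegConfigSketch(BaseModel):
    # Renamed over these commits: timeout -> healthcheck_interval -> retry_interval.
    retry_interval: float = Field(
        default=10.0,  # assumed default
        title="Seconds to wait before ffmpeg retries connecting to the camera.",
    )
```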
* optimize frame-per-second calculations
FPS is never calculated over differing periods for any given
counter. By only storing timestamps within the period that is
used, we avoid having to iterate 1000 entries every time it's
re-calculated (multiple times per second). 1000 entries would
normally only be needed if something is running at 100fps. A
more common speed, anywhere from 5 to 30fps, only needs 50
to 300 entries.
This isn't a huge improvement in the grand scheme, but when motion
detection is disabled, it takes nearly as much time in a flamegraph
as actually transferring the video frame.
* fix python checks
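A condensed sketch of the bounded-history counter this change describes; the key idea is sizing the stored timestamps to the expected rate and period rather than a fixed 1000 entries:

```python
import time
from collections import deque

class EventsPerSecond:
    """Illustrative version: keep only timestamps inside the measured period."""

    def __init__(self, max_events: int = 300, last_n_seconds: int = 10):
        self._timestamps = deque(maxlen=max_events)
        self._last_n_seconds = last_n_seconds

    def update(self) -> None:
        self._timestamps.append(time.time())

    def eps(self) -> float:
        cutoff = time.time() - self._last_n_seconds
        # The deque's maxlen ages out old entries; anything still older than
        # the period is dropped from the front before counting.
        while self._timestamps and self._timestamps[0] < cutoff:
            self._timestamps.popleft()
        return len(self._timestamps) / self._last_n_seconds
```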
* Load labels dynamically to include custom events and audio, do not include attribute labels
* Formatting
* Fix sorting
* Also filter tracked object list on camera page
* isort
* Don't fail before load