* Adding Models
* Final Async Update
* Bug Fixing
* Fix
* Adding fixes
* Working async infer
* Final documentation and debug update
* Removing some extra prints
* Post-process correct label push
* config docs fix
* Review Fix
* Review fix 2.0
* Fixing the async API to reduce inference time from 30ms to 10ms (see the sketch after this list)
* Fix for multi-stream async inference
* Format
* Fix#3
* Format#2
* Remove unnecessary includes
* Sort Imports
* Fix #16845
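For context on the async work above: an asynchronous inference API lets the caller submit a frame and keep decoding while the accelerator runs, collecting results from a callback or queue, which is how multiple camera streams can share one detector without serializing on each request. A generic sketch of that pattern, not the project's actual detector code; all names here are illustrative:

```python
import queue
import threading

class AsyncDetector:
    """Minimal async-inference wrapper: frames go in, results come out of a queue."""

    def __init__(self, infer_fn):
        self._infer = infer_fn            # blocking single-frame inference call
        self._requests: queue.Queue = queue.Queue()
        self.results: queue.Queue = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, stream_id, frame) -> None:
        # Returns immediately; the caller can keep decoding the next frame
        # while the accelerator works, which is where the latency win comes from.
        self._requests.put((stream_id, frame))

    def _worker(self) -> None:
        while True:
            stream_id, frame = self._requests.get()
            self.results.put((stream_id, self._infer(frame)))
```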
It appears that after PR #16712, the ffmpeg build with JP6 is broken with the error `/usr/lib/ffmpeg/jetson/bin/ffmpeg: error while loading shared libraries: libavdevice.so.60: cannot open shared object file: No such file or directory`.
This PR fixes the issue.
* Adding new LD entry for ffmpeg new location
* Update Dockerfile.arm64
* Move LD config to Dockerfile arm64 instead of detector
* db migration
* db model
* assign admin role on password reset
* add role to jwt and api responses
* don't restrict api access for admins yet
* use json response
* frontend auth context
* update auth form for profile endpoint
* add access denied page
* add protected routes
* auth hook
* dialogs
* user settings view
* restrict viewer access to settings
* restrict camera functions for viewer role
* add password dialog to account menu
* spacing tweak
* migrator default to admin
* escape quotes in migrator
* ui tweaks
* tweaks
* colors
* colors
* fix merge conflict
* fix icons
* add api layer enforcement (see the sketch after this list)
* ui tweaks
* fix error message
* debug
* clean up
* remove print
* guard apis for admin only
* fix tests
* fix review tests
* use correct error responses from api in toasts
* add role to account menu
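A minimal sketch of the API-layer role enforcement described above, assuming a FastAPI-style app where an auth layer has already validated the JWT and attached its role claim to the request; the name `require_role` and the claim layout are illustrative, not the project's actual code:

```python
from fastapi import Depends, FastAPI, HTTPException, Request

app = FastAPI()

def require_role(*allowed_roles: str):
    def dependency(request: Request) -> str:
        # Assumes earlier middleware decoded the JWT and stored its claims
        # (including "role") on request.state.
        role = getattr(request.state, "role", None)
        if role not in allowed_roles:
            raise HTTPException(status_code=403, detail="Access denied")
        return role
    return dependency

@app.post("/api/restart", dependencies=[Depends(require_role("admin"))])
def restart():
    # Admin-only action; a viewer-role token gets a 403 from the dependency above.
    return {"success": True}
```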
* Update vite
* Update LuIcons
* Update radix packages
* Fix other icons
* Use correct node version
* Remove superfluous web build on python tests
* Move web build to test
* Monitor if camera is disabled for review items
* Simplify multi camera disabled check
* Cleanup birdseye config handling
* Cleanup
* Remove old listeners
* Fix live cameras not showing on refresh
* Fix live dashboard when birdseye is added
* Handle cameras that are offline / disabled
* Use black instead of green frame (see the sketch after this list)
* Fix missing mqtt topics
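On the "black instead of green frame" change: a zero-filled YUV buffer renders green because zero chroma is far from the neutral value, so a placeholder frame for an offline or disabled camera reads better with luma at black level and chroma at its midpoint. A minimal sketch assuming I420-style planar storage; the helper name is illustrative:

```python
import numpy as np

def blank_frame(height: int, width: int) -> np.ndarray:
    # I420 frames are commonly stored as one (height * 3 // 2, width) plane:
    # the Y plane on top, followed by the subsampled U and V planes.
    frame = np.empty((height * 3 // 2, width), dtype=np.uint8)
    frame[:height, :] = 16    # luma at black level (use 0 for full-range black)
    frame[height:, :] = 128   # neutral chroma; leaving this at 0 tints the frame green
    return frame
```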
* Adapt openai.py to work with xAI
It appears xAI is a bit stricter about how the prompt is sent. This changes the prompt to be a dictionary with `"type": "text"`, which works with both OpenAI and xAI (see the sketch below).
* Adapt openai.py to work with xAI
add "detail": "low"
* Adapt openai.py to work with xAI
Apply Ruff formatting and linting fixes
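The change sends the prompt in the "content parts" form that both the OpenAI and xAI chat completions endpoints accept, i.e. a list of typed parts rather than a bare string. An illustrative call with the `openai` client; the model name and image payload are placeholders rather than the project's actual configuration:

```python
from openai import OpenAI

# Pointing the standard openai client at xAI's OpenAI-compatible endpoint.
client = OpenAI(base_url="https://api.x.ai/v1", api_key="xai-...")

response = client.chat.completions.create(
    model="grok-2-vision",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the main subject of this image."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "data:image/jpeg;base64,<base64 frame>",
                        "detail": "low",
                    },
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```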
* config options
* metrics
* stop and restart ffmpeg processes (see the sketch after this list)
* dispatcher
* frontend websocket
* buttons for testing
* don't recreate log pipe
* add/remove cam from birdseye when enabling/disabling
* end all objects and send empty camera activity
* enable/disable switch in ui
* disable buttons when camera is disabled
* use enabled_in_config for some frontend checks
* tweaks
* handle settings pane with disabled cameras
* frontend tweaks
* change to debug log
* mqtt docs
* tweak
* ensure all ffmpeg processes are initially started
* clean up
* use zmq
* remove camera metrics
* remove camera metrics
* tweaks
* frontend tweaks
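A hypothetical sketch of the runtime enable/disable flow described above: a dispatcher receives an ON/OFF command (e.g. over MQTT or the WebSocket) and stops or restarts the camera's ffmpeg process accordingly. Class and function names are illustrative, not the project's actual implementation:

```python
import subprocess

class CameraProcess:
    def __init__(self, name: str, ffmpeg_cmd: list[str]):
        self.name = name
        self.ffmpeg_cmd = ffmpeg_cmd
        self.process: subprocess.Popen | None = None

    def start(self) -> None:
        # Only spawn a new ffmpeg process if one isn't already running.
        if self.process is None or self.process.poll() is not None:
            self.process = subprocess.Popen(self.ffmpeg_cmd)

    def stop(self) -> None:
        if self.process is not None and self.process.poll() is None:
            self.process.terminate()
            self.process.wait(timeout=10)
        self.process = None

def on_enabled_command(camera: CameraProcess, payload: str) -> None:
    # payload mirrors the other MQTT toggles: "ON" enables, "OFF" disables.
    if payload == "ON":
        camera.start()
    else:
        camera.stop()
```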
* Simplify rocm install and update to 6.3.1
* Build out more necessary packages
* Update to 6.3.3
* Set bake version
* Fix typo
* Ensure NHWC is used
* Reset dev changes
* Write to cache
* Update getting_started with full host:container syntax for hardware acceleration
* Update edgetpu.md
Add a tip that the Coral TPU does not change its identification until after Frigate runs an inference on it.