* semantic trigger test
* database and model
* config
* embeddings maintainer and trigger post-processor
* api to create, edit, delete triggers
* frontend and i18n keys
* use thumbnail and description for trigger types
* image picker tweaks
* initial sync
* thumbnail file management
* clean up logs and use saved thumbnail on frontend
* publish mqtt messages
* webpush changes to enable trigger notifications
* add enabled switch
* add triggers from explore
* renaming and deletion fixes
* fix typing
* UI updates and add last triggering event time and link
* log exception instead of return in endpoint
* highlight entry in UI when triggered
* save and delete thumbnails directly
* remove alert action for now and add descriptions
* tweaks
* clean up
* fix types
* docs
* docs tweaks
* docs
* reuse enum
* Move log level initialization to log
* Use logger config
* Formatting
* Fix config order
* Set process names
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Set runtime
* Use count correctly
* Don't assume camera sizes
* Use separate zmq proxy for object detection
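For context, a dedicated pub/sub proxy in ZMQ is typically just an XSUB/XPUB pair; a minimal sketch with illustrative endpoints (not Frigate's actual socket paths):

```python
import zmq

ctx = zmq.Context()
xsub = ctx.socket(zmq.XSUB)
xsub.bind("ipc:///tmp/detection_in")   # detectors publish here (illustrative)
xpub = ctx.socket(zmq.XPUB)
xpub.bind("ipc:///tmp/detection_out")  # consumers subscribe here (illustrative)
zmq.proxy(xsub, xpub)  # blocks, shuttling messages between the two sides
```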
* Correct order
* Use forkserver
* Only store PID instead of entire process reference
* Cleanup
* Catch correct errors
* Fix typing
* Remove before_run from process util
The before_run hook never actually ran, and the pattern could also deadlock. Wrapping BaseProcess.run via __getattribute__ is a common pitfall with Python's multiprocessing because of how process objects are serialized:
When you create a multiprocessing.Process object and call start(), the multiprocessing module needs to serialize enough of the process object to re-create it in the new interpreter. It pickles this serialized object and sends it to the newly spawned process. The __getattribute__ implementation for run breaks down at that point:
- run is retrieved during serialization: when multiprocessing pickles the Process object to send it to the new process, accessing the run attribute triggers the __getattribute__ wrapper, which binds run_wrapper to self.
- run_wrapper captures the parent's self: the closure is created in the parent process and holds the Process instance from the parent's memory space.
- Deserialization creates a new object: the child process reconstructs a fresh Process object from the pickled data, but the pickled run_wrapper still references the parent's instance. This is a subtle but critical distinction.
- The child's run is not the wrapped run: when the child process starts, it calls its own run method, which resolves to the original multiprocessing.Process.run (or a direct override). The __getattribute__ wrapping never applies inside the child, so before_run is silently skipped.
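A condensed sketch of the broken pattern (names are illustrative, not the exact process util code):

```python
import multiprocessing as mp

class Process(mp.Process):
    def before_run(self) -> None:
        pass  # hook subclasses were meant to override

    def __getattribute__(self, name: str):
        if name == "run":
            def run_wrapper(*args, **kwargs):
                # `self` is the parent-process instance captured by this
                # closure, not the object reconstructed in the child.
                self.before_run()
                return mp.Process.run(self, *args, **kwargs)

            return run_wrapper
        return super().__getattribute__(name)

# In the child, run resolves on the deserialized object and the wrapper
# never applies, so before_run() is silently skipped.
```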
* Cleanup
* Logging bugfix (#18465)
* use mp Manager to handle logging queues
A Python bug (https://github.com/python/cpython/issues/91555) was preventing logs from the embeddings maintainer process from printing. The bug is fixed in Python 3.14, but a viable workaround is to use the multiprocessing Manager, which better manages mp queues and causes the logging to work correctly.
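A minimal sketch of the workaround (handler wiring is illustrative, not Frigate's exact logging setup):

```python
import logging
import logging.handlers
import multiprocessing as mp

def worker(log_queue) -> None:
    # Child processes ship their records through the managed queue.
    logging.getLogger().addHandler(logging.handlers.QueueHandler(log_queue))
    logging.getLogger().setLevel(logging.INFO)
    logging.info("hello from the child")

if __name__ == "__main__":
    # A Manager-backed queue sidesteps cpython#91555, where records pushed
    # to a plain mp.Queue from some children never got printed.
    log_queue = mp.Manager().Queue()
    listener = logging.handlers.QueueListener(log_queue, logging.StreamHandler())
    listener.start()
    p = mp.Process(target=worker, args=(log_queue,))
    p.start()
    p.join()
    listener.stop()
```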
* consolidate
* fix typing
* Fix typing
* Use global log queue
* Move to using process for logging
* Convert camera tracking to process
* Add more processes
* Finalize process
* Cleanup
* Cleanup typing
* Formatting
* Remove daemon
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Add base class for global config updates
* Add or remove camera states
* Move camera process management to separate thread
* Move camera management fully to separate class
* Cleanup
* Stop camera processes when stop command is sent
* Start processes dynamically when needed
* Adjust
* Leave extra room in tracked object queue for two cameras
* Dynamically set extra config pieces
* Add some TODOs
* Fix type check
* Simplify config updates
* Improve typing
* Correctly handle indexed entries
* Cleanup
* Create out SHM
* Use ZMQ for signaling that object detection is completed
* Get camera correctly created
* Cleanup for updating the cameras config
* Cleanup
* Don't enable audio if no cameras have audio transcription
* Use exact string so similar camera names don't interfere
* Add ability to update config via json body to config/set endpoint
Additionally, update the config in a single call rather than one call per updated key
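A sketch of the endpoint shape (helper names are hypothetical):

```python
from fastapi import Body, FastAPI

app = FastAPI()

@app.put("/config/set")
def config_set(updates: dict = Body(...)):
    # Apply every updated key in one pass and persist once,
    # rather than issuing a save per key.
    for key, value in flatten_keys(updates):  # hypothetical helper
        apply_update(key, value)              # hypothetical helper
    save_config()                             # hypothetical helper
    return {"success": True}
```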
* fix autotracking calibration to support new config updater function
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* install new packages for transcription support
* add config options
* audio maintainer modifications to support transcription
* pass main config to audio process
* embeddings support
* api and transcription post processor
* embeddings maintainer support for post processor
* live audio transcription with sherpa and faster-whisper
* update dispatcher with live transcription topic
* frontend websocket
* frontend live transcription
* frontend changes for speech events
* i18n changes
* docs
* mqtt docs
* fix linter
* use float16 and small model on gpu for real-time
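For reference, a minimal faster-whisper sketch matching that configuration (input file and surrounding plumbing are illustrative):

```python
from faster_whisper import WhisperModel

# "small" model with float16 on GPU, per the commit above.
model = WhisperModel("small", device="cuda", compute_type="float16")

segments, _info = model.transcribe("clip.wav")  # illustrative input
for segment in segments:
    print(f"[{segment.start:.1f}s -> {segment.end:.1f}s] {segment.text}")
```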
* fix return value and use requestor to embed description instead of passing embeddings
* run real-time transcription in its own thread
* tweaks
* publish live transcriptions on their own topic instead of tracked_object_update
* config validator and docs
* clarify docs
* Include config publisher in api
* Call update topic for passed topics
* Update zones dynamically
* Update zones internally
* Support zone and mask reset
* Handle updating objects config
* Don't put status for needing to restart Frigate
* Cleanup http tests
* Fix tests
* Don't use timezone in export dialog timestamps
Revert an unnecessary change made in https://github.com/blakeblackshear/frigate/pull/18257
* Ensure the notifications register button is only disabled when notifications are disabled both globally and for every individual camera
* Send test notification if any cameras are enabled
* clarify docs about disabling cameras
* fix crash in autotracking zoom
* clean up
* masks and zones i18n fixes
* Check if camera is enabled in config
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* use async/await instead of asyncio.run()
* fix autotracking
* create cameras in same event loop that will use them
* more debug
* try using existing event loop instead of creating a new one
* merge dev
* fixes
* run OnvifController get_camera_info calls in a dedicated loop
* move coroutine call with loop to api
* use asyncio for autotracking move queues
* clean up
* fix calibration
* improve exception logging
* db migration
* db model
* assign admin role on password reset
* add role to jwt and api responses
* don't restrict api access for admins yet
* use json response
* frontend auth context
* update auth form for profile endpoint
* add access denied page
* add protected routes
* auth hook
* dialogs
* user settings view
* restrict viewer access to settings
* restrict camera functions for viewer role
* add password dialog to account menu
* spacing tweak
* migrator default to admin
* escape quotes in migrator
* ui tweaks
* tweaks
* colors
* colors
* fix merge conflict
* fix icons
* add api layer enforcement
* ui tweaks
* fix error message
* debug
* clean up
* remove print
* guard apis for admin only
* fix tests
* fix review tests
* use correct error responses from api in toasts
* add role to account menu
* Remove thumbnail from dict
* Create thumbnail directory
* Cleanup handling of tracked object images
* Make thumbnail optional
* Handle cases where thumbnail is used
* Expand options for thumbnail api
* Fix up the does not exist condition
* Remove absolute usages of thumbnails
* Write thumbnails for external events
* Reduce webp quality
* Use webp everywhere in frontend
* Formatting
* Always consider all events when re-indexing
* Add thumbnail deletion and cleanup path management
* Cleanup imports
* Rename def
* Don't save thumbnail for every object
* Correct event count
* Use correct function
* Include thumbnail in query
* Remove unused
* Fix requiring exception
* backend
* frontend
* add notification config at camera level
* camera level notifications in dispatcher
* initial onconnect
* frontend
* backend for suspended notifications
* frontend
* use base communicator
* initialize all cameras in suspended array and use 0 for unsuspended
* remove switch and use select for suspending in frontend
* use timestamp instead of datetime
* frontend tweaks
* mqtt docs
* fix button width
* use grid for layout
* use thread and queue for processing notifications with 10s timeout
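A sketch of that shape (the sender is hypothetical):

```python
import queue
import threading

jobs: queue.Queue = queue.Queue()

def notification_worker() -> None:
    while True:
        payload = jobs.get()
        try:
            send_push(payload, timeout=10)  # hypothetical sender, 10s cap
        except Exception:
            pass  # a slow or failed push must not wedge the queue
        finally:
            jobs.task_done()

threading.Thread(target=notification_worker, daemon=True).start()
```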
* clean up
* move async code to main class
* tweaks
* docs
* remove warning message
* Actually send result to face registration
* Define postprocessing api and move face processing to fit
* Standardize request handling
* Standardize handling of processors
* Rename processing metrics
* Cleanup
* Standardize object end
* Update to newer formatting
* One more
* One more
* Get stats for embeddings inferences
* cleanup embeddings inferences
* Enable UI for feature metrics
* Change threshold
* Fix check
* Update python for actions
* Set python version
* Ignore type for now
* Support downloading face models
* Handle download and loading correctly
* Add face dir creation
* Fix error
* Fix
* Formatting
* Move upload to button
* Show number of faces in library for each name
* Add text color for score
* Cleanup
* Add service manager infrastructure
The changes are (This will be a bit long):
- A ServiceManager class that spawns a background thread and deals with
service lifecycle management. The idea is that service lifecycle code
will run in async functions, so a single thread is enough to manage
any (reasonable) amount of services.
- A Service class, that offers start(), stop() and restart() methods
that simply notify the service manager to... well. Start, stop or
restart a service.
(!) Warning: Note that this differs from mp.Process.start/stop in that
the service commands are sent asynchronously and will complete
"eventually". This is good because it means that business logic is
fast when booting up and shutting down, but we need to make sure
that code does not rely on start() and stop() being instant
(Mainly pid assignments).
Subclasses of the Service class should use the on_start and on_stop
methods to monitor for service events. These will be run by the
service manager thread, so we need to be careful not to block
execution here. Standard async stuff.
(!) Note on service names: Service names should be unique within a
ServiceManager. Make sure that you pass the name you want to
super().__init__(name="...") if you plan to spawn multiple instances
of a service.
- A ServiceProcess class: A Service that wraps a multiprocessing.Process
into a Service. It offers a run() method subclasses can override and
can support in-place restarting using the service manager.
And finally, I lied a bit about this whole thing using a single thread.
I can't find any way to run python multiprocessing in async, so there is
a MultiprocessingWaiter thread that waits for multiprocessing events and
notifies any pending futures. This was uhhh... fun? No, not really.
But it works. Using this part of the code just involves calling the
provided wait method. See the implementation of ServiceProcess for more
details.
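A minimal sketch of that lifecycle shape (illustrative names; the real classes carry more state and error handling):

```python
import asyncio
import threading

class ServiceManager:
    # One background thread runs an asyncio loop that executes the
    # lifecycle hooks of every registered service.
    def __init__(self) -> None:
        self.loop = asyncio.new_event_loop()
        threading.Thread(target=self.loop.run_forever, daemon=True).start()

class Service:
    def __init__(self, manager: ServiceManager, name: str) -> None:
        self.manager = manager
        self.name = name

    async def on_start(self) -> None: ...
    async def on_stop(self) -> None: ...

    def start(self) -> None:
        # Returns immediately; on_start runs "eventually" on the manager loop.
        asyncio.run_coroutine_threadsafe(self.on_start(), self.manager.loop)

    def stop(self) -> None:
        asyncio.run_coroutine_threadsafe(self.on_stop(), self.manager.loop)
```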
* Mirror util.Process hooks onto service process
* Remove Service.__name attribute
* Do not serialize process object on ServiceProcess start
* Update frigate dictionary
* Convert AudioProcessor to service process
* swap sqlite_vec for chroma in requirements
* load sqlite_vec in embeddings manager
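A sketch of loading the extension and running a KNN query, assuming the sqlite-vec Python bindings (the table layout is illustrative):

```python
import sqlite3

import sqlite_vec  # assumption: the sqlite-vec Python bindings

db = sqlite3.connect("frigate.db")
db.enable_load_extension(True)
sqlite_vec.load(db)  # registers the vec0 virtual table module
db.enable_load_extension(False)

db.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS vec_descriptions USING vec0(embedding float[384])"
)

query_embedding = [0.0] * 384  # stand-in for a real MiniLM embedding
rows = db.execute(
    "SELECT rowid, distance FROM vec_descriptions "
    "WHERE embedding MATCH ? ORDER BY distance LIMIT 10",
    [sqlite_vec.serialize_float32(query_embedding)],
).fetchall()
```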
* remove chroma and revamp Embeddings class for sqlite_vec
* manual minilm onnx inference
* remove chroma in clip model
* migrate api from chroma to sqlite_vec
* migrate event cleanup from chroma to sqlite_vec
* migrate embedding maintainer from chroma to sqlite_vec
* genai description for sqlite_vec
* load sqlite_vec in main thread db
* extend the SqliteQueueDatabase class and use peewee db.execute_sql
* search with Event type for similarity
* fix similarity search
* install and add comment about transformers
* fix normalization
* add id filter
* clean up
* clean up
* fully remove chroma and add transformers env var
* readd uvicorn for fastapi
* readd tokenizer parallelism env var
* remove chroma from docs
* remove chroma from UI
* try removing custom pysqlite3 build
* hard code limit
* optimize queries
* revert explore query
* fix query
* keep building pysqlite3
* single pass fetch and process
* remove unnecessary re-embed
* update deps
* move SqliteVecQueueDatabase to db directory
* make search thumbnail take up full size of results box
* improve typing
* improve model downloading and add status screen
* daemon downloading thread
* catch case when semantic search is disabled
* fix typing
* build sqlite_vec from source
* resolve conflict
* file permissions
* try build deps
* remove sources
* sources
* fix thread start
* include git in build
* reorder embeddings after detectors are started
* build with sqlite amalgamation
* non-platform specific
* use wget instead of curl
* remove unzip -d
* remove sqlite_vec from requirements and load the compiled version
* fix build
* avoid race in db connection
* add scale_factor and bias to description zscore normalization
I just saw this, and I would be very surprised by that behaviour as a
user. Changing the db path would randomly move the database, and
changing it back (or to anything, really) would not. These kinds of
advanced settings are generally expected to do one thing: Change the
path frigate opens the database from. The end.
* Subclass Process for audio_process
* Introduce custom mp.Process subclass
In preparation to switch the multiprocessing startup method away from
"fork", we cannot rely on os.fork cloning the log state at fork time.
Instead, we have to set up logging before we run the business logic of
each process.
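An illustrative sketch of the shape (not the exact util code):

```python
import logging
import multiprocessing as mp

class Process(mp.Process):
    def run(self) -> None:
        self.before_run()
        super().run()  # executes the configured target, as usual

    def before_run(self) -> None:
        # Under spawn/forkserver the child interpreter starts fresh,
        # so wire up the child's logging explicitly here.
        logging.basicConfig(level=logging.INFO)
```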
* Make camera_metrics into a class
* Make ptz_metrics into a class
* Fixed PtzMotionEstimator.ptz_metrics type annotation
* Removed pointless variables
* Do not start audio processor when no audio cameras are configured
* POC: Added FastAPI with one endpoint (get /logs/service)
* POC: Revert error_log
* POC: Converted preview related endpoints to FastAPI
* POC: Converted two more endpoints to FastAPI
* POC: lint
* Convert all media endpoints to FastAPI. Added /media prefix (/media/camera && media/events && /media/preview)
* Convert all notifications API endpoints to FastAPI
* Convert first review API endpoints to FastAPI
* Convert remaining review API endpoints to FastAPI
* Convert export endpoints to FastAPI
* Fix path parameters
* Convert events endpoints to FastAPI
* Use body for multiple events endpoints
* Use body for multiple events endpoints (create and end event)
* Convert app endpoints to FastAPI
* Convert app endpoints to FastAPI
* Convert auth endpoints to FastAPI
* Removed flask app in favour of FastAPI app. Implemented FastAPI middleware to check CSRF, connect and disconnect from DB. Added middleware for x-forwarded-for headers
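A sketch of the DB portion of that middleware, assuming a peewee database handle:

```python
from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def db_session(request: Request, call_next):
    # Open a connection for the request and always release it afterwards.
    database.connect(reuse_if_open=True)  # `database`: peewee handle (assumed)
    try:
        return await call_next(request)
    finally:
        if not database.is_closed():
            database.close()
```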
* Added starlette plugin to expose custom headers
* Use slowapi as the limiter
* Use query parameters for the frame latest endpoint
* Use query parameters for the media snapshot.jpg endpoint
* Use query parameters for the media MJPEG feed endpoint
* Revert initial nginx.conf change
* Added missing event_id for /events/search endpoint
* Removed left over comment
* Use FastAPI TestClient
* severity query parameter should be a string
* Use the same pattern for all tests
* Fix endpoint
* Revert media routers to old names. Order routes to make sure the dynamic ones from media.py are only used whenever there's no match on auth/etc
* Reverted paths for media on tsx files
* Deleted file
* Fix test_http to use TestClient
* Formatting
* Bind timeline to DB
* Fix http tests
* Replace manual filename validation with pathvalidate
* Fix latest.ext handling and disable uvicorn access logs
* Add constraints to api-provided values
* Formatting
* Remove unused
* Remove unused
* Get rate limiter working
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Moved FrigateApp.init_config() into FrigateConfig.load()
* Move frigate config loading into main
* Store PlusApi in FrigateConfig
* Register SIGTERM handler in main
* Ensure logging is setup during config parsing
* Removed pointless try
* Moved config initialization out of FrigateApp
* Made FrigateApp.shm_frame_count into a function
* Removed log calls from signal handlers
Python's logging calls are not re-entrant, which caused at least one of
these to deadlock randomly.
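The safe pattern is to only set a flag in the handler and act on it elsewhere, e.g.:

```python
import signal
import threading

stop_event = threading.Event()

def on_sigterm(signum, frame) -> None:
    # No logging here: logging takes locks and is not re-entrant,
    # so calling it from a signal handler can deadlock.
    stop_event.set()

signal.signal(signal.SIGTERM, on_sigterm)
```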
* Reopen stdout/err on process fork
This helps avoid deadlocks (https://github.com/python/cpython/issues/91776).
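One way to do this in the forked child (sketch):

```python
import os
import sys

def reopen_std_streams() -> None:
    # Replace the inherited stream objects (whose internal locks may have
    # been held at fork time) with fresh ones over duplicated fds.
    sys.stdout = os.fdopen(os.dup(sys.stdout.fileno()), "w")
    sys.stderr = os.fdopen(os.dup(sys.stderr.fileno()), "w")
```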
* Make mypy happy
* Whoops. I might have forgotten to save.
Truly an amateur mistake.
* Always call FrigateApp.stop()
* Ignore entire __pycache__ folder instead of individual *.pyc files
* Ignore .mypy_cache in git
* Rework config YAML parsing to use only ruamel.yaml
PyYAML silently overrides keys when encountering duplicates, but ruamel
raises an exception by default. Since we're already using it elsewhere,
dropping PyYAML is an easy choice to make.
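The difference in one example:

```python
import yaml  # PyYAML
from ruamel.yaml import YAML

doc = "key: 1\nkey: 2\n"

print(yaml.safe_load(doc))  # {'key': 2} (the duplicate key silently wins)

YAML(typ="safe").load(doc)  # raises DuplicateKeyError in recent ruamel.yaml
```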
* Added EnvString in config to slim down runtime_config()
* Added gitlens to devcontainer
* Automatically call FrigateConfig.runtime_config()
runtime_config needed to be called manually before. Now, it's been
removed, but the same code is run by a pydantic validator.
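Sketched with pydantic v2's model_validator (the original code may use different validator machinery):

```python
from pydantic import BaseModel, model_validator

class FrigateConfig(BaseModel):
    # ... config fields ...

    @model_validator(mode="after")
    def _apply_runtime_config(self) -> "FrigateConfig":
        # Runs automatically after parsing, replacing the manual
        # runtime_config() call at every load site.
        return self
```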
* Fix handling of missing -segment_time
* Removed type annotation on FrigateConfig's parse
I'd like to keep them, but then mypy complains about some fundamental
errors with how the pydantic model is structured. I'd like to fix it,
but I'd rather work towards moving some of this config to the database.