We now have customers that exceed INT capacity, so we need to change
this to BIGINT in client_metrics_env_variants_daily as well.
Even heavy users only have about 10,000 rows here, so it should be a quick operation.
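For reference, a minimal sketch of the kind of change this implies; exactly which column is being widened is an assumption here:

```sql
-- Sketch only: assumes the counter column is the one overflowing INT.
ALTER TABLE client_metrics_env_variants_daily
    ALTER COLUMN count TYPE BIGINT;
```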
Clearing the onboarding tables, because the data is invalid and we want to start tracking all of this only for new customers.
This migration must be applied after the new logic is implemented.
After adding an index, the time for the new event search on 100k events decreased from 5000ms to 4ms. The improvement comes from the query using an index scan instead of a sequential scan.
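As a hypothetical illustration of the kind of index that lets the planner switch from a sequential scan to an index scan (the actual index and column in this change may differ):

```sql
-- Hypothetical: an index on the column the event search filters and sorts by.
CREATE INDEX idx_events_created_at ON events (created_at);
```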
https://linear.app/unleash/issue/2-2435/create-migration-for-a-new-integration-events-table
Adds a DB migration that creates the `integration_events` table:
- `id`: Auto-incrementing primary key;
- `integration_id`: The id of the respective integration (i.e.
integration configuration);
- `created_at`: Date of insertion;
- `state`: Integration event state, as text. Can be anything we'd like,
but I'm thinking this will be something like:
- Success ✅
- Failed ❌
- SuccessWithErrors ⚠️
- `state_details`: Expands on the previous column with more details, as
text. Examples:
- OK. Status code: 200
- Status code: 429 - Rate limit reached
- No access token provided
- `event`: The whole event object, stored as a JSON blob;
- `details`: JSON blob with details about the integration execution.
Will depend on the integration itself, but for example:
- Webhook: Request body
- Slack App: Message text and an array with all the channels we're
posting to
I think this gives us enough flexibility to cover all present and
(possibly) future integrations, but I'd like to hear your thoughts.
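For illustration, a sketch of the table as described above, using the `integration_events` name; types and constraints are assumptions, not the final migration:

```sql
CREATE TABLE integration_events (
    id BIGSERIAL PRIMARY KEY,            -- auto-incrementing primary key
    integration_id INTEGER NOT NULL,     -- presumably a FK to the integration configuration
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT now(),
    state TEXT NOT NULL,                 -- e.g. success / failed / successWithErrors
    state_details TEXT NOT NULL,         -- e.g. 'OK. Status code: 200'
    event JSONB NOT NULL,                -- the whole event object
    details JSONB NOT NULL               -- integration-specific execution details
);
```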
I'm also really torn on what to call this table:
- `integration_events`: Consistent with the feature name. Addons are now
called integrations, so this would be consistent with the new thing;
- `addon_events`: Consistent with the existing `addons` table.
We'll store hashes for the last 5 passwords, fetch them all for the user wanting to change their password, and make sure the new password does not verify against any of the 5 stored hashes.
Includes some password-related UI/UX improvements and refactors, as well as fixes related to reset-password rate limiting (now handled instead of throwing an unhandled exception) and to token expiration on error.
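A sketch of the password-history storage described above; the table and column names are assumptions, and the actual hash comparison happens in application code:

```sql
-- Hypothetical table holding the hashes of previously used passwords.
CREATE TABLE used_passwords (
    user_id INTEGER NOT NULL REFERENCES users (id) ON DELETE CASCADE,
    password_hash TEXT NOT NULL,
    used_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT now(),
    PRIMARY KEY (user_id, password_hash)
);

-- On password change: fetch the user's stored hashes and verify the new
-- password against each of them in application code, rejecting it on a match.
SELECT password_hash FROM used_passwords WHERE user_id = 42;
```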
---------
Co-authored-by: Nuno Góis <github@nunogois.com>
We are getting questions from engineers asking why they do not see the lifecycle. The same will happen with our customers. Now customers will see the lifecycle component unified across features.
I've tried to use/add the audit info to all events I could see/find.
This makes this PR necessarily huge, because we do store quite a few
events.
I realise it might not be complete yet, but tests
run green, and I think we now have a pattern to follow for other events.
## About the changes
This PR provides a service that allows a scheduled function to run on only a single instance at a time. It's currently not in use, but the tests show how to wrap a function to make it single-instance:
65b7080e05/src/lib/features/scheduler/job-service.test.ts (L26-L32)
The key `'test'` is used to identify the group and most likely should
have the same name as the scheduled job.
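One common way to get this kind of single-instance guarantee on Postgres is a transaction-scoped advisory lock keyed on the job name; a minimal sketch, not necessarily how the job service itself is implemented:

```sql
BEGIN;
-- Only the instance that acquires the lock for the key runs the job this round.
SELECT pg_try_advisory_xact_lock(hashtext('test')) AS acquired;
-- if acquired = true: run the scheduled work inside this transaction
COMMIT;  -- the lock is released automatically when the transaction ends
```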
---------
Co-authored-by: Christopher Kolstad <chriswk@getunleash.io>
This column has not been used for 1.5 years and was replaced by the **archived_at** column, yet people still get confused about why it does not work as its name suggests. Removing this column to reduce technical debt.
SCIM synchronization requires a stable id no matter how many changes are made to username and email (our other unique fields). In addition, exposing internal auto-incremented database ids to an external service (our current id field) feels insecure. Our plan is to create either a uuidv7 or a ulid when SCIM operations are performed against the user, so the external SCIM provisioner has a stable, globally unique id to refer to the users it is modifying.
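A sketch of what that could look like; the column name and the partial unique index are assumptions:

```sql
-- Hypothetical stable external id for SCIM, only set when SCIM touches the user.
ALTER TABLE users ADD COLUMN scim_id TEXT;
CREATE UNIQUE INDEX users_scim_id_uniq_idx ON users (scim_id) WHERE scim_id IS NOT NULL;
```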
Adds a migration to create and fill the `project_metrics_summary_trends` table.
This table is to be used in enterprise to update the metrics data daily per project (after the aggregation of the hourly data).
The driving force for this was performance testing on the executive dashboard.
This will allow us to remove the expensive query that aggregates the data on demand, and will drop the total response time from 2.5s to 125ms (measurements done with 100 projects, 10,000 features, and over 1M rows in `client_metrics_env_daily`).
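A sketch of what the table could look like; the metric columns here are assumptions, not the actual schema:

```sql
CREATE TABLE project_metrics_summary_trends (
    project TEXT NOT NULL,
    date DATE NOT NULL,
    total_yes BIGINT NOT NULL DEFAULT 0,   -- assumed aggregated daily "yes" count
    total_no BIGINT NOT NULL DEFAULT 0,    -- assumed aggregated daily "no" count
    total_apps INTEGER NOT NULL DEFAULT 0,
    total_environments INTEGER NOT NULL DEFAULT 0,
    PRIMARY KEY (project, date)
);
```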
Closes #
[1-2080](https://linear.app/unleash/issue/1-2080/create-the-project-metrics-summary-trends-table)
---------
Signed-off-by: andreas-unleash <andreas@getunleash.ai>
Follow up to: https://github.com/Unleash/unleash/pull/6298
We no longer need this table, since it was superseded by `action_events`
and is no longer used.
I believe it's safe to drop this table right away since the feature is
not being used yet.
https://linear.app/unleash/issue/2-1962/implement-new-action-events-logic
Adds a new `action_set_events` table for the new action events logic.
Even though observable events are technically immutable, we're storing the observable event along with the action set event. This lets us avoid one join, and it also lets us persist action set event information after deleting observable events, if we wish to do so at a later stage.
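A sketch of the shape described above; the column names are assumptions:

```sql
CREATE TABLE action_set_events (
    id BIGSERIAL PRIMARY KEY,
    action_set_id INTEGER NOT NULL,
    observable_event_id BIGINT NOT NULL,
    observable_event JSONB NOT NULL,  -- copied in, so reads need no join and the data
                                      -- survives deletion of the original observable event
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT now()
);
```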
## About the changes
Adds migration for creating table `stat_traffic_usage`.
The table has a composite primary key on day, traffic_group, and status_code_series, plus individual indexes on each of those columns.
`traffic_group` is the grouping of API endpoints for which traffic is counted.
`status_code_series` is the status code series: e.g. 200/202 count as 200, and 304 counts as 300.
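A sketch matching the description; the counter column and the types are assumptions:

```sql
CREATE TABLE stat_traffic_usage (
    day DATE NOT NULL,
    traffic_group TEXT NOT NULL,           -- grouping of API endpoints
    status_code_series INTEGER NOT NULL,   -- 200, 300, ...
    count BIGINT NOT NULL DEFAULT 0,       -- assumed counter column
    PRIMARY KEY (day, traffic_group, status_code_series)
);

CREATE INDEX stat_traffic_usage_day_idx ON stat_traffic_usage (day);
CREATE INDEX stat_traffic_usage_traffic_group_idx ON stat_traffic_usage (traffic_group);
CREATE INDEX stat_traffic_usage_status_code_series_idx ON stat_traffic_usage (status_code_series);
```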
On all pods and instances, we run the same revision update query every second. It is relatively fast once the application has started, and it is the single most frequently run query in Unleash.
Benchmarks:
1. Running pod with existing revisionID:
- old 5.5ms
- new 0.028ms
2. New pod without existing revisionID
- old 9.329ms
- new 0.033ms
This query is getting optimized
7e66a79f9f/src/lib/features/events/event-store.ts (L161)
We have customers with tens or hundreds of thousands of applications, and we have a scheduler running that sets the applications' `announced` field to true. However, every time it runs, it queries the entire table, which is slow and causes database connection acquisition issues. To make it faster, we added a partial index to the table.
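A sketch of such a partial index, assuming the table is `client_applications`; the scheduler can then find the few unannounced rows without scanning the whole table:

```sql
-- Only rows still waiting to be announced are indexed, keeping the index tiny.
CREATE INDEX idx_client_applications_unannounced
    ON client_applications (announced)
    WHERE announced = false;
```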
## About the changes
Resets `created_by_user_id` on events incorrectly marked as -1337 when an actual user has been set in the `created_by` column, to clean up after a bug.
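A sketch of the kind of cleanup query this describes, assuming `created_by` holds the username of the real user:

```sql
UPDATE events e
SET created_by_user_id = u.id
FROM users u
WHERE e.created_by_user_id = -1337
  AND e.created_by = u.username;
```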
## About the changes
This PR replaces the old systemUser (-1) in user-service.ts with the new SYSTEM_USER (-1337) and adds a migration to move events with created_by = -1 to -1337.
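A sketch of the data move, taking the description literally; whether the value lives in the `created_by` or the `created_by_user_id` column is an assumption here:

```sql
UPDATE events
SET created_by_user_id = -1337
WHERE created_by_user_id = -1;
```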
## Discussion points
Does it make sense to do both of these things, or should we skip the migration? How would this behave in a large system with hundreds of thousands of events? Should this be split up?
Follow up of https://github.com/Unleash/unleash/issues/4303
We are adding primary keys to all tables missing them, currently
**role_permission**, **api_token_project**, and **project_stats**.
Adding primary keys resolves the issue with migrations failing during upgrades in replicated database setups.
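One possible shape of the migration, as a sketch; whether the real migration uses surrogate ids or composite keys over existing columns is an assumption:

```sql
ALTER TABLE role_permission ADD COLUMN id SERIAL PRIMARY KEY;
ALTER TABLE api_token_project ADD COLUMN id SERIAL PRIMARY KEY;
ALTER TABLE project_stats ADD COLUMN id SERIAL PRIMARY KEY;
```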
### What
Adds Read and Write permissions for project administration settings
(user access, change request settings, default strategy, other).
### Why
On request from two large customers that wanted our RBAC controls to be more granular, so they can more easily limit the access they grant their users.
## About the changes
This admin token user will help us differentiate actions performed by
the system from actions performed with an admin token.
Events created with an admin token should have the id of this user as the createdByUserId property and the username of the token used as the createdBy property. For example:
```json
{
    "id": 11,
    "type": "pat-created",
    "createdBy": "admin-token",
    "createdAt": "2024-01-16T13:16:27.887Z",
    "createdByUserId": -42,
    "data": {
        "description": "admin-pat",
        "expiresAt": "2024-02-15T13:16:25.586Z",
        "secret": "***",
        "userId": 1
    },
    "preData": null,
    "tags": [],
    "featureName": null,
    "project": null,
    "environment": null
}
```
Updates the system user's email from 'system@getunleash.io' to `null`. We don't have that address registered (and probably don't want it), so we'll leave it empty.
This is a companion PR to
https://github.com/Unleash/unleash/pull/5893. With both of those
merged, the system user in the DB should match the one defined in
`core.ts`
This PR adds a new `reason` column to the change request schedules table
and populates it with the data that is in the `failure_reason` column.
This is the expand phase of the expand/contract pattern. The code in
enterprise will be updated to try and use the new column name, but fall
back to the old one if no value is present.
The old column can be removed later.
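A sketch of the expand phase, assuming the table is named `change_request_schedule`:

```sql
-- Add the new column and backfill it from the old one; the old column is
-- dropped later, in the contract phase.
ALTER TABLE change_request_schedule ADD COLUMN reason TEXT;
UPDATE change_request_schedule SET reason = failure_reason WHERE reason IS NULL;
```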
## About the changes
Creating an incoming webhook with an admin token means we can't
correlate the action with a real user. In this case we should support
null.
## About the changes
Migrations for:
- Adding the column is_system to users
- Inserting the unleash_system_user with id -1337 into users
Also includes `is_system: false` in the activeUsers and activeAccounts where filter.
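A sketch of the two migrations; any values beyond the id and the is_system flag are assumptions:

```sql
ALTER TABLE users ADD COLUMN is_system BOOLEAN NOT NULL DEFAULT false;

INSERT INTO users (id, name, username, is_system)
VALUES (-1337, 'Unleash System', 'unleash_system_user', true);
```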
Tested by running:
```sql
select * into users_pre_check from users where id > -1;
delete from users where id > -1;
```
before starting Unleash, then inspecting the users table after Unleash has started and verifying that an 'admin' user has been created.
---------
Co-authored-by: Christopher Kolstad <chriswk@getunleash.ai>
## About the changes
Replaces #5616
Renamed newly added `created_by` columns to `created_by_user_id` for these tables:
- features
- feature_tag
- feature_strategies
- feature_types
- role_permission
- role_user
- roles
- users
- api_tokens
## About the changes
Adds the column `created_by_user_id` to the `events` table and adds an index for it.
---------
Co-authored-by: Christopher Kolstad <chriswk@getunleash.ai>
As it says on the tin: in an attempt to make all operations in Unleash traceable to an originator, this PR adds created_by to role_permission, which will show which user assigned a permission to a role.
This PR addresses some cleanup related to removing the
useLastSeenRefactor flag:
* Added a fallback last seen to the feature table's last_seen_at column
* Removed the foreign key on environment, since we cannot guarantee that we will get valid data in this field
* Added environments to the cleanup function
* Added a test for cleaning up environments
`EXECUTE FUNCTION` was introduced in Postgres v11; in Postgres v10 the syntax was `EXECUTE PROCEDURE`. This fix changes the syntax to `EXECUTE PROCEDURE`, which is perfectly fine since our function does not return anything.
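For illustration, the Postgres 10-compatible trigger syntax looks like this (the trigger and function names are only placeholders):

```sql
-- Assumes example_trigger_function() already exists.
CREATE TRIGGER example_insert_trigger
AFTER INSERT ON events
FOR EACH ROW
EXECUTE PROCEDURE example_trigger_function();
```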
This PR adds a db table for CR schedules. The table has two columns:
1. `change_request` :: This acts as both a foreign key and the primary key for this table.
2. `scheduled_at` :: When the change is scheduled to be applied.
We could use a separate ID column for these rows and put a `unique`
constraint on the `change_request` FK, but I don't think that adds any
more value. However, I'm happy to hear other thoughts around it.
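A sketch of the table as described; the exact table names and the cascade behaviour are assumptions:

```sql
CREATE TABLE change_request_schedule (
    change_request INTEGER PRIMARY KEY REFERENCES change_requests (id) ON DELETE CASCADE,
    scheduled_at TIMESTAMP WITH TIME ZONE NOT NULL
);
```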
https://linear.app/unleash/issue/2-1531/rename-message-banners-to-banners
This renames "message banners" to "banners".
I also added support for external banners coming from a `banner` flag instead of only the `messageBanner` flag, so we can eventually migrate to the new one if we want.
As part of more telemetry on the usage of Unleash, this PR adds a new `stat_` prefixed table as well as a trigger on the events table that fires on each insert to increment a counter per environment per day.
The trigger fires on every insert into the events table, but filters and only increments the counter for events that actually have the environment set (there are events, like user-created, that do not relate to a specific environment).
I'm a bit wary of this, but since we truncate down to one row per (day, environment) combo, finding the conflict and incrementing shouldn't take too long here.
@ivarconr was it something like this you were considering?
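For concreteness, a rough sketch of the kind of counter table and insert trigger described above; the table, function, and column names are assumptions, not the actual migration:

```sql
CREATE TABLE stat_environment_updates (
    day DATE NOT NULL,
    environment TEXT NOT NULL,
    updates BIGINT NOT NULL DEFAULT 0,
    PRIMARY KEY (day, environment)
);

CREATE OR REPLACE FUNCTION unleash_increment_stat_environment_updates()
RETURNS trigger AS $$
BEGIN
    -- Only count events that are tied to a specific environment.
    IF NEW.environment IS NOT NULL THEN
        INSERT INTO stat_environment_updates (day, environment, updates)
        VALUES (NEW.created_at::date, NEW.environment, 1)
        ON CONFLICT (day, environment)
        DO UPDATE SET updates = stat_environment_updates.updates + 1;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER stat_environment_updates_trigger
AFTER INSERT ON events
FOR EACH ROW
EXECUTE PROCEDURE unleash_increment_stat_environment_updates();
```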
## About the changes
Adds a partial index on events by `announced`. This should help avoid a `Seq Scan on events` when the majority of events have announced=true.
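A sketch of such a partial index; only the small announced=false subset gets indexed:

```sql
CREATE INDEX idx_events_unannounced ON events (announced) WHERE announced = false;
```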
---
Co-authored-by: Ivar Østhus <ivar@getunleash.io>
Co-authored-by: Gard Rimestad <gard@getunleash.io>