Changes to migration file `20220603081324-add-archive-at-to-feature-toggle.js`:
1. The migration script sets the `features` table's `archived_at` column
to the latest date instead of taking the date from the `events` table.
2. It mishandles the most recently archived toggle when that toggle was
later revived: after the migration, the toggle is marked as archived
again.
3. Because of the buggy reference to the outer join's `events` table
`e`, the migration takes a very long time to run.
This PR fixes all the above issues.
## About the changes
We faced an issue while migrating unleash-server from 4.8.2 to 6.6.0:
one of our toggles had been archived and then revived before the
migration, and it was the latest toggle in terms of archive date, yet
after the migration ran it was left in archived status.
Upon further debugging and running the SQL from the features table
migration file manually, we noticed that the `feature_name` check should
not reference the outer join's `events` table `e`, and that a check on
the `features` table's own archived flag should be added instead.
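For illustration, a minimal sketch of the corrected query shape,
assuming the standard Unleash `features`/`events` schema (the actual
migration SQL may differ):

```sql
-- Hedged sketch, not the literal migration SQL: take the archive date
-- from the events table, correlate the feature_name check on the
-- features row, and only touch toggles that features itself marks as
-- archived.
UPDATE features f
SET archived_at = (
    SELECT max(e.created_at)
    FROM events e
    WHERE e.type = 'feature-archived'
      AND e.feature_name = f.name
)
WHERE f.archived = true;
```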
### Important files
The change is only in this file:
[20220603081324-add-archive-at-to-feature-toggle.js](https://github.com/Unleash/unleash/pull/9518/files#diff-b91c299b96edc46ca3a1963bf54966aa777c9fa107f3bd8b45f5fb54dc57460e)
## Discussion points
Let me know if any further details are required.
From 13 seconds to 0.1 seconds.
1. Joining 1 million events to projects/features is slow. **Solved by
using a CTE.**
2. Running grouping on 1 million rows is slow. **Solved by adding an
index.** (See the sketch below.)
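Purely as an illustration of the two fixes (the table, column, and
index names here are assumptions), the rewritten query has roughly this
shape:

```sql
-- Fix 2: an index supporting the grouping lets the aggregation use an
-- index scan (hypothetical index name and column).
CREATE INDEX IF NOT EXISTS idx_events_feature_name ON events (feature_name);

-- Fix 1: a CTE pre-aggregates the ~1 million events once, instead of
-- joining the raw events table against projects/features row by row.
WITH latest_events AS (
    SELECT feature_name, max(created_at) AS last_seen
    FROM events
    GROUP BY feature_name
)
SELECT f.project, f.name, le.last_seen
FROM features f
JOIN latest_events le ON le.feature_name = f.name;
```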
We want to prevent our users from defining multiple templates with the
same name, so this adds a unique index on the `name` column for rows
where the discriminator is `template`.
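A hedged sketch of the partial unique index (the table and index names
are assumptions; the point is that uniqueness is only enforced for
template rows):

```sql
-- Hypothetical table name; the WHERE clause limits uniqueness to rows
-- whose discriminator is 'template'.
CREATE UNIQUE INDEX release_plan_definitions_template_name_uniq
    ON release_plan_definitions (name)
    WHERE discriminator = 'template';
```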
**This migration introduces a query that calculates the licensed user
counts and inserts them into the licensed_users table.**
**The logic ensures that:**
1. All users created up to a specific date are included as active users
until they are explicitly deleted.
2. Deleted users are excluded after their deletion date, except when
their deletion date falls within the last 30 days or before their
creation date.
3. The migration avoids duplicating data by ensuring records are only
inserted if they don’t already exist in the licensed_users table.
**Logic Breakdown:**
**Identify User Events (user_events):** Extracts email addresses from
user-related events (user-created and user-deleted) and tracks the type
and timestamp of the event. This step ensures the ability to
differentiate between user creation and deletion activities.
**Generate a Date Range (dates):** Creates a continuous range of dates
spanning from the earliest recorded event up to the current date. This
ensures we analyze every date, even those without events.
**Determine Active Users (active_emails):** Links dates with user events
to calculate the status of each email address (active or deleted) on a
given day. This step handles:
- The user's creation date.
- The user's deletion date (if applicable).
**Calculate Daily Active User Counts (result):**
For each date, counts the distinct email addresses that are active based
on the conditions:
- The user has no deletion date.
- The user's deletion date falls within the last 30 days relative to
the current date.
- The user's deletion date is before their creation date (i.e., the
deletion refers to an earlier account with the same email). A sketch of
the full query follows this list.
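Putting the four steps together, a hedged sketch of the CTE chain
(table names, JSON expressions, and the exact boolean conditions are
approximations of the logic described above, not the literal migration
SQL):

```sql
WITH user_events AS (
    -- Step 1: pull creation/deletion events per email address.
    SELECT data ->> 'email' AS email, type, created_at
    FROM events
    WHERE type IN ('user-created', 'user-deleted')
),
dates AS (
    -- Step 2: one row per day, from the earliest event to today.
    SELECT generate_series(
        (SELECT min(created_at)::date FROM user_events),
        CURRENT_DATE,
        interval '1 day'
    )::date AS day
),
active_emails AS (
    -- Step 3: for each day, each email's creation/deletion status so far.
    SELECT d.day,
           ue.email,
           min(ue.created_at) FILTER (WHERE ue.type = 'user-created') AS created_at,
           max(ue.created_at) FILTER (WHERE ue.type = 'user-deleted') AS deleted_at
    FROM dates d
    JOIN user_events ue ON ue.created_at::date <= d.day
    GROUP BY d.day, ue.email
),
result AS (
    -- Step 4: count emails still considered active on each day.
    SELECT day, count(DISTINCT email) AS user_count
    FROM active_emails
    WHERE deleted_at IS NULL
       OR deleted_at >= CURRENT_DATE - interval '30 days'
       OR deleted_at < created_at
    GROUP BY day
)
INSERT INTO licensed_users (date, count)
SELECT r.day, r.user_count
FROM result r
WHERE NOT EXISTS (            -- avoid duplicating rows that already exist
    SELECT 1 FROM licensed_users lu WHERE lu.date = r.day
);
```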
This change adds a DB migration to make the `potentially_stale` column
non-nullable, setting any NULL values to `false`. The down-migration
makes the column nullable again.
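A minimal sketch of both directions, assuming the column lives on the
`features` table:

```sql
-- Up: backfill NULLs, then enforce NOT NULL.
UPDATE features SET potentially_stale = false WHERE potentially_stale IS NULL;
ALTER TABLE features ALTER COLUMN potentially_stale SET NOT NULL;

-- Down: make the column nullable again.
ALTER TABLE features ALTER COLUMN potentially_stale DROP NOT NULL;
```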
Adds an `email_hash` column to the `users` table.
We will update all existing users to have a hashed email, and all new
users will also get the hash.
We are fine with using md5 because we just need uniqueness; emails are
stored in the `events` table anyway, so this is not sensitive data.
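A minimal sketch of the up-migration, using Postgres' built-in `md5()`
for the backfill:

```sql
ALTER TABLE users ADD COLUMN IF NOT EXISTS email_hash text;

-- Backfill existing users; new users get the hash from application code.
UPDATE users SET email_hash = md5(email) WHERE email IS NOT NULL;
```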
We now have customers that exceed INT capacity, so we need to change
this column to BIGINT in `client_metrics_env_variants_daily` as well.
Even heavy users only have about 10,000 rows here, so it should be a
quick operation.
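A sketch of the change; the counter column name is an assumption:

```sql
-- Small table (~10k rows even for heavy users), so the table rewrite
-- forced by the type change should be quick.
ALTER TABLE client_metrics_env_variants_daily
    ALTER COLUMN count TYPE bigint;
```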
Clears the onboarding tables, because the existing data is invalid and
we want to start tracking all of this only for new customers.
This migration must be applied after the new logic is implemented.
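For illustration only (the onboarding table names here are assumptions),
the migration boils down to emptying the tables:

```sql
-- Hypothetical table names: drop the invalid rows so tracking starts
-- fresh for new customers only.
DELETE FROM onboarding_events_project;
DELETE FROM onboarding_events_instance;
```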
After adding an index, the time for the new event search on 100k events
decreased from 5000ms to 4ms. This improvement comes from the query
using an index scan instead of a sequential scan.
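As a hypothetical illustration of the kind of index involved (the
actual definition depends on the search's filter and sort columns):

```sql
-- An index matching the search predicate lets the planner choose an
-- index scan over a sequential scan of all 100k rows.
CREATE INDEX IF NOT EXISTS idx_events_created_at ON events (created_at DESC);
```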
https://linear.app/unleash/issue/2-2435/create-migration-for-a-new-integration-events-table
Adds a DB migration that creates the `integration_events` table:
- `id`: Auto-incrementing primary key;
- `integration_id`: The id of the respective integration (i.e.
integration configuration);
- `created_at`: Date of insertion;
- `state`: Integration event state, as text. Can be anything we'd like,
but I'm thinking this will be something like:
- Success ✅
- Failed ❌
- SuccessWithErrors ⚠️
- `state_details`: Expands on the previous column with more details, as
text. Examples:
- OK. Status code: 200
- Status code: 429 - Rate limit reached
- No access token provided
- `event`: The whole event object, stored as a JSON blob;
- `details`: JSON blob with details about the integration execution.
Will depend on the integration itself, but for example:
- Webhook: Request body
- Slack App: Message text and an array with all the channels we're
posting to
I think this gives us enough flexibility to cover all present and
(possibly) future integrations, but I'd like to hear your thoughts.
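For concreteness, a sketch of the corresponding table definition
(column types and the foreign key are assumptions where the list above
doesn't state them):

```sql
CREATE TABLE integration_events (
    id bigserial PRIMARY KEY,         -- auto-incrementing primary key
    integration_id integer NOT NULL,  -- assumption: references the addons table
    created_at timestamp with time zone NOT NULL DEFAULT now(),
    state text NOT NULL,              -- e.g. success / failed / successWithErrors
    state_details text NOT NULL,      -- e.g. 'Status code: 429 - Rate limit reached'
    event jsonb NOT NULL,             -- the whole event object
    details jsonb NOT NULL            -- integration-specific execution details
);
```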
I'm also really torn on what to call this table:
- `integration_events`: Consistent with the feature name. Addons are now
called integrations, so this would be consistent with the new thing;
- `addon_events`: Consistent with the existing `addons` table.
We'll store hashes for the last 5 passwords, fetch them all for the user
wanting to change their password, and make sure the password does not
verify against any of the 5 stored hashes.
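One possible shape for the storage side, as a hedged sketch (table and
column names are assumptions):

```sql
CREATE TABLE used_passwords (
    user_id integer NOT NULL REFERENCES users (id) ON DELETE CASCADE,
    password_hash text NOT NULL,
    used_at timestamp with time zone NOT NULL DEFAULT now(),
    PRIMARY KEY (user_id, password_hash)
);
```

On password change, the application would fetch the five most recent
rows for the user and verify the candidate password against each stored
hash before accepting it.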
Includes some password-related UI/UX improvements and refactors. Also
includes fixes related to reset-password rate limiting (returning a
rate-limit response instead of an unhandled exception) and to token
expiration on error.
---------
Co-authored-by: Nuno Góis <github@nunogois.com>