This is still raw and experimental.
We started out pulling deleted features from the event payload. Now we
issue a full query against the read model instead.
Co-Author: @FredrikOseberg
This PR refactors the method that listens for revision changes:
- Now supports all environments
- Removed unnecessary populate cache method
# Discussion point
In the listen method, should we implement logic to look into which
environments the events touched? By doing this we would:
- Reduce cache size
- Save some memory/CPU when the environment is not initialized in the
cache, because we could skip the DB calls (see the sketch below).
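For illustration, a rough sketch of what that could look like. The event shape, cache, and `refreshEnvironment` helper here are hypothetical, not the actual code:

```typescript
// Hypothetical sketch: only refresh environments that (a) the revision's
// events actually touched and (b) are already present in the cache.
type RevisionEvent = { environment?: string };

async function onRevisionChange(
    events: RevisionEvent[],
    cache: Map<string, unknown>,
    refreshEnvironment: (environment: string) => Promise<void>,
): Promise<void> {
    const touched = new Set(
        events
            .map((event) => event.environment)
            .filter((environment): environment is string => Boolean(environment)),
    );
    for (const environment of touched) {
        // Environment not initialized in the cache: skip the DB call entirely.
        if (!cache.has(environment)) continue;
        await refreshEnvironment(environment);
    }
}
```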
This is based on the existing client feature toggle store, but with some
alterations.
1. We support all of the querying it did before.
2. We added support for filtering by **featureNames** (see the sketch
after this list).
3. We simplified the logic, so we do not have admin API logic:
- no return of tags
- no return of last seen
- no return of favorites
- no playground logic
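For illustration, the query options could look roughly like this; the interface name and the fields other than `featureNames` are assumptions based on the existing store, not the actual code:

```typescript
// Illustrative shape of the query options; the real interface in the new
// read model may differ.
interface ClientFeatureQuery {
    environment: string;
    project?: string[];
    namePrefix?: string;
    featureNames?: string[]; // new in this PR: fetch only these flags
}

// Example: only fetch the flags a client actually asked about.
const query: ClientFeatureQuery = {
    environment: 'production',
    featureNames: ['new-checkout', 'beta-banner'],
};
```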
The next PR will try to include the revision ID.
This does not change existing logic.
We are creating a new endpoint, which is guarded behind a flag.
---------
Co-authored-by: Simon Hornby <liquidwicked64@gmail.com>
Co-authored-by: FredrikOseberg <fredrik.no@gmail.com>
Added more tests around specific plans. Also added a snapshot as per our
conversation @gastonfournier, but I'm unsure how much value it will give
because it seems that the tests should already catch this using
respondWithValidation and the OpenAPI schema. The problem here is that an
empty array is a valid state, so there was no reason for the schema to
break the tests.
From 13 seconds to 0.1 seconds.
1. Joining 1 million events to projects/features is slow. **Solved by
using a CTE** (rough sketch after this list).
2. Running grouping on 1 million rows is slow. **Solved by adding an
index.**
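As a rough sketch of the shape of the fix, assuming a Knex-style query builder; table and column names are illustrative, not the real schema:

```typescript
import type { Knex } from 'knex';

// Illustrative only: table/column names are assumptions, not the real schema.
export async function eventCountsPerProject(db: Knex) {
    // 1. Narrow the events down inside a CTE first, then join the much
    //    smaller result set instead of joining ~1M raw event rows directly.
    return db
        .with('relevant_events', (qb) =>
            qb
                .select('feature_name')
                .from('events')
                .whereNotNull('feature_name'),
        )
        .select('features.project')
        .count('* as event_count')
        .from('relevant_events')
        .join('features', 'features.name', 'relevant_events.feature_name')
        .groupBy('features.project');
}

// 2. Support the grouping/join with an index, e.g. in a migration:
//    CREATE INDEX events_feature_name_idx ON events (feature_name);
```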
We need this PR to correctly set up CORS for streaming-related endpoints
in our spike and add the flag to our types.
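For context, a minimal sketch of CORS on a server-sent-events endpoint, assuming Express and the `cors` middleware; the paths, origin, and handler below are placeholders, not the actual code in this PR:

```typescript
import cors from 'cors';
import express from 'express';

const app = express();

// Streaming (server-sent events) responses still need CORS headers on the
// GET request itself, plus credentials if the client authenticates cross-origin.
app.use(
    '/api/streaming',
    cors({ origin: 'https://app.example.com', credentials: true }),
);

app.get('/api/streaming/events', (_req, res) => {
    res.setHeader('Content-Type', 'text/event-stream');
    res.setHeader('Cache-Control', 'no-cache');
    res.flushHeaders();
    // ...write `data: ...\n\n` frames as events arrive
});
```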
---------
Co-authored-by: kwasniew <kwasniewski.mateusz@gmail.com>
This PR removes all references to the `featuresExportImport` flag.
The flag was introduced in [PR
#3411](https://github.com/Unleash/unleash/pull/3411) on March 29th 2023,
and the flag was archived on April 3rd. The flag has always defaulted to
true.
We've looked at the project that introduced the flag and have spoken to CS about it: we can find no reason to keep the flag around, so we'll remove it now.
## About the changes
### Summary
- Add the `PROJECT_ARCHIVED` event to `EVENT_MAP` (a rough sketch follows
after this list)
- Add a test case for `PROJECT_ARCHIVED` event formatting
- Add the `PROJECT_ARCHIVED` event to the list users choose from when
selecting which events to send to Slack
- Fix the Slack integration documentation by adding `PROJECT_ARCHIVED`
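Roughly, the new mapping looks like the sketch below; the actual `EVENT_MAP` shape and message template in the integration code may differ, and the base URL is a placeholder:

```typescript
// Illustrative only; the real EVENT_MAP and formatter live in the
// integration code and may be shaped differently.
const PROJECT_ARCHIVED = 'project-archived';

const EVENT_MAP: Record<string, (event: { project?: string; createdBy?: string }) => string> = {
    [PROJECT_ARCHIVED]: (event) =>
        `*${event.createdBy}* archived project *${event.project}* ` +
        `(<https://unleash.example.com/projects-archive|view archived projects>)`,
};
```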
### Example
The example message looks like the image below. I covered my email with
a black rectangle to protect my privacy 😄
The link points to `/projects-archive`, where archived projects are listed.
<img width="529" alt="Slack message example"
src="https://github.com/user-attachments/assets/938c639f-f04a-49af-9b4a-4632cdea9ca7">
## Discussion points
I considered why Unleash didn't already send the `PROJECT_ARCHIVED`
event to the Slack integration.
One thing I assumed was that it is impossible to create a new project
with the open-source version, which would mean the event is only relevant
to the enterprise plan. However, the
[documentation](https://docs.getunleash.io/reference/integrations/slack-app#events)
explains that users can send `PROJECT_CREATED` and `PROJECT_DELETED`
events to Slack, which are also available only in the enterprise plan, so
it seems we should cover all worthwhile events.
I think it is reasonable to add the `PROJECT_ARCHIVED` event to the Slack
integration because users, especially operators, need to track each step,
`_CREATED`, `_ARCHIVED`, and `_DELETED`, separately through Slack.
We want to prevent our users from defining multiple templates with the
same name, so this adds a unique index on the name column for rows where
the discriminator is `template`.
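In Postgres terms this is a partial unique index. A sketch of such a migration, assuming a db-migrate style setup; the table and index names are placeholders, not necessarily the real ones:

```typescript
// Sketch only; `release_plan_definitions` is a placeholder table name.
type MigrationDb = { runSql: (sql: string, cb: (err?: Error) => void) => void };
type Callback = (err?: Error) => void;

export function up(db: MigrationDb, cb: Callback): void {
    db.runSql(
        `CREATE UNIQUE INDEX IF NOT EXISTS unique_template_name
           ON release_plan_definitions (name)
           WHERE discriminator = 'template';`,
        cb,
    );
}

export function down(db: MigrationDb, cb: Callback): void {
    db.runSql('DROP INDEX IF EXISTS unique_template_name;', cb);
}
```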
This PR fixes three things that were wrong with the lifecycle summary
count query:
1. When counting the number of flags in each stage, it does not take
into account whether a flag has moved out of that stage. So if you have
a flag that's gone through initial -> pre-live -> live, it'll be counted
for each one of those steps, not just the last one.
2. Some flags that have been archived don't have the corresponding
archived state row in the db. This causes them to count towards their
other recorded lifecycle stages, even when they shouldn't. This is
related to the previous point, but slightly different. The fix is to
cross-reference the features table's archived_at column to make sure the
flag hasn't been archived.
3. The archived number should probably be all flags ever archived in the
project, regardless of whether they were archived before or after
feature lifecycles existed. So we should check the features table's
archived_at column for that count instead (a rough sketch follows after
this list).
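A rough sketch of how those three constraints can show up in the count queries, assuming Knex; table and column names are taken from the description above, not necessarily the real schema:

```typescript
import type { Knex } from 'knex';

// Illustrative sketch; table/column names follow the PR description, not
// necessarily the real schema.
export async function lifecycleStageCounts(db: Knex, project: string) {
    // Count each flag only in its *latest* lifecycle stage, and only if the
    // flag has not been archived (features.archived_at is null).
    return db
        .with('latest_stage', (qb) =>
            qb
                .select('feature')
                .max('created_at as latest_at')
                .from('feature_lifecycles')
                .groupBy('feature'),
        )
        .select('fl.stage')
        .count('* as flags')
        .from('feature_lifecycles as fl')
        .join('latest_stage as ls', function () {
            this.on('ls.feature', '=', 'fl.feature').andOn(
                'ls.latest_at',
                '=',
                'fl.created_at',
            );
        })
        .join('features as f', 'f.name', 'fl.feature')
        .where('f.project', project)
        .whereNull('f.archived_at')
        .groupBy('fl.stage');
}

export async function archivedCount(db: Knex, project: string) {
    // Archived = every flag in the project with archived_at set, regardless
    // of whether it has any lifecycle rows.
    return db('features')
        .count('* as flags')
        .where('project', project)
        .whereNotNull('archived_at');
}
```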
This PR:
- conditionally deprecates the project health report endpoint. We only
use this for the technical debt dashboard that we're removing. Now it's
deprecated once you turn the simplify flag on.
- extracts the calculate project health function into the project health
functions file in the appropriate domain folder. That same function is
now shared by the project health service and the project status service.
For the last point, it's a little outside of how we normally do things,
because it takes its stores as arguments, but it slots in well in that
file. An option would be to make a project health read model and then
wire that up in a couple places. It's more code, but probably closer to
how we do things in general. That said, I wanted to suggest this because
it's quick and easy (why do much work when little work do trick?).
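As a sketch of the "function takes its stores as arguments" shape; the store interface and the health formula here are simplified stand-ins, not the actual implementation:

```typescript
// Rough sketch only: store interface and formula are simplified.
interface FeatureToggleStore {
    getAll(query: { project: string; archived: boolean }): Promise<{ stale: boolean }[]>;
}

export async function calculateHealthRating(
    projectId: string,
    featureToggleStore: FeatureToggleStore,
): Promise<number> {
    const flags = await featureToggleStore.getAll({
        project: projectId,
        archived: false,
    });
    if (flags.length === 0) return 100;
    const stale = flags.filter((flag) => flag.stale).length;
    return Math.round(100 - (stale / flags.length) * 100);
}

// Both the project health service and the project status service can call
// this with whatever stores they already hold, without depending on each other.
```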
This PR updates the project status service (and schemas and UI) to use
the project's current health instead of the 4-week average.
I nabbed the `calculateHealthRating` from
`src/lib/services/project-health-service.ts` instead of relying on the
service itself, because that service relies on the project service,
which relies on pretty much everything in the entire system.
However, I think we can split the health service into a service that
*does* need the project service (which is used for 1 of 3 methods) and a
service (or read model) that doesn't. We could then rely on the second
one for this service without too much overhead. Or we could extract the
`calculateHealthRating` into a shared function that takes its stores as
arguments. ... but I suggest doing that in a follow-up PR.
Because the calculation has been tested elsewhere (especially if we
rely on a service / shared function for it), I've simplified the tests
to just verify that it's present.
I've changed the schema's `averageHealth` into an object in case we want
to include average health etc. in the future, but this is up for debate.
This PR fixes the counting of unhealthy flags for the project status
page. The issue was that we were looking for `archived = false`, but we
don't set that flag in the db anymore. Instead, we set the `archived_at`
date, which should be null if the flag is unarchived.
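In query terms, the fix is roughly the following; Knex-style and illustrative only, since the real store code differs:

```typescript
import type { Knex } from 'knex';

// Illustrative only: the point is replacing `archived = false` with a
// null check on `archived_at`.
export function countUnhealthyFlags(db: Knex, projectId: string) {
    return db('features')
        .count('* as count')
        .where('project', projectId)
        .whereNull('archived_at') // was: .where('archived', false)
        .andWhere((qb) =>
            qb.where('stale', true).orWhere('potentially_stale', true),
        );
}
```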
**This migration introduces a query that calculates the licensed user
counts and inserts them into the licensed_users table.**
**The logic ensures that:**
1. All users created up to a specific date are included as active users
until they are explicitly deleted.
2. Deleted users are excluded after their deletion date, except when
their deletion date falls within the last 30 days or before their
creation date.
3. The migration avoids duplicating data by ensuring records are only
inserted if they don’t already exist in the licensed_users table.
**Logic Breakdown:**
**Identify User Events (user_events):** Extracts email addresses from
user-related events (user-created and user-deleted) and tracks the type
and timestamp of the event. This step ensures the ability to
differentiate between user creation and deletion activities.
**Generate a Date Range (dates):** Creates a continuous range of dates
spanning from the earliest recorded event up to the current date. This
ensures we analyze every date, even those without events.
**Determine Active Users (active_emails):** Links dates with user events
to calculate the status of each email address (active or deleted) on a
given day. This step handles:
- The user's creation date.
- The user's deletion date (if applicable).
**Calculate Daily Active User Counts (result):**
For each date, count the distinct email addresses that are active, based
on these conditions:
- The user has no deletion date.
- The user's deletion date is within the last 30 days relative to the
current date.
- The user's creation date is before the deletion date.
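Putting the breakdown above together, a simplified sketch of the CTE chain, wrapped in a db-migrate style migration; the event payload field (`data ->> 'email'`) and the `licensed_users (date, count)` columns are assumptions, not the real schema:

```typescript
// Simplified paraphrase of the described logic, not the actual migration SQL.
type MigrationDb = { runSql: (sql: string, cb: (err?: Error) => void) => void };
type Callback = (err?: Error) => void;

export function up(db: MigrationDb, cb: Callback): void {
    db.runSql(
        `
        WITH user_events AS (
            -- 1. extract creation/deletion events per email
            SELECT data ->> 'email' AS email, type, created_at
            FROM events
            WHERE type IN ('user-created', 'user-deleted')
        ), dates AS (
            -- 2. one row per day from the earliest event until today
            SELECT generate_series(
                (SELECT min(created_at)::date FROM user_events),
                CURRENT_DATE,
                interval '1 day'
            )::date AS day
        ), active_emails AS (
            -- 3. per day and email, the relevant creation/deletion timestamps
            SELECT d.day,
                   ue.email,
                   min(ue.created_at) FILTER (WHERE ue.type = 'user-created') AS created_at,
                   max(ue.created_at) FILTER (WHERE ue.type = 'user-deleted') AS deleted_at
            FROM dates d
            JOIN user_events ue ON ue.created_at::date <= d.day
            GROUP BY d.day, ue.email
        ), result AS (
            -- 4. count emails still considered licensed on each day
            SELECT day, count(DISTINCT email) AS user_count
            FROM active_emails
            WHERE deleted_at IS NULL
               OR deleted_at >= CURRENT_DATE - interval '30 days'
               OR deleted_at < created_at
            GROUP BY day
        )
        INSERT INTO licensed_users (date, count)
        SELECT r.day, r.user_count
        FROM result r
        WHERE NOT EXISTS (
            SELECT 1 FROM licensed_users lu WHERE lu.date = r.day
        );
        `,
        cb,
    );
}

export function down(_db: MigrationDb, cb: Callback): void {
    // Data backfill only; nothing structural to undo.
    cb();
}
```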
This PR adds the option to select potentially stale flags from the UI.
It also updates the name we use for parsing from the API: instead of
`potentiallyStale` we use `potentially-stale`. This follows the
precedent set by "kill switch" (which we send as 'kill-switch'), the
only other multi-word option that I could find in our filters.
This PR adds support for the `potentiallyStale` value in the feature
search API. The value is added as a third option for `state` (in
addition to `stale` and `active`). Potentially stale is a subset of
active flags, so stale flags are never considered potentially stale,
even if they have the flag set in the db.
Because potentially stale is a separate column in the db, this
complicates the query a bit. As such, I've created a specialized
handling function in the feature search store: if the query doesn't
include `potentiallyStale`, handle it as we did before (the mapping has
just been moved). If the query *does* contain potentially stale, though,
the handling is quite a bit more involved because we need to check
multiple different columns against each other.
In essence, it's based on this logic:
when you’re searching for potentially stale flags, you should only get flags that are active and marked as potentially stale. You should not get stale flags.
This can cause some confusion, because in the db, we don’t clear the potentially stale status when we mark a flag as stale, so we can get flags that are both stale and potentially stale.
However, as a user, if you’re looking for potentially stale flags, I’d be surprised to also get (only some) stale flags, because if a flag is stale, it’s definitely stale, not potentially stale.
This leads us to these six different outcomes we need to handle when your search includes potentially stale and stale or active:
1. You filter for “potentially stale” flags only. The API will give you only flags that are active and marked as potentially stale. You will not get stale flags.
2. You filter only for flags that are not potentially stale. You will get all flags that are active and not potentially stale and all stale flags.
3. You search for “is any of stale, potentially stale”. This is our “unhealthy flags” metric. You get all stale flags and all flags that are active and potentially stale.
4. You search for “is none of stale, potentially stale”: This gives you all flags that are active and not potentially stale. Healthy flags, if you will.
5. “is any of active, potentially stale”: you get all active flags. Because we treat potentially stale as a subset of active, this is the same as “is active”.
6. “is none of active, potentially stale”: you get all stale flags. As in the previous point, this is the same as “is not active”.
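To make that concrete, the specialized handling boils down to something like the sketch below (Knex-style, not the actual store code). The key is that "potentially stale" only matches rows where the flag is active (`stale = false`) *and* `potentially_stale = true`, so all six outcomes above fall out of the same matcher:

```typescript
import type { Knex } from 'knex';

// Sketch only; the real feature search store differs in detail.
export function applyStateFilter(
    query: Knex.QueryBuilder,
    operator: 'IS_ANY_OF' | 'IS_NONE_OF',
    values: Array<'active' | 'stale' | 'potentially-stale'>,
): Knex.QueryBuilder {
    const matchesAnyValue = (qb: Knex.QueryBuilder) => {
        if (values.includes('stale')) qb.orWhere('features.stale', true);
        if (values.includes('active')) qb.orWhere('features.stale', false);
        if (values.includes('potentially-stale'))
            // potentially stale = active AND flagged potentially stale
            qb.orWhere((inner) =>
                inner
                    .where('features.stale', false)
                    .where('features.potentially_stale', true),
            );
    };

    return operator === 'IS_ANY_OF'
        ? query.where(matchesAnyValue)
        : query.whereNot(matchesAnyValue);
}
```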
This change adds a db migration to make the potentially_stale column
non-nullable. It'll set any NULL values to `false`.
The down-migration makes the column nullable again.
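A rough sketch of such a migration pair, assuming a db-migrate style setup and that the column lives on the `features` table; the real migration file may differ:

```typescript
type MigrationDb = { runSql: (sql: string, cb: (err?: Error) => void) => void };
type Callback = (err?: Error) => void;

export function up(db: MigrationDb, cb: Callback): void {
    db.runSql(
        `
        UPDATE features SET potentially_stale = false WHERE potentially_stale IS NULL;
        ALTER TABLE features ALTER COLUMN potentially_stale SET NOT NULL;
        `,
        cb,
    );
}

export function down(db: MigrationDb, cb: Callback): void {
    db.runSql(
        'ALTER TABLE features ALTER COLUMN potentially_stale DROP NOT NULL;',
        cb,
    );
}
```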