This test was flaky once an hour because subtracting 3 minutes (`subMinutes`) made the timestamp fall into the wrong bucket when tests were run exactly on, or just a few minutes after, the hour.
Also, the databases were created with the system clock. I changed this to be explicitly UTC.
As part of the preparation for ESM and Node/TSC updates, this PR makes Unleash build with `strictNullChecks` set to `true`, since that's what's in our tsconfig file. Accordingly, it also removes the `--strictNullChecks false` flag from the compile tasks in package.json.
TL;DR - Clean up your code rather than turning off compiler security
features :)
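For illustration, this is the kind of code `strictNullChecks` flags (a hypothetical snippet, not taken from the Unleash codebase):

```typescript
// With strictNullChecks: true, the compiler forces us to handle undefined.
function findToken(tokens: string[], name: string): string {
    const token = tokens.find((t) => t === name);
    // Returning `token` directly is now an error:
    // Type 'string | undefined' is not assignable to type 'string'.
    if (token === undefined) {
        throw new Error(`Token ${name} not found`);
    }
    return token;
}
```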
## About the changes
Some automation may keep some data up-to-date (e.g. segments). These updates sometimes don't produce any actual change, but we still store the events in the event log and trigger reactions to them.
Arguably, this could be done in each service's domain logic, but this seems like a pretty straightforward solution: when both preData and data are provided, we can compare them to tell whether a change actually happened. Events that lack preData or data are treated as before.
Tests were added to validate that we don't break other events.
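A minimal sketch of the idea (the `IEvent` shape and helper name are illustrative, not the actual Unleash code):

```typescript
interface IEvent {
    type: string;
    preData?: unknown;
    data?: unknown;
}

// Only when both sides are present can we tell that nothing changed;
// events missing preData or data are stored as before. JSON comparison
// stands in for whatever deep-equality check the real code uses.
function isNoOpEvent(event: IEvent): boolean {
    if (event.preData === undefined || event.data === undefined) {
        return false;
    }
    return JSON.stringify(event.preData) === JSON.stringify(event.data);
}
```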
## About the changes
Based on the first hypothesis from
https://github.com/Unleash/unleash/pull/9264, I decided to find an
alternative way of initializing the DB, mainly trying to run migrations
only once and removing that from the actual test run.
I found an interesting option in [Postgres template databases](https://www.postgresql.org/docs/current/manage-ag-templatedbs.html), combined with a Jest global initializer.
### Changes on how we use DBs for testing
Previously, we were relying on a single DB with multiple schemas to
isolate tests, but each schema was empty and required migrations or
custom DB initialization scripts.
With this method, we don't need different schema names (apparently there's no templating for schemas); we can use new databases instead, and we can eliminate the custom initialization code.
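Roughly, the global setup migrates one template database and each test then gets a clone of it (a sketch; the database name and wiring are illustrative):

```typescript
import { Client } from 'pg';

// Creating a database from a template copies its schema and data, so a
// fully migrated template gives every test a known state without ever
// running migrations in the test itself.
async function createTestDbFromTemplate(name: string): Promise<void> {
    const client = new Client({ connectionString: process.env.DATABASE_URL });
    await client.connect();
    await client.query(`CREATE DATABASE ${name} TEMPLATE unleash_template`);
    await client.end();
}
```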
### Legacy tests
This method also highlighted some wrong assumptions in existing tests.
One example is the `default` environment: being deprecated, it is no longer available, but because the tests create the expected db state manually, they were never updated to match the actual db state.
To keep tests green, I've added a configuration option to use the `legacy` test setup (24 tests). Migrating these will speed tests up, but their code has to be modified, so I'm leaving that for another PR.
## Downsides
1. The template db initialization happens at the beginning of every test run, so local development may suffer from slower unit tests. As a workaround, we could define an environment variable that disables the db migration.
2. Proliferation of test dbs. In ephemeral environments, this is not a
problem, but for local development we should clean up from time to time.
There's the possibility of cleaning up test dbs using the db name as a
pattern:
2ed2e1c274/scripts/jest-setup.ts (L13-L18)
but I didn't want to add this code yet. Opinions?
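For reference, such a cleanup could look roughly like this (a sketch; the `unleash_test_%` pattern is an assumption):

```typescript
import { Client } from 'pg';

// Drop leftover test databases whose names match the test prefix.
async function dropTestDbs(): Promise<void> {
    const client = new Client({ connectionString: process.env.DATABASE_URL });
    await client.connect();
    const result = await client.query(
        `SELECT datname FROM pg_database WHERE datname LIKE 'unleash_test_%'`,
    );
    for (const row of result.rows) {
        await client.query(`DROP DATABASE "${row.datname}"`);
    }
    await client.end();
}
```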
## Benefits
1. It allows us to migrate only once and still get the benefits of a well-known state for tests.
2. It removes some of the custom setup for tests (which in some cases ends up testing something unrealistic).
3. It removes the need to test migrations (https://github.com/Unleash/unleash/blob/main/src/test/e2e/migrator.e2e.test.ts), as migrations are run at the start.
4. It forces us to keep old tests up to date when we modify our database.
This implements segment events for the delta API. In the previous version of the delta API, we were sending all of the segments. Now the SDK will receive `segment-updated` and `segment-removed` events.
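Roughly, the new events look like this on the SDK side (a sketch; the payload fields are assumptions, not the actual delta schema):

```typescript
// Illustrative shapes only.
type SegmentUpdatedEvent = {
    type: 'segment-updated';
    segment: { id: number; name: string; constraints: unknown[] };
};

type SegmentRemovedEvent = {
    type: 'segment-removed';
    segmentId: number;
};

type SegmentDeltaEvent = SegmentUpdatedEvent | SegmentRemovedEvent;
```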
Trying again, now with a tested function for resolving `isOss`.
I still want to test this on a Pro instance in the sandbox before we deploy this to our customers, to avoid what happened on Friday.
---------
Co-authored-by: Gastón Fournier <gaston@getunleash.io>
This is not changing existing logic.
We are creating a new endpoint, which is guarded behind a flag.
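As an illustration of the guard (a hypothetical sketch; the flag name, route, and wiring are not the actual code):

```typescript
import express from 'express';

const app = express();

// Hypothetical flag check; the real flag resolver is wired differently.
const isEnabled = (flag: string): boolean => process.env[flag] === 'true';

app.get('/api/admin/new-resource', (req, res) => {
    if (!isEnabled('NEW_RESOURCE_ENDPOINT')) {
        // While the flag is off, the endpoint behaves as if it doesn't exist.
        res.status(404).end();
        return;
    }
    res.json({ status: 'ok' });
});
```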
---------
Co-authored-by: Simon Hornby <liquidwicked64@gmail.com>
Co-authored-by: FredrikOseberg <fredrik.no@gmail.com>
**This migration introduces a query that calculates the licensed user
counts and inserts them into the licensed_users table.**
**The logic ensures that:**
1. All users created up to a specific date are included as active users
until they are explicitly deleted.
2. Deleted users are excluded after their deletion date, except when
their deletion date falls within the last 30 days or before their
creation date.
3. The migration avoids duplicating data by ensuring records are only
inserted if they don’t already exist in the licensed_users table.
**Logic Breakdown:**
**Identify User Events (user_events):** Extracts email addresses from
user-related events (user-created and user-deleted) and tracks the type
and timestamp of the event. This step ensures the ability to
differentiate between user creation and deletion activities.
**Generate a Date Range (dates):** Creates a continuous range of dates
spanning from the earliest recorded event up to the current date. This
ensures we analyze every date, even those without events.
**Determine Active Users (active_emails):** Links dates with user events
to calculate the status of each email address (active or deleted) on a
given day. This step handles:
- The user's creation date.
- The user's deletion date (if applicable).
**Calculate Daily Active User Counts (result):**
For each date, counts the distinct email addresses that are active based
on the conditions:
- The user has no deletion date.
- The user's deletion date is within the last 30 days relative to the
current date.
- The user's creation date is before the deletion date.
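The per-date rule can be summarized as a predicate (a TypeScript sketch of the SQL logic; field names are assumptions):

```typescript
interface UserLifecycle {
    createdAt: Date;
    deletedAt?: Date;
}

const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

// On a given date, a user counts if they already exist and either were
// never deleted, were deleted within the last 30 days relative to that
// date, or have a deletion date recorded before their creation date
// (the exception described above).
function countsAsLicensed(user: UserLifecycle, date: Date): boolean {
    if (user.createdAt > date) return false;
    if (!user.deletedAt) return true;
    if (user.deletedAt < user.createdAt) return true;
    return date.getTime() - user.deletedAt.getTime() <= THIRTY_DAYS_MS;
}
```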
Remove everything related to the connected environment count for project
status. We decided that because we don't have anywhere to link it to at
the moment, we don't want to show it yet.
Adding an `email_hash` column to the users table.
We will update all existing users to have a hashed email, and all new users will also get the hash.
We're fine using md5 because we just need uniqueness; emails are stored in the events table anyway, so this is not sensitive data.
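A sketch of the hashing using Node's built-in crypto (whether emails are normalized first is an assumption left out here):

```typescript
import { createHash } from 'node:crypto';

// md5 is enough here: we only need a stable, unique key per email, not
// cryptographic secrecy, since emails already live in the events table.
const hashEmail = (email: string): string =>
    createHash('md5').update(email).digest('hex');
```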
This change introduces a new method, `countProjectTokens`, on the `IApiTokenStore` interface. It also replaces the manual filtering of a project's API tokens in the project status service.
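Roughly, the new store method looks like this (the exact signature is an assumption):

```typescript
interface IApiTokenStore {
    // Count the API tokens scoped to a project, pushing the filtering into
    // the store instead of doing it manually in the service.
    countProjectTokens(projectId: string): Promise<number>;
}
```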
This PR adds member, api token, and segment counts to the project status
payload. It updates the schemas and adds the necessary stores to get
this information. It also adds a new query to the segments store for
getting project segments.
I'll add tests in a follow-up.
This PR adds connected environments to the project status payload.
It's done by:
- adding a new `getConnectedEnvironmentCountForProject` method to the
project store (I opted for this approach instead of creating a new view
model because it already has a `getEnvironmentsForProject` method)
- adding the project store to the project status service
- updating the schema
For the schema, I opted for adding a `resources` property, under which I
put `connectedEnvironments`. My thinking was that if we want to add the
rest of the project resources (that go in the resources widget), it'd
make sense to group those together inside an object. However, I'd also
be happy to place the property on the top level. If you have opinions
one way or the other, let me know.
As for the count, we're currently only counting environments that have
metrics and that are active for the current project.
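Illustratively, the payload gains a shape like this (only the `resources` grouping is from this PR; everything else is elided):

```typescript
type ProjectStatusSchema = {
    resources: {
        connectedEnvironments: number;
        // future resource counts (members, apiTokens, segments, ...)
        // could be grouped here as well
    };
};
```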
This change fixes a bug in the event filter's `to` query parameter.
The problem was that in an attempt to make it inclusive, we also stripped it of its `IS:` prefix, which meant it had no effect. The fix first splits the value, adjusts only the date (because we want it to include the entire day), and then joins it back together.
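A sketch of the approach (the `IS:` prefix is from the description above; the exact end-of-day adjustment is an assumption):

```typescript
// Keep the operator prefix intact and widen only the date, so a value
// like `IS:2024-05-01` still parses as an operator plus a date.
const makeToInclusive = (value: string): string => {
    const [operator, date] = value.split(':');
    // Push the boundary to the end of the day so the whole day is included.
    return [operator, `${date}T23:59:59`].join(':');
};
```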
## About the changes
We have many aggregation queries that run on a schedule:
f63496d47f/src/lib/metrics.ts (L714-L719)
These `staticCounters` usually perform db aggregation queries that traverse tables, and we run all of them in parallel:
f63496d47f/src/lib/metrics.ts (L410-L412)
This can add strain to the db. This PR suggests a more structured way of handling these queries, allowing us to run them sequentially (thereby spreading the load):
f02fe87835/src/lib/metrics-gauge.ts (L38-L40)
As an additional benefit, we get both the gauge definition and the
queries in a single place:
f02fe87835/src/lib/metrics.ts (L131-L141)
This PR only tackles one metric, and it focuses on gauges to gather initial feedback. The plan is to migrate the remaining metrics and eventually incorporate more types (e.g. counters).
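Illustratively, each gauge is declared together with the query that populates it, and a small registry refreshes them one at a time (the names here are placeholders, not the actual API):

```typescript
type DbGaugeDefinition = {
    name: string;
    help: string;
    query: () => Promise<number>;
};

async function refreshSequentially(
    definitions: DbGaugeDefinition[],
    setGauge: (name: string, value: number) => void,
): Promise<void> {
    for (const def of definitions) {
        // Awaiting each query before starting the next spreads the db load
        // instead of firing all aggregations at once.
        setGauge(def.name, await def.query());
    }
}
```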
---------
Co-authored-by: Nuno Góis <github@nunogois.com>
This PR updates the personal dashboard project endpoint to return owners and roles. It also adds the implementation for getting roles (via the access store).
I'm filtering the roles for a project to only include project roles for now, but we might want to change this later.
Tests and a UI update will follow.
Adds Unleash admins to the personal dashboard payload.
Uses the access store (and a new method) to fetch admins, mapping them to a new `MinimalUser` type. We already have a `User` class, but it contains a lot of information we don't care about here, such as `isAPI`, SCIM data, etc.
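Roughly, the new type is a slimmed-down user (the exact fields are an assumption):

```typescript
// Just what the dashboard needs to render an admin, without isAPI,
// SCIM data, and the rest of the full User class.
type MinimalUser = {
    id: number;
    name?: string;
    email?: string;
    imageUrl?: string;
};
```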
In the UI, admins will be shown to users who are not part of any
projects. This is the default state for new viewer users, and can also
happen for editors if you archive the default project, for instance.
Tests in a follow-up PR