This PR cleans up the etagByEnv flag. These changes were automatically
generated by AI and should be reviewed carefully.
Fixes #10556
## 🧹 AI Flag Cleanup Summary
This change removes the `etagByEnv` feature flag and makes its
functionality
permanent. This modifies the ETag generation for client API requests to
be
environment-specific, improving caching efficiency. The cleanup involved
removing the flag definition, updating the controller logic to
permanently use
the environment-specific ETag calculation, and refactoring the
corresponding E2E
tests to only cover the enabled behavior.
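For reference, a minimal sketch of what environment-specific ETag calculation looks like; the names and hash recipe here are illustrative, not the actual `calculateMeta` implementation:
```typescript
// A minimal sketch with hypothetical names: folding the environment into the
// hash input means caches for different environments can never share an ETag.
import { createHash } from 'node:crypto';

function calculateEtag(
    environment: string,
    queryHash: string,
    maxRevisionId: number,
): string {
    const digest = createHash('sha256')
        .update(`${environment}:${queryHash}:${maxRevisionId}`)
        .digest('hex')
        .slice(0, 16);
    return `"${digest}"`;
}
```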
### 🚮 Removed
- **Flag Definition**
- The `etagByEnv` flag from `IFlagKey` in
`src/lib/types/experimental.ts`.
- **Conditional Logic**
- The check for the `etagByEnv` flag in `calculateMeta` method in
`src/lib/features/client-feature-toggles/client-feature-toggle.controller.ts`.
- **Tests**
- Test cases in `src/test/e2e/api/client/feature.optimal304.e2e.test.ts`
that
covered the disabled state of the `etagByEnv` flag.
### 🛠 Kept
- **Environment-Specific ETags**
- The behavior of generating environment-specific ETags is now the
default and
only behavior.
- **Tests**
- E2E tests verifying the functionality of environment-specific ETags
have
been retained and simplified.
### 📝 Why
The `etagByEnv` feature has been successfully rolled out and validated.
By
removing the feature flag and associated conditional logic, we simplify
the
codebase, reduce complexity, and make it easier to maintain. This change
makes
the improved ETag generation strategy a permanent part of the system.
---------
Co-authored-by: unleash-bot <194219037+unleash-bot[bot]@users.noreply.github.com>
Co-authored-by: Gastón Fournier <gaston@getunleash.io>
## About the changes
In the previous fix, https://github.com/Unleash/unleash/pull/10543, we
made sure client token types were displayed in the UI. Here, we're
making sure that Backend token types are displayed as well.
1. A test validates that if a backend token exists in the db it will be
returned in the API response.
2. The UI has been adapted to also consider backend token types.
Internally, token types are still identified as CLIENT, so when we
filter the ones we're allowed to see, we should still treat them as
CLIENT tokens, not BACKEND tokens. This is internal until we can fully
remove CLIENT in the next major.
This PR deprecates the `CLIENT` API token type in favor of `BACKEND`,
but both will continue working.
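A minimal sketch of the aliasing idea, with hypothetical names rather than the actual Unleash code:
```typescript
// BACKEND is the new public name, but internally it still maps to CLIENT
// until the legacy type can be removed in the next major.
type ApiTokenType = 'client' | 'backend' | 'frontend' | 'admin';

const normalizeTokenType = (type: ApiTokenType): ApiTokenType =>
    type === 'backend' ? 'client' : type;

// Filtering then only has to reason about the internal (legacy) value.
const tokens: { secret: string; type: ApiTokenType }[] = [
    { secret: '*:*.backend-token', type: 'backend' },
    { secret: '*:*.frontend-token', type: 'frontend' },
];
const backendTokens = tokens.filter(
    (token) => normalizeTokenType(token.type) === 'client',
);
```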
Also replaces:
- `INIT_CLIENT_API_TOKENS` with `INIT_BACKEND_API_TOKENS`. The former is
kept for backward compatibility.
## About the changes
This ignores Change Request event types when calculating the etag
because Change Request events don't change data.
They were being included when the change request event contained a
featureName. After this change, those should be excluded.
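A minimal sketch of the filtering idea, using hypothetical names; the point is that change-request event types no longer bump the revision used for the etag, even when they carry a featureName:
```typescript
// Only events that actually change flag data should affect the ETag, so
// change-request event types are excluded even when they reference a feature.
type UnleashEvent = { type: string; featureName?: string };

const isChangeRequestEvent = (event: UnleashEvent): boolean =>
    event.type.startsWith('change-request');

const affectsEtag = (event: UnleashEvent): boolean =>
    Boolean(event.featureName) && !isChangeRequestEvent(event);
```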
When deleting stale sessions, we sort them by createdAt. If two
sessions are created with the same createdAt, the sort order is not
deterministic and we can end up with the wrong one:
https://github.com/Unleash/unleash/actions/runs/16438565746/job/46453700977
I think adding 10ms between inserts should be enough (1ms should do,
but this gives me more confidence and doesn't hurt that much).
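A minimal, generic sketch of the test-stabilisation idea, assuming a hypothetical `insert` callback:
```typescript
// A short pause between inserts keeps createdAt strictly increasing,
// so sorting by it becomes deterministic.
const sleep = (ms: number) =>
    new Promise<void>((resolve) => setTimeout(resolve, ms));

async function insertWithDistinctTimestamps<T>(
    items: T[],
    insert: (item: T) => Promise<void>,
): Promise<void> {
    for (const item of items) {
        await insert(item);
        await sleep(10); // 10ms gap; 1ms would likely do, but this adds margin
    }
}
```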
---------
Co-authored-by: Thomas Heartman <thomas@getunleash.io>
Fixes a bug in the instance store where insert and bulkUpsert would
overwrite existing properties if there was a row there already. Now
it'll ignore any properties that are undefined.
The implementation is lifted directly from
`src/lib/db/client-applications-store.ts` (line 107 atm).
Additionally, I've renamed the `insert` method to `upsert` to make it
clearer what it does (and because we already have `bulkUpsert`). The
method seems to only be used in tests, anyway. I do not anticipate any
changes to be required in enterprise (I've checked).
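For reference, a minimal sketch of the pattern (the real implementation lives in `src/lib/db/client-applications-store.ts`):
```typescript
// Strip undefined properties in place so an upsert never overwrites
// existing column values with NULL.
const removeUndefinedProperties = (obj: Record<string, unknown>): void => {
    for (const key of Object.keys(obj)) {
        if (obj[key] === undefined) {
            // Dropping the key entirely means the upsert won't touch this column.
            delete obj[key];
        }
    }
};
```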
## Discussion points:
This implementation uses `delete` to remove properties from the object.
Why didn't I do it some other way? Two main reasons:
1. We've had this implementation for 4 years in the client applications
store. If there were serious issues with it, we'd probably know by now.
(Probably.)
2. The only way I can think of without deleting would be to use
`Object.entries` and `Object.fromEntries` with either map or reduce.
That would double the number of property iterations we'd need to do.
So naively, this strikes me as being more efficient. If you know better
solutions, I will of course be happy to take them. If not, I'd like to
leave this as is and then change it if we see that it's causing issues.
Fixes a bug where `registerInstance` and
`register{Frontend|Backend}Client` would overwrite each other's data in
the instance service, leading to the bulk update being made with partial
data, often missing SDK version. There's a different issue in the actual
store that causes sdk version and type to be overwritten when it's
updated (because we don't use `setLastSeen` anymore), but I'll handle
that in a different PR.
This PR adds tests for the changes I've made. Additionally, I've made
these semi-related bonus changes:
- In `registerInstance`, don't expect a partial `IClientApp`. We used to
validate that it was actually a metrics object; now the signature expects
the actual properties we need from the client metrics schema, with a
default for instanceId the way Joi did it.
- In `metrics.ts`, use the `ClientMetricsSchema` type in the function
signature, so that the request body is correctly typed in the function
instead of being `any` (see the sketch after this list).
- Delete two unused properties from the `createApplicationSchema`. They
would get ignored and were never used as far as I can tell. (`appName`
is taken from the URL, and applications don't store `sdkVersion`
information.)
- Add `sdkVersion` to `IClientApp` because it's used in instance
service.
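As referenced above, a minimal sketch of the typed-request-body change; the schema shape here is illustrative, not the full `ClientMetricsSchema`:
```typescript
// Typing Express's third Request generic gives req.body a concrete type
// instead of `any`.
import type { Request, Response } from 'express';

type ClientMetricsSchema = {
    appName: string;
    instanceId?: string;
    bucket: {
        start: string;
        stop: string;
        toggles: Record<string, { yes?: number; no?: number }>;
    };
};

async function registerClientMetrics(
    req: Request<Record<string, string>, unknown, ClientMetricsSchema>,
    res: Response,
): Promise<void> {
    const { appName, instanceId = 'default' } = req.body; // fully typed
    // ...hand appName/instanceId and the bucket off to the metrics service
    res.status(202).end();
}
```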
I've been very confused about all the weird type shenanigans we do in
the instance service (expecting `IClientApp`, then validating with a
different Joi schema etc). I think this makes it a little bit better and
updates the bits I'm touching, but I'm happy to take input if you
disagree.
## About the changes
Users could have been created in Unleash without a corresponding event
(a.k.a. audit log), due to a non-transactional user insert
([fix](https://github.com/Unleash/unleash/pull/10327)). This could have
happened because of providing the wrong role id or some other causes
we're not aware of.
This amends the situation by inserting an event for each user that
exists in the instance (not deleted) and doesn't have its corresponding
user-created event.
The event is inserted as already announced because this happened in the
past.
The event log will look like this (I simulated the situation in local
dev):
```json
{
    "id": 11,
    "type": "user-created",
    "createdBy": "unleash_system_user",
    "createdAt": "2025-07-08T16:06:17.428Z",
    "createdByUserId": null,
    "data": {
        "id": "6",
        "email": "xyz@three.com"
    },
    "preData": null,
    "tags": [],
    "featureName": null,
    "project": null,
    "environment": null,
    "label": "User created",
    "summary": "**unleash_system_user** created user ****"
}
```
The main problem is we can't create the event in the past, so this will
have to do.
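A hypothetical sketch of what such a back-fill could look like; the table and column names are assumptions based on the event shape above, not the actual Unleash migration:
```typescript
import type { Knex } from 'knex';

// Insert an already-announced user-created event for every non-deleted user
// that has none. Table/column names are illustrative assumptions.
async function backfillUserCreatedEvents(db: Knex): Promise<void> {
    await db.raw(`
        INSERT INTO events (type, created_by, announced, data)
        SELECT 'user-created', 'unleash_system_user', true,
               json_build_object('id', u.id, 'email', u.email)
        FROM users u
        WHERE u.deleted_at IS NULL
          AND NOT EXISTS (
              SELECT 1 FROM events e
              WHERE e.type = 'user-created'
                AND e.data ->> 'id' = u.id::text
          )
    `);
}
```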
## About the changes
When inserting a user with an invalid role id, the user creation will
succeed but there will be no record in the audit log.
The API call returns a 400, misleading you into believing the user was
not created, but it actually was.
This makes the whole user creation transactional, so if something fails,
data will be in the right state.
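A minimal sketch of the transactional shape, assuming knex and illustrative table names (not the actual Unleash implementation):
```typescript
import type { Knex } from 'knex';

// If any step throws, e.g. because the role id is invalid, the whole
// transaction rolls back and no user row is left behind.
async function createUserTransactionally(
    db: Knex,
    user: { email: string; rootRoleId: number },
): Promise<void> {
    await db.transaction(async (trx) => {
        const [created] = await trx('users')
            .insert({ email: user.email })
            .returning(['id', 'email']);
        await trx('role_user').insert({
            user_id: created.id,
            role_id: user.rootRoleId, // fails here if the role id is invalid...
        });
        await trx('events').insert({
            type: 'user-created',
            created_by: 'unleash_system_user',
            data: JSON.stringify({ id: created.id, email: created.email }),
        }); // ...and the user insert above is rolled back with it
    });
}
```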
## Testing
The e2e test was split into two scenarios, one with SMTP and one
without.
This test was added, and it was failing before adding the transaction,
because when fetching the users, the user was there, despite having
returned a 400 error in the API call:
80a2e65b6f/src/test/e2e/api/admin/user-admin.e2e.test.ts (L181-L204)
#10121 points out that we're still using md5 functions. This PR updates
our migrations to no longer use md5 at all (so if you haven't run the
migrations, you won't get email hashes until you reach the migration
included with this PR). If you've already run the migrations, we'll
drop the existing `email_hash varchar(32)` column and replace it with an
`email_hash TEXT` column.
We're also replacing the md5 function with `encode(sha256(email),
'hex')`. `encode` has been supported since PG10; `sha256` came with PG11.
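A minimal sketch of the column swap and re-hash, assuming knex; this is not the actual migration file, and the `convert_to` call is my assumption about how the text email is fed to `sha256`:
```typescript
import type { Knex } from 'knex';

// Drop the 32-char md5 column and recreate it as TEXT holding a sha256 hex digest.
async function up(db: Knex): Promise<void> {
    await db.raw('ALTER TABLE users DROP COLUMN IF EXISTS email_hash');
    await db.raw('ALTER TABLE users ADD COLUMN email_hash TEXT');
    await db.raw(`
        UPDATE users
        SET email_hash = encode(sha256(convert_to(email, 'UTF8')), 'hex')
        WHERE email IS NOT NULL
    `); // convert_to produces the bytea input that sha256 expects
}
```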
Do we want an index on the email_hash? I wasn't sure, but if we want to
do lookups we probably should have an index on it (though not a unique
one).
**BREAKING CHANGE**: DEFAULT_ENV changed from `default` (should not be
used anymore) to `development`
## About the changes
- Only delete the default env if the install is brand new.
- Consider `development` the new default. The main consequence of this
change is that the default environment is no longer a `type=production`
environment, which also affects frontend tokens due to this assumption:
724c4b78a2/src/lib/schema/api-token-schema.test.ts (L54-L59)
(I believe this is mostly due to the [support for admin
tokens](https://github.com/Unleash/unleash/pull/10080#discussion_r2126871567))
- The `feature_toggle_update_total` metric reports `n/a` for environment
and environment type, as it's not environment-specific.
BREAKING CHANGE: As part of the preparation for a new major (7.0), this
removes the /api/admin/projects/{projectId} endpoint. It has been
deprecated since 5.8, and we don't use it anymore in our frontend.
This removes a strategy that was already deprecated, but only for new
installations.
I tested starting with an installation with this strategy being used and
then updating, and I was still able to edit the strategy, so this should
not impact current users.
On a fresh install the strategy is no longer available.
---------
Co-authored-by: Nuno Góis <github@nunogois.com>
BREAKING CHANGE: This removes the following endpoints:
- GET /api/admin/projects/{project}/features/{featureName}/variants
- PATCH /api/admin/projects/{project}/features/{featureName}/variants
- PUT /api/admin/projects/{project}/features/{featureName}/variants
Users should move to environment or strategy specific variant methods
rather than feature level variant methods.
Unleash is being too reactive to events inside Unleash. We should not
update the etag when a feature is created or a tag is added to a
feature. This PR adds that condition and a test for it.
Now we can receive custom metrics, return them for the UI, and expose an
extra Prometheus endpoint for them.
---------
Co-authored-by: Christopher Kolstad <chriswk@getunleash.io>
Blocks deletion of context fields that are in use and updates the
"active usage" count to exclude use in archived flags.
- Before allowing you to delete a context field, we check whether it is
in use by any strategies. If so, we return a 409 error (see the sketch
below).
- Updates what we count as "in use" to exclude flags that have been
archived.
BREAKING CHANGE: Context fields can no longer be deleted if they are in
use by active (non-archived) flags.
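A minimal, self-contained sketch of the guard; the store methods are hypothetical, not the actual Unleash service:
```typescript
// Refuse deletion with a 409 when active, non-archived flags still use
// the context field.
class ConflictError extends Error {
    statusCode = 409;
}

async function deleteContextField(
    name: string,
    store: {
        countActiveUsage: (name: string) => Promise<number>;
        delete: (name: string) => Promise<void>;
    },
): Promise<void> {
    const activeUsage = await store.countActiveUsage(name);
    if (activeUsage > 0) {
        throw new ConflictError(
            `Context field "${name}" is used by ${activeUsage} strategies on active flags`,
        );
    }
    await store.delete(name);
}
```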
Vitest Pros:
* Automated failing test comments on github PRs
* A nice local UI with incremental testing when changing files (`yarn
test:ui`)
* Also nicely supported in all major IDEs; click-to-run-test works (so
we won't miss what we had with Jest).
* Works well with ESM
Vitest Cons:
* The ESBuild transformer Vitest uses takes a little longer to transform
than our current SWC/Jest setup; however, it is possible to set up SWC as
the transformer for Vitest as well (though it only does one transform,
so we're paying ~7-10 seconds instead of ~2-3 seconds in the transform
phase).
* Exposes how slow our tests are (tongue in cheek here)
https://linear.app/unleash/issue/2-3564/remove-filterexistingflagnames-feature-flag
We're removing the `filterExistingFlagNames` feature flag since we've
decided we want this to be the default behavior.
We don't need to rush to merge it, just in case we need to disable this
for any reason. However, it should also be pretty easy to revert if
needed.
Changes in tests are a bit tricky, since they assumed the previous
behavior where we always registered metrics, even for non-existing flag
names. `cachedFeatureNames` is also memoized with a TTL of 10s, so the
easiest way to overcome this was to override `cachedFeatureNames` to
return what we expected. As long as it returns the same flag names that
we expect, we're able to register their metrics.
Let me know if you can think of a better approach.
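A minimal sketch of the test workaround, with a hypothetical service shape:
```typescript
// The real lookup is memoized with a 10s TTL, so the test replaces it with
// a stub that returns exactly the flag names it registers metrics for.
import { vi } from 'vitest';

type FlagNameLookup = { cachedFeatureNames: () => Promise<string[]> };

const service: FlagNameLookup = {
    cachedFeatureNames: async () => [], // memoized in the real implementation
};

service.cachedFeatureNames = vi.fn(async () => ['my-flag', 'other-flag']);
```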
Improves handling of constraints whose context fields have been deleted.
This change makes a few small adjustments, on both the front end and the
back end, to how we deal with constraints that reference deleted context
fields.
The most important change is on the back end, in the
`/constraints/validate` endpoint. We used to throw here if the
constraint's context field couldn't be found, but the only reason we
looked it up in the db was to check for legal values. Now, instead,
we'll allow you to pass a context field that doesn't exist in the
database. We'll still check the values against the operator for
validity; we just don't check legal values anymore (because there
aren't any).
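A minimal, self-contained sketch of the new validation behaviour; all names are hypothetical:
```typescript
// Values are still validated against the operator, but legal values are
// only enforced when the context field still exists.
type ContextField = { name: string; legalValues?: { value: string }[] };
type Constraint = { contextName: string; operator: string; values?: string[] };

function validateConstraint(
    constraint: Constraint,
    findContextField: (name: string) => ContextField | undefined,
): void {
    const values = constraint.values ?? [];
    if (['IN', 'NOT_IN'].includes(constraint.operator) && values.length === 0) {
        throw new Error(`Operator ${constraint.operator} requires at least one value`);
    }

    const field = findContextField(constraint.contextName);
    if (!field) {
        // The context field has been deleted: there are no legal values left
        // to check against, so we no longer throw here.
        return;
    }

    if (field.legalValues?.length) {
        const legal = new Set(field.legalValues.map((lv) => lv.value));
        const illegal = values.filter((value) => !legal.has(value));
        if (illegal.length > 0) {
            throw new Error(`Values are not legal: ${illegal.join(', ')}`);
        }
    }
}
```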
On the front end, we improve the handling by showing the deleted context
field in the dropdown, both when the selector dropdown is closed and
when it is open. However, if you change the context field, we remove the
deleted field from the list. This seems like a sensible tradeoff: it
means you can't re-select it once you've deselected it.
We're migrating to ESM, which will allow us to import the latest
versions of our dependencies.
Co-Authored-By: Christopher Kolstad <chriswk@getunleash.io>
Add a `getProjectLinkTemplates` method to ProjectStore and a
corresponding test. Ideally this should be in a read model, but let's
finish link templates end to end.