Fix for
[https://github.com/Unleash/unleash/security/code-scanning/81](https://github.com/Unleash/unleash/security/code-scanning/81)
To prevent information exposure through stack traces, ensure that the
HTTP response sent to clients contains only sanitized, generic error
information, such as a status code and a simple message. Internal
details (including stack traces, error types, or internal error codes)
should not be sent to the client. These can be safely logged on the
server for debugging.
**The fix:**
- Do not return the entire `finalError` object as JSON to the client, as
it may include fields like `stack` or `internalMessage`.
- Instead, return only a subset of fields that are safe to expose to the
user, in this case just `message` (as sketched below).
- Log the full error and any debugging details using the server-side
logger **as currently done**.
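A minimal sketch of the resulting pattern, assuming an Express-style error
handler; the names (`finalError`, the logger shape, the status-code fallback)
are illustrative, not the actual Unleash handler:
```
import type { Request, Response, NextFunction } from 'express';

type MinimalLogger = { error: (msg: string, err?: unknown) => void };

export const sanitizedErrorHandler =
    (logger: MinimalLogger) =>
    (finalError: any, _req: Request, res: Response, _next: NextFunction): void => {
        // Full details (stack, error types, internal codes) stay on the server.
        logger.error('Request failed', finalError);

        // The client only sees a status code and a generic message.
        res.status(finalError.statusCode ?? 500).json({
            message: finalError.message ?? 'Internal server error',
        });
    };
```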
---
_Suggested fixes powered by Copilot Autofix. Review carefully before
merging._
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
# Summary
Add optional lazy collection with TTL to our createGauge wrapper,
allowing a gauge to fetch its value on scrape and cache it for a
configurable duration. This lets us register a collect function directly
at gauge declaration without changing existing call sites or behavior.
We're experimenting with this, which is why we're only applying the
solution to `users_total`; we'll evaluate afterwards.
# Problem
- Some gauges should be computed on scrape (e.g., expensive or external
lookups) instead of being pushed continuously.
- Our current `createGauge` helper doesn’t make it easy to attach a
`collect` with caching. Each caller has to reimplement timing, caching,
and error handling.
- This leads to repeated costly work, inconsistent handling of unknown
values, and boilerplate.
# What changed
- `createGauge` now accepts two optional options in addition to the
usual prom-client options:
- `fetchValue?: () => Promise<number | null>`
- `ttlMs?: number`
- When `fetchValue` is provided:
- We install a `collect` that fetches on scrape.
- Successful values are cached for `ttlMs` milliseconds (if `ttlMs` >
0).
- If `ttlMs` is 0 or omitted, we fetch on every scrape.
- If `fetchValue` returns `null` or throws, we set the gauge to `NaN`
(indicating `"unknown"`).
# Behavior details
## Caching:
- A value is “fresh” when successfully fetched within `ttlMs`.
- Only numeric successes are cached; `null` results and errors are not
cached, so we'll refetch on the next scrape.
## Unknown values:
- `null` results or thrown errors set the gauge to `NaN` so Prometheus
won't treat the value as zero.
## Compatibility:
- Backward compatible. Existing uses of `createGauge` are unchanged.
If a user-supplied `collect` exists, it still runs after the TTL logic
(can overwrite the value by design).
- API remains the same for the returned wrapper: `{ gauge, labels,
reset, set }`.
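A rough sketch of the shape described above, assuming prom-client; option
handling and types are simplified, and this is not the exact Unleash
implementation:
```
import { Gauge, type GaugeConfiguration } from 'prom-client';

type LazyGaugeOptions<T extends string> = GaugeConfiguration<T> & {
    fetchValue?: () => Promise<number | null>;
    ttlMs?: number;
};

export const createGauge = <T extends string>(options: LazyGaugeOptions<T>) => {
    const { fetchValue, ttlMs = 0, ...gaugeOptions } = options;
    const userCollect = gaugeOptions.collect;
    let lastSuccess = 0;

    const collect = fetchValue
        ? async () => {
              const now = Date.now();
              // Fetch unless a successful value is still fresh (ttlMs of 0 means "fetch every scrape").
              if (ttlMs === 0 || now - lastSuccess >= ttlMs) {
                  try {
                      const value = await fetchValue();
                      if (value === null) {
                          gauge.set(NaN); // unknown; not cached, refetch on the next scrape
                      } else {
                          gauge.set(value);
                          lastSuccess = now; // only numeric successes count as fresh
                      }
                  } catch {
                      gauge.set(NaN); // errors also surface as unknown
                  }
              }
              // A user-supplied collect still runs after the TTL logic and may overwrite the value.
              await userCollect?.call(gauge);
          }
        : userCollect;

    const gauge = new Gauge<T>({ ...gaugeOptions, collect });

    return {
        gauge,
        labels: (...labels: string[]) => gauge.labels(...labels),
        reset: () => gauge.reset(),
        set: (value: number) => gauge.set(value),
    };
};
```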
## About the changes
This would allow users to add test statements to protect from concurrent
modifications. From
https://github.com/orgs/Unleash/discussions/10707#discussioncomment-14602784
E.g., if you had this feature flag configuration:
```
{
  "name": "flexibleRollout",
  "constraints": [
    {
      "contextName": "a",
      "operator": "IN",
      "values": ["100", "200", "300", "400", "500"],
      "caseInsensitive": false,
      "inverted": false
    }
  ],
  "parameters": {
    "rollout": "100",
    "stickiness": "default",
    "groupId": "api-access"
  },
  "variants": [],
  "segments": [122],
  "disabled": false
}
```
Now, to remove the value "300" from the constraint, you'd first have to
get the current values and then PATCH the strategy with the following
body:
```
[{ "op": "remove", "path": "/constraints/0/values/2" }]
```
This could fail in the case of concurrent modifications (e.g. if someone
removed the value "100", the index of "300" would no longer be 2).
With the test operation, you'd be able to add a protection mechanism to
validate that the value at index 2 is still 300:
```
[
{ "op": "test", "path": "/constraints/0/values/2", "value": "300" },
{ "op": "remove", "path": "/constraints/0/values/2" }
]
```
If the test fails, the remove operation will not be applied.
I've tested this locally and it works as expected:
1. If the value is still 300, it will remove it
2. The operation will fail if the value is no longer 300 because of
another change.
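A tiny illustration of that behavior, using the fast-json-patch library (an
assumption about how the patch might be applied; the document mirrors the
example above after a concurrent change):
```
import { applyPatch, type Operation } from 'fast-json-patch';

// A concurrent change already removed "100", so "300" now sits at index 1, not 2.
const strategy = { constraints: [{ values: ['200', '300', '400', '500'] }] };

const patch: Operation[] = [
    { op: 'test', path: '/constraints/0/values/2', value: '300' },
    { op: 'remove', path: '/constraints/0/values/2' },
];

try {
    applyPatch(strategy, patch);
} catch (err) {
    // The test op fails (index 2 now holds "400"), so the remove is never applied.
    console.error('Patch rejected:', (err as Error).message);
}
```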
Users could be updated at the exact same timestamp, so we need to include
that timestamp in the next fetch to avoid missing users, but also include
the latest user id so we can exclude the ones already fetched.
The following test shows how to process pages with this new method:
c03df86ee0/src/lib/features/users/user-updates-read-model.test.ts (L39-L65)
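For illustration, a hypothetical sketch of that cursor logic (the table,
columns, and knex-style query are assumptions, not the actual read model):
```
import type { Knex } from 'knex';

type UserUpdateCursor = { updatedAt: Date; lastId: number };

// Fetch the next page of updated users: everything after the cursor's timestamp,
// plus rows that share that exact timestamp but have a higher id.
export const nextUpdatedUsersPage = (
    db: Knex,
    cursor: UserUpdateCursor,
    pageSize = 100,
) =>
    db('users')
        .select('id', 'email', 'updated_at')
        .where((qb) =>
            qb
                .where('updated_at', '>', cursor.updatedAt)
                .orWhere((tie) =>
                    tie
                        .where('updated_at', '=', cursor.updatedAt)
                        .andWhere('id', '>', cursor.lastId),
                ),
        )
        .orderBy([
            { column: 'updated_at', order: 'asc' },
            { column: 'id', order: 'asc' },
        ])
        .limit(pageSize);
```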
## About the changes
In our code, we're not expecting constraints to be null:
https://github.com/search?q=repo%3AUnleash%2Funleash%20jsonb_array_elements(constraints)&type=code
It's unlikely to get a null value in constraints when using our API or
UI, but there might be cases where this can happen, as we saw in our
logs:
```
error: select "context_fields"."name", "context_fields"."description", "context_fields"."stickiness", "context_fields"."sort_order", "context_fields"."legal_values", "context_fields"."created_at", COUNT(DISTINCT CASE
WHEN features.archived_at IS NULL
THEN feature_strategies.project_name
END) AS used_in_projects, COUNT(DISTINCT CASE
WHEN features.archived_at IS NULL
THEN feature_strategies.feature_name
END) AS used_in_features from "context_fields" LEFT JOIN feature_strategies ON EXISTS (
SELECT 1
FROM jsonb_array_elements(feature_strategies.constraints) AS elem
WHERE elem ->> 'contextName' = context_fields.name
) left join "features" on "features"."name" = "feature_strategies"."feature_name" group by "context_fields"."name", "context_fields"."description", "context_fields"."stickiness", "context_fields"."sort_order", "context_fields"."created_at" order by "name" asc - cannot extract elements from a scalar
```
which is likely due to `jsonb_array_elements(feature_strategies.constraints)`
being called with null constraints.
For this reason, it seems reasonable to enforce non-null constraints at the
database level.
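A hypothetical migration sketch of that enforcement (raw SQL behind a
knex-style wrapper; table name, backfill default, and tooling are
assumptions, not the actual Unleash migration):
```
import type { Knex } from 'knex';

export const up = async (db: Knex): Promise<void> => {
    // Backfill existing nulls first so the NOT NULL constraint can be applied safely.
    await db.raw(
        `UPDATE feature_strategies SET constraints = '[]'::jsonb WHERE constraints IS NULL`,
    );
    await db.raw(
        `ALTER TABLE feature_strategies
            ALTER COLUMN constraints SET DEFAULT '[]'::jsonb,
            ALTER COLUMN constraints SET NOT NULL`,
    );
};

export const down = async (db: Knex): Promise<void> => {
    await db.raw(
        `ALTER TABLE feature_strategies
            ALTER COLUMN constraints DROP NOT NULL,
            ALTER COLUMN constraints DROP DEFAULT`,
    );
};
```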
This PR cleans up the etagVariant flag. These changes were automatically
generated by AI and should be reviewed carefully.
Fixes #10711
## 🧹 AI Flag Cleanup Summary
This PR removes the `etagVariant` feature flag, making the versioned
ETag format
(`v2`) the default and only behavior for the client features API.
### 🚮 Removed
- **Feature Flag**
- Removed the `etagVariant` flag definition from `experimental.ts`.
- Removed conditional logic for ETag generation in
`client-feature-toggle.controller.ts`.
- **Testing**
- Removed parameterized tests for both states of the flag in
`feature.optimal304.e2e.test.ts`.
- Removed configuration of the `etagVariant` flag in test setup.
### 🛠 Kept
- **ETag Generation**
- The logic to generate ETags with a version suffix (`v1`) is now the
standard
behavior.
- **Testing**
- Tests have been updated to exclusively assert the presence of the `v1`
suffix in ETags.
### 📝 Why
The `etagVariant` feature flag has been successfully rolled out and is
now
considered complete. By removing the flag, we are simplifying the
codebase by
eliminating conditional paths and making the improved ETag format
permanent.
This change ensures all client API responses for features include a
versioned
ETag, which helps with cache-busting when the ETag format changes in the
future.
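An illustrative sketch of why the version suffix helps (the hash
composition and suffix handling are assumptions, not the actual controller
code):
```
import { createHash } from 'node:crypto';

// Bumping ETAG_VERSION invalidates every previously cached ETag at once,
// even when the underlying feature payload has not changed.
const ETAG_VERSION = 'v1';

export const calculateEtag = (payload: string): string => {
    const hash = createHash('sha256').update(payload).digest('hex').slice(0, 16);
    return `"${hash}:${ETAG_VERSION}"`;
};
```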
---------
Co-authored-by: unleash-bot <194219037+unleash-bot[bot]@users.noreply.github.com>
Co-authored-by: Gastón Fournier <gaston@getunleash.io>
## About the changes
When deleting a user, we set the email to null and deleted_at to the
current date, so there should be no case where both email and deleted_at
are set.
We found some situations where this happens, specifically when SAML and
SCIM are used in conjunction.
## About the changes
Having SCIM enabled alongside SAML with auto-create can cause issues, with
each protocol stepping on the other's toes.
This PR adds protection to avoid updating SCIM-managed users with SAML
data (because SCIM will override this data later).
It also adds a new method in the store to check for cases where deleted_at
is set but the email is not cleared and there's no delete event in the
audit log (we've found one case, and we believe it might be related to
interoperability issues between SAML and SCIM).
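A hypothetical, partial sketch of such a check that skips the audit-log
lookup (table and column names are assumptions, not the actual store code):
```
import type { Knex } from 'knex';

// Deleted users should always have their email cleared; any ids returned here
// point at the SAML/SCIM interoperability issue described above.
export const findDeletedUsersWithEmail = async (db: Knex): Promise<number[]> =>
    db('users').pluck('id').whereNotNull('deleted_at').whereNotNull('email');
```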
This PR cleans up the addConfiguration flag. These changes were
automatically generated by AI and should be reviewed carefully.
Fixes #10627
## 🧹 AI Flag Cleanup Summary
This change removes the `addConfiguration` feature flag. The feature was
discarded, so this cleanup reverts the UI to its state before the
`addConfiguration` flag was introduced. The primary change is in the
strategy
menu, where the single "Add configuration" button is removed and the
original
"Use template," "Add strategy," and "More" buttons are restored.
### 🚮 Removed
- **Flag Definitions**
- `addConfiguration` flag from `experimental.ts` on the backend.
- `addConfiguration` flag from `uiConfig.ts` on the frontend.
- `addConfiguration: true` from the `server-dev.ts` config.
- **UI Components & Logic**
- The conditional rendering in `FeatureStrategyMenu.tsx` that showed an
"Add
configuration" button.
- The `useUiFlag('addConfiguration')` hook call and its import from
`FeatureStrategyMenu.tsx`.
### 🛠 Kept
- **UI Components & Logic**
- The original set of buttons in `FeatureStrategyMenu.tsx`: "Use
template",
"Add strategy", and a "More strategies" icon button. This was the code
path for
when the flag was disabled.
### 📝 Why
The `addConfiguration` feature flag was marked as completed with the
outcome
"discarded". This cleanup removes the flag and all related code,
preserving only
the intended code path, which is the UI behavior from before the flag
was
introduced.
---------
Co-authored-by: unleash-bot <194219037+unleash-bot[bot]@users.noreply.github.com>
Co-authored-by: Nuno Góis <github@nunogois.com>
This PR cleans up the reportUnknownFlags flag. These changes were
automatically generated by AI and should be reviewed carefully.
Fixes #10595
## 🧹 AI Flag Cleanup Summary
This change removes the `reportUnknownFlags` feature flag and makes its
functionality a permanent part of the application. The "Unknown flags"
feature
is now always enabled.
### 🚮 Removed
- **Flag Definitions**
- Removed `reportUnknownFlags` from `IFlagKey` and `UiFlags` types.
- Removed `reportUnknownFlags` from the experimental flags configuration
in `src/lib/types/experimental.ts`.
- Removed the flag from development and test configurations
(`src/server-dev.ts`, `unknown-flags.e2e.test.ts`).
- **Conditional Logic**
- Removed conditional checks for `reportUnknownFlags` in backend
services
(`UnknownFlagsService`, `ClientMetricsServiceV2`) and API controllers
(`UnknownFlagsController`).
- Removed `useUiFlag('reportUnknownFlags')` and related conditional
rendering from frontend components (`UnknownFlagsTable`,
`FeatureToggleListTable`). The UI elements are now always visible.
- Modified the `useUnknownFlags` hook to always fetch data.
### 🛠 Kept
- **Core Functionality**
- The feature to report and display unknown flags is now always active.
- The "Unknown flags" link is now permanently visible on the feature
flags
overview page.
- Backend logic for processing and storing unknown flags is now always
executed.
### 📝 Why
The `reportUnknownFlags` feature flag was marked as completed with the
feature
being kept. This cleanup removes the flag and its associated conditional
logic,
simplifying the code and making the unknown flags reporting a permanent
feature.
---------
Co-authored-by: unleash-bot <194219037+unleash-bot[bot]@users.noreply.github.com>
Co-authored-by: Nuno Góis <github@nunogois.com>