https://linear.app/unleash/issue/2-3932/cloned-environments-enable-disabled-strategies-unexpectedly
Cloning environments didn't work as expected. This fixes a few issues:
- Disabled strategies remain disabled after cloning
- All strategy properties are cloned (including e.g. title)
- Strategy cloning respects the selected projects
- Release plans and their milestones are now correctly cloned
Fix for
[https://github.com/Unleash/unleash/security/code-scanning/81](https://github.com/Unleash/unleash/security/code-scanning/81)
To prevent information exposure through stack traces, ensure that the
HTTP response sent to clients contains only sanitized, generic error
information, such as a status code and a simple message. Internal
details (including stack traces, error types, or internal error codes)
should not be sent to the client. These can be safely logged on the
server for debugging.
**The fix:**
- Do not return the entire `finalError` object as JSON to the client, as
it may include fields like `stack` or `internalMessage`.
- Instead, return only a subset of fields that are safe to expose to the
user, in this case just `message`.
- Log the full error and any debugging details using the server-side
logger **as currently done**.
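A minimal sketch of this pattern as an Express-style error handler (the handler, logger, and `statusCode` field are illustrative, not the exact Unleash code):

```typescript
import type { ErrorRequestHandler } from 'express';

// Illustrative handler: log everything server-side, expose only a safe subset.
const sanitizedErrorHandler =
    (logger: { error: (msg: string, err: unknown) => void }): ErrorRequestHandler =>
    (finalError, _req, res, _next) => {
        // Full details (stack, internal messages, error codes) stay in the server logs.
        logger.error('Request failed', finalError);
        // The client only sees a status code and a generic message.
        // (The statusCode field is an assumption for this sketch.)
        res.status(finalError.statusCode ?? 500).json({ message: finalError.message });
    };
```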
---
_Suggested fixes powered by Copilot Autofix. Review carefully before
merging._
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Makes it so that the default width calculation for the highlighter
plugin attempts to first use the number of entries in the data set for
the x axis, but falls back to using the number of categories if that is
not available. This is probably the more "correct" / "do what I mean"
approach to setting highlighter width.
Fixes an issue where the highlight would be too wide if the labels were
set to "auto" instead of "data".
The width function is only used in two places (the archived:created
graph and the network graph). Both of those work fine with the new
update.
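A minimal sketch of that fallback, assuming a Chart.js-style chart object (names and shape are illustrative, not the actual plugin code):

```typescript
// Prefer the number of entries in the first dataset; fall back to the number
// of category labels; avoid dividing by zero if neither is available.
const defaultHighlightWidth = (chart: {
    chartArea: { width: number };
    data: { labels?: unknown[]; datasets: Array<{ data: unknown[] }> };
}): number => {
    const dataLength = chart.data.datasets[0]?.data.length;
    const count = dataLength || chart.data.labels?.length || 1;
    return chart.chartArea.width / count;
};
```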
Before:
<img width="831" height="401" alt="image"
src="https://github.com/user-attachments/assets/8487b95f-cc49-4ff6-a519-7f79e1048eed"
/>
After:
<img width="886" height="378" alt="image"
src="https://github.com/user-attachments/assets/ad2102cb-3342-4a28-aa54-6b31caa495e1"
/>
# Summary
Add optional lazy collection with TTL to our createGauge wrapper,
allowing a gauge to fetch its value on scrape and cache it for a
configurable duration. This lets us register a collect function directly
at gauge declaration without changing existing call sites or behavior.
We're experimenting with this, which is why we're only applying the
solution to `users_total`; we'll evaluate afterwards.
# Problem
- Some gauges should be computed on scrape (e.g., expensive or external
lookups) instead of being pushed continuously.
- Our current `createGauge` helper doesn’t make it easy to attach a
`collect` with caching. Each caller has to reimplement timing, caching,
and error handling.
- This leads to repeated costly work, inconsistent handling of unknown
values, and boilerplate.
# What changed
- `createGauge` now accepts two optional options in addition to the
usual prom-client options:
- `fetchValue?: () => Promise<number | null>`
- `ttlMs?: number`
- When `fetchValue` is provided:
- We install a `collect` that fetches on scrape.
- Successful values are cached for `ttlMs` milliseconds (if `ttlMs` >
0).
- If `ttlMs` is 0 or omitted, we fetch on every scrape.
- If `fetchValue` returns null or throws, we set `NaN` (indicates
`"unknown"`).
# Behavior details
## Caching:
- A value is “fresh” when successfully fetched within `ttlMs`.
- Only numeric successes are cached. null and errors are not cached;
we’ll refetch on the next scrape.
## Unknown values:
- null or thrown errors set the gauge to `NaN` so Prometheus won’t treat
it as zero.
## Compatibility:
- Backward compatible. Existing uses of `createGauge` are unchanged.
If a user-supplied `collect` exists, it still runs after the TTL logic
(can overwrite the value by design).
- API remains the same for the returned wrapper: `{ gauge, labels,
reset, set }`.
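A self-contained sketch of how the TTL-cached `collect` can behave, using prom-client directly (it mirrors the behavior above but is illustrative, not the actual wrapper):

```typescript
import { Gauge } from 'prom-client';

interface LazyGaugeOptions {
    name: string;
    help: string;
    fetchValue: () => Promise<number | null>;
    ttlMs?: number;
}

const createLazyGauge = ({ name, help, fetchValue, ttlMs = 0 }: LazyGaugeOptions): Gauge => {
    let cachedValue: number | undefined;
    let fetchedAt = 0;

    const gauge: Gauge = new Gauge({
        name,
        help,
        // Invoked by prom-client on every scrape.
        async collect() {
            const isFresh = ttlMs > 0 && Date.now() - fetchedAt < ttlMs;
            if (isFresh && cachedValue !== undefined) {
                gauge.set(cachedValue);
                return;
            }
            try {
                const value = await fetchValue();
                if (value === null) {
                    // null means "unknown": expose NaN and don't cache it.
                    gauge.set(NaN);
                    return;
                }
                // Only numeric successes are cached.
                cachedValue = value;
                fetchedAt = Date.now();
                gauge.set(value);
            } catch {
                // Errors also surface as NaN and aren't cached.
                gauge.set(NaN);
            }
        },
    });

    return gauge;
};
```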
Implements batching of data points in the archived:created chart: when
there are 12 or more weeks of data, the data is grouped into batches of 4
weeks at a time. When we batch data, we also switch the labeling to be
month-based and auto-generated (cf. the inline comment with more
details).
<img width="798" height="317" alt="image"
src="https://github.com/user-attachments/assets/068ee528-a6d6-4aaf-ac81-c729c2c813d1"
/>
The current implementation batches into groups of 4 weeks, but this can
easily be parameterized to support arbitrary batch sizes.
Because of the batching, we also now need to adjust the tooltip title in
those cases. This is handled by a callback.
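A minimal sketch of the batching rule (the data shape and helper names are assumptions, not the chart code itself):

```typescript
type WeeklyPoint = { week: string; archived: number; created: number };

const BATCH_THRESHOLD = 12; // start batching at 12+ weeks of data
const BATCH_SIZE = 4; // currently fixed, but easy to parameterize

const batchWeeks = (weeks: WeeklyPoint[]): WeeklyPoint[][] => {
    // Below the threshold, every week stays its own data point.
    if (weeks.length < BATCH_THRESHOLD) {
        return weeks.map((week) => [week]);
    }
    const batches: WeeklyPoint[][] = [];
    for (let i = 0; i < weeks.length; i += BATCH_SIZE) {
        batches.push(weeks.slice(i, i + BATCH_SIZE));
    }
    return batches;
};

// Each batch is then summed into a single archived/created point.
const sumBatch = (batch: WeeklyPoint[]) => ({
    archived: batch.reduce((total, point) => total + point.archived, 0),
    created: batch.reduce((total, point) => total + point.created, 0),
});
```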
## About the changes
This would allow users to add test operations to protect against
concurrent modifications. From
https://github.com/orgs/Unleash/discussions/10707#discussioncomment-14602784
E.g.
If you had this feature flag configuration
```
{
"name": "flexibleRollout",
"constraints": [
{
"contextName": "a",
"operator": "IN",
"values": [
"100", "200", "300", "400", "500"
],
"caseInsensitive": false,
"inverted": false
}
],
"parameters": {
"rollout": "100",
"stickiness": "default",
"groupId": "api-access"
},
"variants": [],
"segments": [
122
],
"disabled": false
}
```
And if you'd like to remove the value 300 from the constraints, you'd
first have to get the current values and then PATCH the strategy with the
following body:
```
[{ "op": "remove", "path": "/constraints/0/values/2" }]
```
This could fail in case of concurrent modifications (e.g. if someone
removed the value "100", the index needed to remove "300" would no
longer be 2).
With the test operation, you'd be able to add a protection mechanism to
validate that the value at index 2 is still 300:
```
[
{ "op": "test", "path": "/constraints/0/values/2", "value": "300" },
{ "op": "remove", "path": "/constraints/0/values/2" }
]
```
If the test fails, the remove operation will not be applied.
I've tested this locally and it works as expected:
1. If the value is still 300, it will remove it
2. The operation will fail if the value is no longer 300 because of
another change.
We could have users updated at the exact same time, so we need to
include that timestamp in the next fetch to avoid missing users, but
also include the latest user id so we can exclude the ones already
fetched.
The following test shows how to process pages with this new method:
c03df86ee0/src/lib/features/users/user-updates-read-model.test.ts (L39-L65)
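A minimal sketch of the keyset condition this implies, assuming a knex query builder and column names (`updated_at`, `id`) that may differ from the actual read model:

```typescript
import type { Knex } from 'knex';

const getNextUsersPage = (
    db: Knex,
    lastUpdatedAt: Date,
    lastId: number,
    pageSize: number,
) =>
    db('users')
        .where((builder) =>
            builder
                // Strictly newer updates…
                .where('updated_at', '>', lastUpdatedAt)
                // …plus users sharing the last timestamp that weren't fetched yet.
                .orWhere((sameTimestamp) =>
                    sameTimestamp
                        .where('updated_at', '=', lastUpdatedAt)
                        .andWhere('id', '>', lastId),
                ),
        )
        .orderBy([
            { column: 'updated_at', order: 'asc' },
            { column: 'id', order: 'asc' },
        ])
        .limit(pageSize);
```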
## About the changes
In our code, we're not expecting constraints to be null:
https://github.com/search?q=repo%3AUnleash%2Funleash%20jsonb_array_elements(constraints)&type=code
It's unlikely to get a null value in constraints when using our API or
UI, but there might be cases where this can happen, as we saw in our
logs:
```
error: select "context_fields"."name", "context_fields"."description", "context_fields"."stickiness", "context_fields"."sort_order", "context_fields"."legal_values", "context_fields"."created_at", COUNT(DISTINCT CASE
WHEN features.archived_at IS NULL
THEN feature_strategies.project_name
END) AS used_in_projects, COUNT(DISTINCT CASE
WHEN features.archived_at IS NULL
THEN feature_strategies.feature_name
END) AS used_in_features from "context_fields" LEFT JOIN feature_strategies ON EXISTS (
SELECT 1
FROM jsonb_array_elements(feature_strategies.constraints) AS elem
WHERE elem ->> 'contextName' = context_fields.name
) left join "features" on "features"."name" = "feature_strategies"."feature_name" group by "context_fields"."name", "context_fields"."description", "context_fields"."stickiness", "context_fields"."sort_order", "context_fields"."created_at" order by "name" asc - cannot extract elements from a scalar
```
which is likely due to calling
`jsonb_array_elements(feature_strategies.constraints)` with null
constraints.
For this reason, it seems reasonable to enforce the constraint at the
database level.
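A sketch of the kind of database-level guard this implies (the backfill and the NOT NULL/default choice are assumptions about the fix, not the shipped migration):

```typescript
// Hypothetical raw-SQL migration body; the actual migration tooling and
// constraint may differ.
export const up = `
    UPDATE feature_strategies SET constraints = '[]'::jsonb WHERE constraints IS NULL;
    ALTER TABLE feature_strategies
        ALTER COLUMN constraints SET DEFAULT '[]'::jsonb,
        ALTER COLUMN constraints SET NOT NULL;
`;
```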
The archived flags link now shows up on the right side, next to
import/export, which makes it more in line with the flags overview.
Co-authored-by: Thomas Heartman <thomas@getunleash.io>
This PR cleans up the etagVariant flag. These changes were automatically
generated by AI and should be reviewed carefully.
Fixes #10711
## 🧹 AI Flag Cleanup Summary
This PR removes the `etagVariant` feature flag, making the versioned
ETag format
(`v2`) the default and only behavior for the client features API.
### 🚮 Removed
- **Feature Flag**
- Removed the `etagVariant` flag definition from `experimental.ts`.
- Removed conditional logic for ETag generation in
`client-feature-toggle.controller.ts`.
- **Testing**
- Removed parameterized tests for both states of the flag in
`feature.optimal304.e2e.test.ts`.
- Removed configuration of the `etagVariant` flag in test setup.
### 🛠 Kept
- **ETag Generation**
- The logic to generate ETags with a version suffix (`v1`) is now the
standard
behavior.
- **Testing**
- Tests have been updated to exclusively assert the presence of the `v1`
suffix in ETags.
### 📝 Why
The `etagVariant` feature flag has been successfully rolled out and is
now
considered complete. By removing the flag, we are simplifying the
codebase by
eliminating conditional paths and making the improved ETag format
permanent.
This change ensures all client API responses for features include a
versioned
ETag, which helps with cache-busting when the ETag format changes in the
future.
---------
Co-authored-by: unleash-bot <194219037+unleash-bot[bot]@users.noreply.github.com>
Co-authored-by: Gastón Fournier <gaston@getunleash.io>
Refactored and simplified code around flag filters, in preparation for
UI improvements. It's split into 2 PRs in order to simplify what needs
to be behind a flag and what doesn't.
- `ExperimentalColumnsMenu` moved to `ColumnsMenu`, old unused
`ColumnsMenu` removed
- Parts of the code moved to `ProjectFeaturesColumnsMenu`
- Moved `FlagCreationButton` to a separate file
- Removed part behind archived flag (`projectOverviewRefactorFeedback`)
Adds filter buttons for filtering between "CRs created by me" and "CRs
where I've been requested as an approver".
The current implementation is fairly simplistic and the buttons are not
connected to the actual table state directly (instead being set up with
their own simple state and onChange hooks), but it covers the simple
scenario. I want to defer a more complex solution until we know we need
it and until we know exactly what we need. The implementation is based
on the lifecycle filters that we have on the project flags page.
The current logic is as follows: when you land on the page, there are no
query params in the URL, but the data fetch applies `createdBy:IS<your
user>`. If you switch to "approval requested" (and back again), the URL
will reflect this.
For reference, GitHub's pull request pages work like this, where each URL
has a set of default filters, e.g.:
- `/pulls`: `is:open is:pr assignee:thomasheartman archived:false`
- `/pulls/review-requested`: `is:open is:pr
review-requested:thomasheartman archived:false`
But if you change the default filters or add new ones, the URL will
update to `pulls?<query-string>` (e.g.
`/pulls?q=is%3Aopen+is%3Apr+review-requested%3Athomasheartman+archived%3Atrue`)
So this takes a similar approach, but better suited to the way we do
tables in general.
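A minimal sketch of that defaulting behavior (parameter names and the filter syntax are illustrative assumptions based on the description above):

```typescript
type CrFilterState = { createdBy?: string; requestedApproverId?: string };

const resolveCrFilter = (
    search: URLSearchParams,
    currentUserId: string,
): CrFilterState => {
    // Fresh page load with no query params: apply "created by me" without
    // writing anything back to the URL.
    if (!search.has('createdBy') && !search.has('requestedApproverId')) {
        return { createdBy: `IS:${currentUserId}` };
    }
    // Otherwise the URL is the source of truth, e.g. after switching to
    // "approval requested" and back again.
    return {
        createdBy: search.get('createdBy') ?? undefined,
        requestedApproverId: search.get('requestedApproverId') ?? undefined,
    };
};
```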
Rendered:
<img width="1816" height="791" alt="image"
src="https://github.com/user-attachments/assets/60935900-488d-4ca9-b110-39f3568a08a6"
/>
<img width="1855" height="329" alt="image"
src="https://github.com/user-attachments/assets/5e865a2e-8fdc-41ab-ba38-bbe6776d04ad"
/>