https://linear.app/unleash/issue/2-3932/cloned-environments-enable-disabled-strategies-unexpectedly
Cloning environments didn't work as expected. This fixes a few issues:
- Disabled strategies remain disabled after cloning
- All strategy properties are cloned (including e.g. title)
- Strategy cloning respects the selected projects
- Release plans and their milestones are now correctly cloned
Fix for
[https://github.com/Unleash/unleash/security/code-scanning/81](https://github.com/Unleash/unleash/security/code-scanning/81)
To prevent information exposure through stack traces, ensure that the
HTTP response sent to clients contains only sanitized, generic error
information, such as a status code and a simple message. Internal
details (including stack traces, error types, or internal error codes)
should not be sent to the client. These can be safely logged on the
server for debugging.
**The fix:**
- Do not return the entire `finalError` object as JSON to the client, as
it may include fields like `stack` or `internalMessage`.
- Instead, return only a subset of fields that are safe to expose to the
user, in this case just `message`.
- Log the full error and any debugging details using the server-side
logger **as currently done**.
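A minimal sketch of that shape, assuming an Express-style error middleware (the `logger` constant stands in for the server-side logger, and `statusCode` is an assumed field on the error; the actual Unleash middleware differs):
```
import type { NextFunction, Request, Response } from 'express';

const logger = console; // stand-in for the server-side logger

// `finalError` follows the naming in the description above.
const errorHandler = (
    finalError: Error & { statusCode?: number },
    req: Request,
    res: Response,
    next: NextFunction,
) => {
    // Full details (stack trace, internal codes) stay on the server...
    logger.error('Request failed', finalError);

    // ...while the client only receives safe, generic fields.
    res.status(finalError.statusCode ?? 500).json({
        message: finalError.message,
    });
};
```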
Makes the default width calculation for the highlighter plugin first try
the number of entries in the x-axis data set, falling back to the number
of categories if that is not available. This is probably the more
"correct" / "do what I mean" approach to setting the highlighter width.
Fixes an issue where the highlight would be too wide if the labels were
set to "auto" instead of "data".
The width function is only used in two places (the archived:created
graph and the network graph). Both of those work fine with the new
update.
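As a rough sketch of the fallback (illustrative only; the actual plugin code and names differ):
```
import type { Chart } from 'chart.js';

// Illustrative fallback: prefer the number of entries in the x-axis
// data set, then the number of category labels, when computing how
// wide one highlighted "slot" should be.
const defaultHighlightWidth = (chart: Chart): number => {
    const entryCount = chart.data.datasets[0]?.data.length;
    const categoryCount = chart.data.labels?.length;

    const count = entryCount || categoryCount;
    return count ? chart.chartArea.width / count : 0;
};
```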
Before:
<img width="831" height="401" alt="image"
src="https://github.com/user-attachments/assets/8487b95f-cc49-4ff6-a519-7f79e1048eed"
/>
After:
<img width="886" height="378" alt="image"
src="https://github.com/user-attachments/assets/ad2102cb-3342-4a28-aa54-6b31caa495e1"
/>
# Summary
Add optional lazy collection with TTL to our createGauge wrapper,
allowing a gauge to fetch its value on scrape and cache it for a
configurable duration. This lets us register a collect function directly
at gauge declaration without changing existing call sites or behavior.
We're experimenting with this, which is why we're only applying the
solution to `users_total`; we'll evaluate afterwards.
# Problem
- Some gauges should be computed on scrape (e.g., expensive or external
lookups) instead of being pushed continuously.
- Our current `createGauge` helper doesn’t make it easy to attach a
`collect` with caching. Each caller has to reimplement timing, caching,
and error handling.
- This leads to repeated costly work, inconsistent handling of unknown
values, and boilerplate.
# What changed
- `createGauge` now accepts two optional options in addition to the
usual prom-client options:
- `fetchValue?: () => Promise<number | null>`
- `ttlMs?: number`
- When `fetchValue` is provided:
- We install a `collect` that fetches on scrape.
- Successful values are cached for `ttlMs` milliseconds (if `ttlMs` >
0).
- If `ttlMs` is 0 or omitted, we fetch on every scrape.
- If `fetchValue` returns null or throws, we set `NaN` (indicates
`"unknown"`).
# Behavior details
## Caching:
- A value is “fresh” when successfully fetched within `ttlMs`.
- Only numeric successes are cached. null and errors are not cached;
we’ll refetch on the next scrape.
## Unknown values:
- null or thrown errors set the gauge to `NaN` so Prometheus won’t treat
it as zero.
## Compatibility:
- Backward compatible. Existing uses of `createGauge` are unchanged.
If a user-supplied `collect` exists, it still runs after the TTL logic
(can overwrite the value by design).
- API remains the same for the returned wrapper: `{ gauge, labels,
reset, set }`.
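For reference, a minimal sketch of how the TTL'd `collect` can be wired up. The option names (`fetchValue`, `ttlMs`) and the wrapper shape come from the description above; everything else is an illustrative reconstruction, not the actual Unleash implementation:
```
import { Gauge, type GaugeConfiguration } from 'prom-client';

type LazyGaugeOptions<T extends string> = GaugeConfiguration<T> & {
    fetchValue?: () => Promise<number | null>;
    ttlMs?: number;
};

export const createGauge = <T extends string>(
    options: LazyGaugeOptions<T>,
) => {
    const { fetchValue, ttlMs = 0, ...gaugeOptions } = options;
    let cachedValue: number | null = null;
    let cachedAt = 0; // timestamp of the last successful fetch

    const gauge = new Gauge<T>({
        ...gaugeOptions,
        async collect() {
            if (fetchValue) {
                const fresh =
                    cachedValue !== null && Date.now() - cachedAt < ttlMs;
                if (!fresh) {
                    try {
                        cachedValue = await fetchValue();
                        if (cachedValue !== null) cachedAt = Date.now();
                    } catch {
                        cachedValue = null; // errors are never cached
                    }
                }
                // null / failure maps to NaN, i.e. "unknown", so
                // Prometheus won't treat the value as zero.
                this.set(cachedValue ?? NaN);
            }
            // A user-supplied collect still runs after the TTL logic
            // and may overwrite the value by design.
            await gaugeOptions.collect?.call(this);
        },
    });

    return {
        gauge,
        labels: gauge.labels.bind(gauge),
        reset: gauge.reset.bind(gauge),
        set: gauge.set.bind(gauge),
    };
};
```
At the call site this keeps declaration-time registration, e.g. `createGauge({ name: 'users_total', help: '...', fetchValue: () => countUsers(), ttlMs: 60_000 })` (the `countUsers` helper is illustrative).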
Implements batching of data points in the archived:created chart: when
there are 12 or more weeks of data, the data is grouped into batches of
4 weeks at a time. When we batch data, we also switch the labeling to be
month-based and auto-generated (cf. the inline comment with more
details).
<img width="798" height="317" alt="image"
src="https://github.com/user-attachments/assets/068ee528-a6d6-4aaf-ac81-c729c2c813d1"
/>
The current implementation batches into groups of 4 weeks, but this can
easily be parameterized to support arbitrary batch sizes.
Because of the batching, we also now need to adjust the tooltip title in
those cases. This is handled by a callback.
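A sketch of the grouping step (the type and function names here are hypothetical, not from the actual chart code):
```
type WeeklyPoint = { week: string; archived: number; created: number };

const BATCH_THRESHOLD = 12; // only batch once we have 12+ weeks
const BATCH_SIZE = 4; // weeks per batch; easy to parameterize

const batchWeeks = (points: WeeklyPoint[]): WeeklyPoint[] => {
    if (points.length < BATCH_THRESHOLD) return points;

    const batched: WeeklyPoint[] = [];
    for (let i = 0; i < points.length; i += BATCH_SIZE) {
        const group = points.slice(i, i + BATCH_SIZE);
        batched.push({
            week: group[0].week, // a batch is keyed by its first week
            archived: group.reduce((sum, p) => sum + p.archived, 0),
            created: group.reduce((sum, p) => sum + p.created, 0),
        });
    }
    return batched;
};
```
The tooltip title callback can then render the batch's date range instead of a single week's label.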
## About the changes
This would allow users to add `test` operations to protect against
concurrent modifications. From
https://github.com/orgs/Unleash/discussions/10707#discussioncomment-14602784
E.g. if you had this feature flag configuration:
```
{
"name": "flexibleRollout",
"constraints": [
{
"contextName": "a",
"operator": "IN",
"values": [
"100", "200", "300", "400", "500"
],
"caseInsensitive": false,
"inverted": false
}
],
"parameters": {
"rollout": "100",
"stickiness": "default",
"groupId": "api-access"
},
"variants": [],
"segments": [
122
],
"disabled": false
}
```
And you'd like to remove the value 300 from the constraints, you'd have
to first get the current values and then PATCH the strategy with the
following body:
```
[{ "op": "remove", "path": "/constraints/0/values/2" }]
```
This could fail in case of concurrent modifications (e.g. if someone
removed the value "100", then the index to remove "300" will no longer
be 2).
With the test operation, you'd be able to add a protection mechanism to
validate that the value at index 2 is still 300:
```
[
{ "op": "test", "path": "/constraints/0/values/2", "value": "300" },
{ "op": "remove", "path": "/constraints/0/values/2" }
]
```
If the test fails, the remove operation will not be applied.
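These are the RFC 6902 semantics: a patch applies atomically in order, so a failed `test` aborts it before later operations run. A small illustration using the fast-json-patch library (the library choice here is only for illustration):
```
import { applyPatch, type Operation } from 'fast-json-patch';

const strategy = {
    constraints: [{ values: ['100', '200', '300', '400', '500'] }],
};

const patch: Operation[] = [
    { op: 'test', path: '/constraints/0/values/2', value: '300' },
    { op: 'remove', path: '/constraints/0/values/2' },
];

try {
    // validateOperation = true, mutateDocument = false
    const { newDocument } = applyPatch(strategy, patch, true, false);
    console.log(newDocument.constraints[0].values);
    // -> ['100', '200', '400', '500']
} catch (err) {
    // The failed test throws before the remove is applied, which is
    // how a concurrent modification surfaces.
    console.error('Patch rejected:', (err as Error).message);
}
```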
I've tested this locally and it works as expected:
1. If the value is still 300, it will remove it
2. The operation will fail if the value is no longer 300 because of
another change.
Multiple users can be updated at the exact same timestamp, so we need to
include that timestamp in the next fetch to avoid missing users, but
also include the latest user id so we can exclude the ones already
fetched.
The following test shows how to process pages with this new method:
c03df86ee0/src/lib/features/users/user-updates-read-model.test.ts (L39-L65)
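A sketch of the resulting keyset predicate, assuming a knex query builder and illustrative table/column names (`users`, `updated_at`, `id`):
```
import type { Knex } from 'knex';

const PAGE_SIZE = 100; // illustrative page size

// Rows strictly newer than the cursor timestamp, OR rows sharing that
// exact timestamp but with an id above the last one already fetched.
const getNextPage = (db: Knex, cursor: { updatedAt: Date; id: number }) =>
    db('users')
        .where('updated_at', '>', cursor.updatedAt)
        .orWhere((qb) =>
            qb
                .where('updated_at', cursor.updatedAt)
                .andWhere('id', '>', cursor.id),
        )
        .orderBy([
            { column: 'updated_at', order: 'asc' },
            { column: 'id', order: 'asc' },
        ])
        .limit(PAGE_SIZE);
```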