https://linear.app/unleash/issue/2-3932/cloned-environments-enable-disabled-strategies-unexpectedly
Cloning environments didn't work as expected. This fixes a few
issues:
- Disabled strategies remain disabled after cloning
- All strategy properties are cloned (e.g. the title)
- Strategy cloning respects the selected projects
- Release plans and their milestones are now correctly cloned
Fix for
[https://github.com/Unleash/unleash/security/code-scanning/81](https://github.com/Unleash/unleash/security/code-scanning/81)
To prevent information exposure through stack traces, ensure that the
HTTP response sent to clients contains only sanitized, generic error
information, such as a status code and a simple message. Internal
details (including stack traces, error types, or internal error codes)
should not be sent to the client. These can be safely logged on the
server for debugging.
**The fix:**
- Do not return the entire `finalError` object as JSON to the client, as
it may include fields like `stack` or `internalMessage`.
- Instead, return only a subset of fields that are safe to expose to the
user, in this case just `message` (see the sketch after this list).
- Log the full error and any debugging details using the server-side
logger **as currently done**.
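As a hedged sketch of the pattern (an Express-style handler; `finalError`, the
status-code field, and the logger here are stand-ins, not the exact Unleash
code):
```typescript
import type { Request, Response, NextFunction } from 'express';

// Stand-in for the server-side logger; the real application logger differs.
const logger = console;

export const errorHandler = (
    finalError: Error & { statusCode?: number },
    req: Request,
    res: Response,
    next: NextFunction,
): void => {
    // Log the full error server-side, including the stack trace.
    logger.error('Request failed', finalError);

    // Expose only sanitized, generic information to the client; never the
    // whole error object, which may carry stack or internalMessage fields.
    res.status(finalError.statusCode ?? 500).json({
        message: finalError.message,
    });
};
```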
---
_Suggested fixes powered by Copilot Autofix. Review carefully before
merging._
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
# Summary
Add optional lazy collection with TTL to our createGauge wrapper,
allowing a gauge to fetch its value on scrape and cache it for a
configurable duration. This lets us register a collect function directly
at gauge declaration without changing existing call sites or behavior.
We're experimenting with this, which is why we're only applying the
solution to `users_total`; we'll evaluate afterwards.
# Problem
- Some gauges should be computed on scrape (e.g., expensive or external
lookups) instead of being pushed continuously.
- Our current `createGauge` helper doesn’t make it easy to attach a
`collect` with caching. Each caller has to reimplement timing, caching,
and error handling.
- This leads to repeated costly work, inconsistent handling of unknown
values, and boilerplate.
# What changed
- `createGauge` now accepts two optional options in addition to the
usual prom-client options:
- `fetchValue?: () => Promise<number | null>`
- `ttlMs?: number`
- When `fetchValue` is provided:
- We install a `collect` that fetches on scrape.
- Successful values are cached for `ttlMs` milliseconds (if `ttlMs` >
0).
- If `ttlMs` is 0 or omitted, we fetch on every scrape.
- If `fetchValue` returns `null` or throws, we set `NaN` (indicating
"unknown"); see the sketch after this list.
# Behavior details
## Caching:
- A value is “fresh” when successfully fetched within `ttlMs`.
- Only numeric successes are cached. `null` and errors are not cached;
we'll refetch on the next scrape.
## Unknown values:
- null or thrown errors set the gauge to `NaN` so Prometheus won’t treat
it as zero.
## Compatibility:
- Backward compatible. Existing uses of `createGauge` are unchanged.
If a user-supplied `collect` exists, it still runs after the TTL logic
and may overwrite the value (by design).
- API remains the same for the returned wrapper: `{ gauge, labels,
reset, set }` (see the usage sketch below).
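For illustration, a hypothetical `users_total` registration using the new
options; `userStore.count()` is an assumed data source, not a real API:
```typescript
// Assumed data source for the example.
declare const userStore: { count: () => Promise<number | null> };

// Compute users_total on scrape and cache the result for an hour.
const usersTotal = createGauge({
    name: 'users_total',
    help: 'Total number of users',
    fetchValue: () => userStore.count(), // null => gauge reports NaN
    ttlMs: 60 * 60 * 1000,
});
```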
## About the changes
This allows users to add JSON Patch `test` operations to guard against
concurrent modifications. From
https://github.com/orgs/Unleash/discussions/10707#discussioncomment-14602784
E.g., if you had this feature flag strategy configuration:
```
{
  "name": "flexibleRollout",
  "constraints": [
    {
      "contextName": "a",
      "operator": "IN",
      "values": [
        "100", "200", "300", "400", "500"
      ],
      "caseInsensitive": false,
      "inverted": false
    }
  ],
  "parameters": {
    "rollout": "100",
    "stickiness": "default",
    "groupId": "api-access"
  },
  "variants": [],
  "segments": [
    122
  ],
  "disabled": false
}
```
and you'd like to remove the value "300" from the constraints, you'd
have to first get the current values and then PATCH the strategy with the
following body:
```
[{ "op": "remove", "path": "/constraints/0/values/2" }]
```
This could fail in the case of concurrent modifications (e.g. if someone
removed the value "100", the index of "300" would no longer be 2).
With the test operation, you'd be able to add a protection mechanism to
validate that the value at index 2 is still 300:
```
[
  { "op": "test", "path": "/constraints/0/values/2", "value": "300" },
  { "op": "remove", "path": "/constraints/0/values/2" }
]
```
If the test fails, the remove operation will not be applied.
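For reference, a hedged sketch of sending such a guarded patch from
TypeScript; the endpoint path, placeholder values, and token handling are
assumptions, so check the Unleash API reference for the exact route:
```typescript
// Placeholder values for the example; substitute your own.
const UNLEASH_URL = 'http://localhost:4242';
const API_TOKEN = '<admin-api-token>';
const route =
    '/api/admin/projects/default/features/<feature>' +
    '/environments/<environment>/strategies/<strategy-id>'; // assumed shape

const patch = [
    // Guard: fail fast if index 2 no longer holds "300".
    { op: 'test', path: '/constraints/0/values/2', value: '300' },
    { op: 'remove', path: '/constraints/0/values/2' },
];

const response = await fetch(`${UNLEASH_URL}${route}`, {
    method: 'PATCH',
    headers: {
        'Content-Type': 'application/json',
        Authorization: API_TOKEN,
    },
    body: JSON.stringify(patch),
});

// A failed test op should surface as a non-2xx response,
// leaving the strategy unmodified.
if (!response.ok) {
    console.error('Patch rejected:', response.status, await response.text());
}
```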
I've tested this locally and it works as expected:
1. If the value is still 300, it will be removed.
2. The operation will fail if the value is no longer 300 because of
another change.