Instead of throwing an error when the project doesn't exist, we treat
the names as valid, because we have nothing to indicate that they're not.
Presumably there is already something in place to prevent you from
importing into a non-existent project.
## About the changes
This feature allows our Enterprise customers to configure banners to be
displayed on their Unleash instance for all their users to see and
interact with. Previously known as "internal message banners".
Optimizations:
1. Removed extra round trip to database to count environments
2. Removed extra round trip to database to count features
Fixes:
Previously, we used an overly optimistic query to set the correct limit
and offset. This breaks as soon as we join tags.
```ts
query = query
    .select(selectColumns)
    .limit(limit * environmentCount)
    .offset(offset * environmentCount);
```
The solution was to use common table expressions, so we could count and
rank features.
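To sketch the idea (this isn't the exact query from this PR; the table
and column names, and the `dense_rank` ordering, are assumptions):

```ts
import type { Knex } from 'knex';

declare const db: Knex; // the app's knex instance (assumed)
declare const selectColumns: string[];
declare const limit: number;
declare const offset: number;

// Rank distinct features in a CTE first, then join tags. The tag join
// can no longer inflate the page, because pagination happens on ranks.
const query = db
    .with('ranked_features', (qb) => {
        qb.select('name')
            .select(db.raw('dense_rank() over (order by created_at) as rank'))
            .from('features');
    })
    .select(selectColumns)
    .from('ranked_features')
    .leftJoin('feature_tag', 'feature_tag.feature_name', 'ranked_features.name')
    .whereBetween('ranked_features.rank', [offset + 1, offset + limit]);
```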
Rename the event to `SCHEDULED_CHANGE_REQUEST_EXECUTED`.
This event will be triggered when the executor runs a scheduled change
request.
The ChangeRequestApplied event will remain as is (going out to project
members), but will have a `scheduled = true` property in the data if it
was scheduled.
This new event will fire on execution of the schedule and have a
`result = "failed" | "succeeded"` property.
Because notifications are tied to events, this notification will go out
to the creator and the applier.
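For illustration, a rough sketch of the payload; everything beyond the
`result` property is an assumption:

```ts
// Hypothetical event shape; field names other than `result` are assumed.
type ScheduledChangeRequestExecutedEvent = {
    type: 'SCHEDULED_CHANGE_REQUEST_EXECUTED';
    data: {
        changeRequestId: number;
        result: 'failed' | 'succeeded';
    };
};
```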
This PR updates the segment usage counting to also include segment usage
in pending change requests.
The changes include:
- Updating the schema to explicitly call out that change request usage
is included.
- Adding two tests to verify the new features
- Writing an alternate query to count this data
Specifically, it'll update the part of the UI that tells you how many
places a segment is used:
![image](https://github.com/Unleash/unleash/assets/17786332/a77cf932-d735-4a13-ae43-a2840f7106cb)
## Implementation
Implementing this was a little tricky. Previously, we'd just count
distinct instances of feature names and project names on the
`feature_strategy` table. However, to merge this with change request
data, we can't just count existing usage and change request usage
separately, because that could cause duplicates.
Instead of turning this into a complex DB query, I've broken it up into
a few separate queries and done the merging in JS (there's a sketch of
the merge after the breakdown below). I think that's more readable and
easier to reason about.
Here's the breakdown:
1. Get the list of pending change requests. We need their IDs and their
projects.
2. Get the list of `updateStrategy` and `addStrategy` events that have
segment data.
3. Turn the result from step 2 into a dictionary of segment id to usage
data.
4. Query the `feature_strategy_segment` and `feature_strategies` tables
to get existing segment usage data.
5. Fold that data into the change request data.
6. Perform the preexisting segment query (without the counting logic) to
get the other segment data.
7. Enrich the results of the query from step 6 with the usage data.
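To make the merging concrete, here's a minimal sketch of the fold in
step 5; the shapes and names are assumptions, not the actual code:

```ts
type Usage = { project: string; feature: string };

// Fold existing usage into the change request usage map, de-duplicating
// by project + feature so nothing is counted twice.
const mergeUsage = (
    changeRequestUsage: Map<number, Usage[]>,
    existingUsage: Map<number, Usage[]>,
): Map<number, Usage[]> => {
    const merged = new Map(changeRequestUsage);
    for (const [segmentId, usages] of existingUsage) {
        const current = merged.get(segmentId) ?? [];
        const seen = new Set(current.map((u) => `${u.project}:${u.feature}`));
        const fresh = usages.filter(
            (u) => !seen.has(`${u.project}:${u.feature}`),
        );
        merged.set(segmentId, [...current, ...fresh]);
    }
    return merged;
};
```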
## Discussion points
I feel like this could be done in a nicer way, so any ideas on how to
achieve that (whether that's as a DB query or just breaking up the code
differently) are very welcome.
Second, using multiple queries obviously incurs more overhead than a
single one would. However, I don't think this is in the hot path, so I
don't consider performance to be critical here, but I'm open to hearing
opposing thoughts on this, of course.
https://linear.app/unleash/issue/SR-169/ticket-1107-project-feature-flag-limit-is-not-correctly-updated

Fixes #5315, an issue where it would not be possible to set an empty
flag limit.
This also fixes the UI behavior: Before, when the flag limit field was
emptied, it would disappear from the UI.
I'm a bit unsure of the original intent of the `(data.defaultStickiness
!== undefined || data.featureLimit !== undefined)` condition. We're in
an update method, triggered by a PUT endpoint - I think it's safe to
assume that we'll always want to set these values to whatever they come
as, we just need to convert them to `null` in case they are not present
(i.e. `undefined`).
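Under that assumption, the update boils down to something like this
(column names are hypothetical):

```ts
type ProjectUpdate = { defaultStickiness?: string; featureLimit?: number };
declare const data: ProjectUpdate;

// Always persist the incoming values, mapping absent (undefined) fields
// to null so they are cleared rather than silently skipped.
const update = {
    default_stickiness: data.defaultStickiness ?? null,
    feature_limit: data.featureLimit ?? null,
};
```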
This fixes an edge case not caught originally in
https://github.com/Unleash/unleash/pull/5304 - When creating a new
segment on the global level:
- There is no `projectId`, either in the params or body
- The `UPDATE_PROJECT_SEGMENT` permission is still part of the
permissions checked on the endpoint
- There is no `id` on the params
This made it so that we would run `segmentStore.get(id)` with an
undefined `id`, causing issues.
The fix was simply checking for the presence of `params.id` before
proceeding.
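Roughly (a sketch, not the exact diff; the store shape is assumed):

```ts
declare const params: { id?: string };
declare const segmentStore: { get: (id: number) => Promise<unknown> };

// Only look the segment up when an id is actually present.
if (params.id !== undefined) {
    const segment = await segmentStore.get(Number(params.id));
    // ... continue with the segment-specific permission check
}
```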
This PR hooks up the changes introduced in #5301 to the API and puts
them behind a feature flag. A new test has been added and the test setup
has been slightly tweaked to allow this test.
When the flag is enabled, the API will no longer let you delete a
segment that's used in any active CRs.
https://linear.app/unleash/issue/SR-164/ticket-1106-user-with-createedit-project-segment-is-not-able-to-edit-a
Fixes a bug where the `UPDATE_PROJECT_SEGMENT` permission is not
respected, both on the UI and on the API. The original intention was
stated
[here](https://github.com/Unleash/unleash/pull/3346#discussion_r1140434517).
This was easy to fix on the UI, since we were simply missing the extra
permission on the button permission checks.
Unfortunately, the API side is trickier. Our auth middleware tries to
grab the `project` information from either the params or the body
object, but our `DELETE` method contains neither: there is no body, and
the endpoint looks like `/admin/segments/:id`, including only the
segment id.
This means that, in the rbac middleware when we check the permissions,
we need to figure out if we're in such a scenario and fetch the project
information from the DB, which feels a bit hacky, but it's something
we're seemingly already doing for features, so at least it's somewhat
consistent.
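A hedged sketch of what that fallback could look like inside the RBAC
middleware; all names here are assumptions:

```ts
declare const params: { projectId?: string; id?: string };
declare const body: { project?: string } | undefined;
declare const segmentStore: {
    get: (id: number) => Promise<{ project?: string }>;
};

let project = params.projectId ?? body?.project;
if (project === undefined && params.id !== undefined) {
    // DELETE /admin/segments/:id carries no project info, so resolve it
    // from the database before running the permission check.
    const segment = await segmentStore.get(Number(params.id));
    project = segment?.project;
}
```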
Ideally, we could leave this API alone and create a separate one for
project segments, with endpoints that include the project as a param,
like so:
`http://localhost:4242/api/admin/projects/:projectId/segments/1`.
This PR opts for the quick and hacky solution for now, since this is an
issue we want to fix quickly, but it's something we should be aware of.
I'm also unsure whether we want to create a new API for project
segments. If we decide we want a different solution, I don't mind
adapting this PR or creating a follow-up.
This test was flaky because it relied on the order of the returned
array. To make it less flaky, we now turn the array into an object and
compare that instead.
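For instance, something like this (field names are hypothetical,
jest-style `expect` assumed):

```ts
declare const results: Array<{ name: string }>;
declare const expectedByName: Record<string, { name: string }>;

// Key the results by a stable field so the assertion no longer depends
// on the order the rows come back in.
const byName = Object.fromEntries(results.map((r) => [r.name, r]));
expect(byName).toEqual(expectedByName);
```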
This PR adds a way to tell if a specific segment is being used in any
active change requests. It's the first step towards preventing segments
that are being used in change requests from being deleted.
It does that by checking the DB for any unclosed CRs and using those CR
ids to look for `addStrategy` and `updateStrategy` events in the CR
events table.
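In rough terms, the lookup might resemble this; the table names, column
names, and state values are assumptions:

```ts
import type { Knex } from 'knex';

declare const db: Knex; // the app's knex instance (assumed)

// Find strategy events attached to any change request that hasn't been
// closed yet; these may reference segments.
const activeCrIds = await db('change_requests')
    .whereNotIn('state', ['Applied', 'Cancelled', 'Rejected'])
    .pluck('id');

const segmentEvents = await db('change_request_events')
    .whereIn('change_request_id', activeCrIds)
    .whereIn('action', ['addStrategy', 'updateStrategy']);
```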
## Upcoming PRs
This PR only adds a way to detect this; it isn't hooked up to anything
yet. That'll happen in an upcoming iteration.
The `dataPath` property was present (but not in the type) in previous
versions of the error library that we use. With the recent major
upgrade, it's been removed, and the `instancePath` property has finally
come into use. This PR removes all the handling for the previous
property and replaces it with `instancePath`. Because `dataPath` used
full stops and `instancePath` uses slashes, we need to change a little
bit of the handling too.
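For illustration, assuming an Ajv-style library (where `dataPath` was
dot-separated and `instancePath` is a JSON pointer), the conversion is
small:

```ts
// Derive a dotted property path from the slash-separated instancePath,
// e.g. "/parameters/name" -> "parameters.name".
const propertyName = (error: { instancePath: string }): string =>
    error.instancePath.split('/').filter(Boolean).join('.');
```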
Switch the express-openapi implementation from our internal fork to the
upstream version. We have upstreamed our changes and a new version has
been released, so this should be the last step before we can retire our
fork.
Because some of the dependencies have been updated since our internal
fork, we also need to update some of our error handling to reflect this.
This also exposes the new interface while getting rid of unneeded
compiler ignores.
None of the changes should add new security risks, despite this report:
> Code scanning results / CodeQL Failing after 4s — 2 new alerts
including 2 high severity security vulnerabilities
Not sure what that means; maybe it's related to a removed ignore...
Sort the items before inserting them into the database in order to
reduce the chance of deadlocks happening when multiple pods are
inserting at the same time.
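A minimal sketch of the idea, with a hypothetical table and sort key:

```ts
import type { Knex } from 'knex';

declare const db: Knex; // the app's knex instance (assumed)
declare const items: Array<{ name: string }>;

// A deterministic insert order means concurrent writers acquire row
// locks in the same sequence, which avoids lock-ordering cycles.
const sorted = [...items].sort((a, b) => a.name.localeCompare(b.name));
await db('items').insert(sorted);
```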