Instead of throwing an error when the project doesn't exist, we say that
the names are valid, because we have nothing to say that they're not.
Presumably there is already something in place to prevent you from
importing into a non-existent project.
## About the changes
This feature allows our Enterprise customers to configure banners to be
displayed on their Unleash instance for all their users to see and
interact with. Previously known as "internal message banners".
Optimizations:
1. Removed extra round trip to database to count environments
2. Removed extra round trip to database to count features
Fixes:
We were using an overly optimistic query to set the correct limit and
offset. This breaks as soon as we join tags.
```ts
query = query
    .select(selectColumns)
    .limit(limit * environmentCount)
    .offset(offset * environmentCount);
```
The solution was to use common table expressions, so we could count and
rank features.
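As a rough illustration of the approach (not the exact query from the PR; table and column names are assumptions), a CTE lets us rank features once and page on that rank before any joins can multiply rows:

```ts
import knex from 'knex';

const db = knex({ client: 'pg', connection: process.env.DATABASE_URL });
const limit = 25;
const offset = 0;

// Rank features in a CTE so that joining tags/environments later
// cannot push the page past the intended limit/offset.
const page = await db
    .with('ranked_features', (qb) => {
        qb.select('features.name')
            .select(
                db.raw(
                    'row_number() over (order by features.created_at) as feature_rank',
                ),
            )
            .from('features');
    })
    .select('*')
    .from('ranked_features')
    .whereBetween('feature_rank', [offset + 1, offset + limit]);
```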
Rename event to SCHEDULED_CHANGE_REQUEST_EXECUTED
This event will be triggered when the executor runs a scheduled change
request.
The ChangeRequestApplied event will remain as is (going out to project
members), but will have a `scheduled = true` property in the data if it
was scheduled.
This new event will fire on execution of the schedule and have a `result
= "failed" | "succeeded"` property.
Because notifications are tied to events, this notification will go out
to the creator and the applier.
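Sketch of the payload described above (the exact field names are assumptions):

```ts
// Hypothetical shape of the new event, based on the description above.
type ScheduledChangeRequestExecuted = {
    type: 'SCHEDULED_CHANGE_REQUEST_EXECUTED';
    data: {
        changeRequestId: number;
        result: 'failed' | 'succeeded';
    };
};
```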
---------
Signed-off-by: andreas-unleash <andreas@getunleash.ai>
This PR updates the segment usage counting to also include segment usage
in pending change requests.
The changes include:
- Updating the schema to explicitly call out that change request usage
is included.
- Adding two tests to verify the new features
- Writing an alternate query to count this data
Specifically, it'll update the part of the UI that tells you how many
places a segment is used:
![image](https://github.com/Unleash/unleash/assets/17786332/a77cf932-d735-4a13-ae43-a2840f7106cb)
## Implementation
Implementing this was a little tricky. Previously, we'd just count
distinct instances of feature names and project names on the
feature_strategy table. However, to merge this with change request data,
we can't just count existing usage and change request usage separately,
because that could cause duplicates.
Instead of turning this into a complex DB query, I've broken it up into
a few separate queries and done the merging in JS. I think that's more
readable and it was easier to reason about.
Here's the breakdown:
1. Get the list of pending change requests. We need their IDs and their
project.
2. Get the list of updateStrategy and addStrategy events that have
segment data.
3. Take the result from step 2 and turn it into a dictionary of segment
id to usage data.
4. Query the feature_strategy_segment and feature_strategies tables to
get existing segment usage data.
5. Fold that data into the change request data (see the sketch after
this list).
6. Perform the preexisting segment query (without counting logic) to get
other segment data.
7. Enrich the results of the query from step 6 with usage data.
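A small sketch of the JS-side merge from step 5 (types are illustrative, not the real ones):

```ts
type SegmentUsage = { features: Set<string>; projects: Set<string> };

// Merge existing strategy usage into the change-request usage map,
// deduplicating by feature and project name instead of summing counts.
const mergeUsage = (
    changeRequestUsage: Map<number, SegmentUsage>,
    existingUsage: Map<number, SegmentUsage>,
): Map<number, SegmentUsage> => {
    const merged = new Map<number, SegmentUsage>();
    const addAll = (source: Map<number, SegmentUsage>) => {
        for (const [segmentId, usage] of source) {
            const current = merged.get(segmentId) ?? {
                features: new Set<string>(),
                projects: new Set<string>(),
            };
            usage.features.forEach((f) => current.features.add(f));
            usage.projects.forEach((p) => current.projects.add(p));
            merged.set(segmentId, current);
        }
    };
    addAll(changeRequestUsage);
    addAll(existingUsage);
    return merged;
};
```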
## Discussion points
I feel like this could be done in a nicer way, so any ideas on how to
achieve that (whether that's as a db query or just breaking up the code
differently) are very welcome.
Second, using multiple queries obviously yields more overhead than just
a single one. However, I do not think this is in the hot path, so I
don't consider performance to be critical here, but I'm open to hearing
opposing thoughts on this of course.
https://linear.app/unleash/issue/SR-169/ticket-1107-project-feature-flag-limit-is-not-correctly-updated
Fixes #5315, an issue where it would not be possible to set an empty
flag limit.
This also fixes the UI behavior: Before, when the flag limit field was
emptied, it would disappear from the UI.
I'm a bit unsure of the original intent of the `(data.defaultStickiness
!== undefined || data.featureLimit !== undefined)` condition. We're in
an update method, triggered by a PUT endpoint - I think it's safe to
assume that we'll always want to set these values to whatever they come
as, we just need to convert them to `null` in case they are not present
(i.e. `undefined`).
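In other words, something along these lines (names are illustrative):

```ts
// Sketch of the intent described above: always persist whatever the PUT
// sends, converting missing optional fields to null.
const toUpdateRow = (data: {
    defaultStickiness?: string;
    featureLimit?: number;
}) => ({
    default_stickiness: data.defaultStickiness ?? null,
    feature_limit: data.featureLimit ?? null,
});
```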
This fixes an edge case not caught originally in
https://github.com/Unleash/unleash/pull/5304 - When creating a new
segment on the global level:
- There is no `projectId`, either in the params or body
- The `UPDATE_PROJECT_SEGMENT` is still a part of the permissions
checked on the endpoint
- There is no `id` on the params
This made it so that we would run `segmentStore.get(id)` with an
undefined `id`, causing issues.
The fix was simply checking for the presence of `params.id` before
proceeding.
This PR hooks up the changes introduced in #5301 to the API and puts
them behind a feature flag. A new test has been added and the test setup
has been slightly tweaked to allow this test.
When the flag is enabled, the API will now not let you delete a segment
that's used in any active CRs.
https://linear.app/unleash/issue/SR-164/ticket-1106-user-with-createedit-project-segment-is-not-able-to-edit-a
Fixes a bug where the `UPDATE_PROJECT_SEGMENT` permission is not
respected, both on the UI and on the API. The original intention was
stated
[here](https://github.com/Unleash/unleash/pull/3346#discussion_r1140434517).
This was easy to fix on the UI, since we were simply missing the extra
permission on the button permission checks.
Unfortunately the API can be tricky. Our auth middleware tries to grab
the `project` information from either the params or body object, but our
`DELETE` method does not contain this information. There is no body and
the endpoint looks like `/admin/segments/:id`, only including the
segment id.
This means that, in the rbac middleware when we check the permissions,
we need to figure out if we're in such a scenario and fetch the project
information from the DB, which feels a bit hacky, but it's something
we're seemingly already doing for features, so at least it's somewhat
consistent.
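Roughly, the fallback looks like this (a sketch with hypothetical store and request shapes, not the actual middleware code):

```ts
// Hypothetical sketch of the rbac fallback: when a segment permission is
// checked but the request carries no project, look it up from the segment id.
const resolveProjectForSegment = async (
    req: { params: Record<string, string>; body: Record<string, unknown> },
    segmentStore: { get: (id: number) => Promise<{ project?: string }> },
): Promise<string | undefined> => {
    const project =
        req.params.projectId ?? (req.body.project as string | undefined);
    if (project) return project;
    if (!req.params.id) return undefined;
    const segment = await segmentStore.get(Number(req.params.id));
    return segment?.project;
};
```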
Ideally what we could do is leave this API alone and create a separate
one for project segments, with endpoints where we would have project as
a param, like so:
`http://localhost:4242/api/admin/projects/:projectId/segments/1`.
This PR opts to go with the quick and hacky solution for now since this
is an issue we want to fix quickly, but this is something that we should
be aware of. I'm also unsure if we want to create a new API for project
segments. If we decide that we want a different solution I don't mind
either adapting this PR or creating a follow up.
This test was flaky because it relied on the order of the array
returned. To make it less flaky, we now turn the array into an object
instead and compare that.
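For illustration, the comparison now looks something like this inside the Jest test (sketch):

```ts
// Compare by key instead of by position to avoid order-dependent flakiness.
const byName = <T extends { name: string }>(items: T[]): Record<string, T> =>
    Object.fromEntries(items.map((item) => [item.name, item]));

expect(byName(actual)).toEqual(byName(expected));
```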
This PR adds a way to tell if a specific segment is being used in any
active change requests. It's the first step towards preventing segments
that are being used in change requests from being deleted.
It does that by checking the db for any unclosed CRs and using those CR
ids to look for "addStrategy" and "updateStrategy" events in the cr
events table.
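A rough sketch of that lookup with Knex; the table, column, and state names are assumptions based on the description, not the exact schema:

```ts
const isSegmentUsedInActiveChangeRequests = async (
    db: import('knex').Knex,
    segmentId: number,
): Promise<boolean> => {
    // Events belonging to change requests that are not yet closed.
    const rows = await db('change_request_events')
        .whereIn(
            'change_request_id',
            db('change_requests')
                .select('id')
                .whereNotIn('state', ['Applied', 'Cancelled', 'Rejected']),
        )
        .whereIn('action', ['addStrategy', 'updateStrategy'])
        .select('payload');
    return rows.some((row) => (row.payload?.segments ?? []).includes(segmentId));
};
```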
## Upcoming PRs
This only puts in a way to detect it, but doesn't add that to anything.
That'll be in an upcoming iteration.
The `dataPath` was present (but not in the type) in previous versions of
the
error library that we use. But with the recent major upgrade, it's
been removed and the `instancePath` property has finally come into use.
This PR removes all the handling for the previous property and
replaces it with `instancePath`. Because the `dataPath` used full
stops and the `instancePath` uses slashes, we need to change a little
bit of the handling too.
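Concretely, the handling shim is along these lines (sketch):

```ts
// dataPath used dots ('.parameters.name'); instancePath uses JSON-pointer
// slashes ('/parameters/name'), so code that builds property paths from the
// error needs a small conversion.
const instancePathToPropertyPath = (instancePath: string): string =>
    instancePath.split('/').filter(Boolean).join('.');

const example = instancePathToPropertyPath('/parameters/0/name');
// => 'parameters.0.name'
```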
Switch the express-openapi implementation from our internal fork to the
upstream version. We have upstreamed our changes and a new version has
been released, so this should be the last step before we can retire our
fork.
Because some of the dependencies have been updated since our internal
fork, we also need to update some of our error handling to reflect this.
Expose new interface while also getting rid of unneeded compiler ignores
None of the changes should add new security risks, despite this report:
> Code scanning results / CodeQL Failing after 4s — 2 new alerts
including 2 high severity security vulnerabilities
Not sure what that means, maybe a removed ignore...
Sort the items before inserting them into the database in order to
reduce the chance of deadlocks happening when multiple pods are
inserting at the same time.
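The idea in a nutshell (table and column names are illustrative):

```ts
const insertSorted = async (
    db: import('knex').Knex,
    rows: { feature_name: string; last_seen_at: Date }[],
) => {
    // Same lock acquisition order on every pod => fewer deadlocks.
    const sorted = [...rows].sort((a, b) =>
        a.feature_name.localeCompare(b.feature_name),
    );
    await db('last_seen_at_metrics')
        .insert(sorted)
        .onConflict('feature_name')
        .merge();
};
```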
For a while we ran a diffing algorithm in production to verify that the
results of the refactor did not differ from the previous results. As the
experiment has run its course and new attributes have been added on top
of the new flow, this will remove the logging and associated code.
`EXECUTE FUNCTION` was introduced in Postgres v11. In Postgres v10 the
syntax was `EXECUTE PROCEDURE`. This fix changes the syntax to `EXECUTE
PROCEDURE`, which is perfectly fine since our function does not return
anything.
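For reference, the trigger statement ends up looking like this (trigger and function names here are placeholders, not the real ones):

```ts
import knex from 'knex';

const db = knex({ client: 'pg', connection: process.env.DATABASE_URL });

// EXECUTE PROCEDURE is the pre-v11 spelling and is still accepted by newer
// Postgres versions, so the same migration works on both v10 and v11+.
await db.raw(`
    CREATE TRIGGER unleash_stats_trigger
        AFTER INSERT ON events
        FOR EACH ROW EXECUTE PROCEDURE unleash_stats_trigger_function();
`);
```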
### What
This PR makes the rate limit for user creation and simple login (our
password based login) configurable in the same way you can do
metricsRateLimiting.
### Worth noting
In addition, this PR adds a `rate_limit{endpoint, method}` prometheus
gauge, which gets the data from the UnleashConfig.
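A hedged sketch of how such a gauge can be registered with prom-client (the endpoint and limit values are only examples):

```ts
import { Gauge } from 'prom-client';

// Expose the configured limits so operators can see them in Prometheus.
const rateLimitGauge = new Gauge({
    name: 'rate_limit',
    help: 'Configured rate limits per endpoint and method',
    labelNames: ['endpoint', 'method'],
});

rateLimitGauge.labels('/auth/simple/login', 'POST').set(10);
```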
This PR adds a db table for CR schedules. The table has two columns:
1. `change_request` :: This acts as both a foreign key and as the
primary key for this table.
2. `scheduled_at` :: When the change is scheduled to be applied.
We could use a separate ID column for these rows and put a `unique`
constraint on the `change_request` FK, but I don't think that adds any
more value. However, I'm happy to hear other thoughts around it.
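An illustrative Knex version of the migration (Unleash's real migrations use raw SQL, and the table/FK names here are assumptions):

```ts
import { Knex } from 'knex';

export const up = async (db: Knex): Promise<void> => {
    await db.schema.createTable('change_request_schedule', (table) => {
        // Doubles as the primary key and the FK to change_requests.
        table
            .integer('change_request')
            .primary()
            .references('id')
            .inTable('change_requests')
            .onDelete('CASCADE');
        table.timestamp('scheduled_at').notNullable();
    });
};
```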
As #4475 says, MD5 is not available in secure places anymore. This PR
swaps out gravatar-url with an inline function using crypto:sha256 which
is FIPS-140-2 compliant. Since we only used this method for generating
avatar URLs the extra customization wasn't needed and we could hard code
the URL parameters.
fixes: Linear
https://linear.app/unleash/issue/SR-112/gh-support-swap-out-gravatar-url-lib
closes: #4475
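For reference, a minimal sketch of the SHA-256 approach with Node's crypto module (the hard-coded URL parameters shown are illustrative):

```ts
import { createHash } from 'crypto';

// SHA-256 is FIPS 140-2 approved, unlike MD5, and Gravatar accepts
// SHA-256 hashes of the lowercased, trimmed email address.
const generateImageUrl = (email: string, size = 42): string => {
    const hash = createHash('sha256')
        .update(email.trim().toLowerCase())
        .digest('hex');
    return `https://gravatar.com/avatar/${hash}?size=${size}&default=retro`;
};
```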
To prepare for 5.6 GA,
I've done a find through both Frontend and Backend here to remove the
usages of the flag. Seems like the flag was only in use in the frontend.
@nunogois can you confirm?
This PR adds a cleanup job that removes unknown feature flags from the
last_seen_at_metrics table every 24 hours, since we no longer have a
foreign key on the name column to the features table.
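Roughly, the cleanup boils down to a delete with a subquery; a hedged sketch with Knex, using the table/column names described above:

```ts
const removeUnknownFlags = async (db: import('knex').Knex): Promise<number> =>
    db('last_seen_at_metrics')
        .whereNotIn('feature_name', db('features').select('name'))
        .del();
```

The real job would then be wired into the 24-hour scheduler.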
## About the changes
This fixes a bug when updating a project where optional data
(defaultStickiness and featureLimit) is not part of the payload.
The problem happens due to:
1. ProjectController does not use the type: UpdateProjectSchema for the
request body (will be addressed in another PR in unleash-enterprise)
2. Project Store interface does not match UpdateProjectSchema (but it
relies on accepting `additional properties: true`, which is what we
agreed on for input)
3. Feature limit is not defined in UpdateProjectSchema (also addressed
in the other PR)
https://linear.app/unleash/issue/2-1531/rename-message-banners-to-banners
This renames "message banners" to "banners".
I also added support for external banners coming from a `banner` flag
instead of only the `messageBanner` flag, so we can eventually migrate
to the new one if we want.
## About the changes
This makes sure that projects have at least one owner, either a group or
a user. This is to prevent accidentally losing access to a project.
We check this when removing a user/group or when changing the role of a
user/group
**Note**: We can still leave a group empty as the only owner of the
project, but that's okay because we can still add more users to the
group
Sort array items before running compare. Feature flag certain properties
of strategy that were previously not present in the /api/admin/features
endpoint.
### What
The heaviest requests we serve are the register and metrics POSTs from
our SDKs/clients.
This PR adds ratelimiting to /api/client/register, /api/client/metrics,
/api/frontend/register and /api/frontend/metrics with a default set to
6000 requests per minute (or 100 rps) for each of the endpoints.
It will be overrideable by the environment variables documented.
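For illustration, here's roughly what the limiter looks like with express-rate-limit (the handler and the config wiring are simplified placeholders):

```ts
import express from 'express';
import rateLimit from 'express-rate-limit';

const app = express();

// 6000 requests per minute ≈ 100 rps per endpoint; the real limit comes
// from UnleashConfig / the documented environment variables.
const clientMetricsLimiter = rateLimit({
    windowMs: 60 * 1000,
    max: 6000,
    standardHeaders: true,
    legacyHeaders: false,
});

app.post('/api/client/metrics', clientMetricsLimiter, (req, res) => {
    res.status(202).end();
});
```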
### Points of discussion
@kwasniew already suggested using featuretoggles with variants to
control the rate per clientId. I struggled to see if we could
dynamically update the middleware after initialisation, so this attempt
will need a restart of the pod to update the request limit.
## About the changes
This small improvement aims to help developers when instantiating
services. They need to be constructed without injecting services or
stores created elsewhere so they can be bound to the same transactional
scope.
This suggests that you need to create the services and stores on your
own
This fixes a return type error by changing the logic of
`extractUsernameFromUser` to never return undefined.
In the previous code, `user` could be truthy, but that doesn't mean
`email` or `username` were defined. This assumes we always fallback to
"unknown" in those scenarios.
This PR is the first step in separating the client and admin stores.
Currently our feature toggle services uses the client store to serve
multiple purposes.
Admin API uses the feature toggle service to serve both the feature
toggle list and playground features, while the client API uses the
feature toggle service to serve client features. The admin API can
change often and have very different requirements than the client API,
which changes infrequently and generally keeps the same stable structure
for long periods of time. This architecture is error prone, because when
you need to make changes to the admin API, you can very easily affect
the client API.
I aim to put up a stone wall between the two APIs. Complete separation
between the two APIs, at the cost of some duplication.
In this PR I have created a feature-oriented architecture for client
features and disconnected the client API from the feature toggle
service. It now goes through its own service to its own store. For the
feature toggle service, I have duplicated and replaced the functionality
that serves /api/admin/features. I have kept a lot of the ugliness in
the code and haven't removed anything in order to avoid breaking
changes.
Next steps:
* Move playground to admin API
* Remove client-feature-toggle-store from feature-toggle-service
## About the changes
This splits the interfaces for import and export, especially because the
import functionality has to be replaced in enterprise repo.
This is a breaking change because of the service renames, but I'll have
the PR for the other repository ready so we reduce the time to fix. I
intentionally avoided making it backward compatible because of time.
https://linear.app/unleash/issue/2-1494/re-order-message-banners
- Re-orders message banners to fit into this logic:
>1. Maintenance banner
>2. External message banner(s) - Most likely coming from Unleash
>3. Internal message banner(s)
- Renames the feature flag to better reflect the feature behavior;
- Lays a basic skeleton structure for this new feature;
As part of gathering more telemetry on the usage of Unleash, this PR
adds a new `stat_`-prefixed table as well as a trigger on the events
table that runs on each insert to increment a counter per environment
per day.
The trigger fires on every insert into the events table, but filters and
only increments the counter for events that actually have the
environment set (there are events, like user-created, that do not relate
to a specific environment).
I'm a bit wary of this, but since we truncate down to one row per (day,
environment) combo, finding the conflict and incrementing shouldn't take
too long here.
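A rough sketch of what such a trigger function could look like; the names, the exact filter, and the assumed unique constraint on (day, environment) are all illustrative:

```ts
import { Knex } from 'knex';

export const createStatsTriggerFunction = async (db: Knex): Promise<void> => {
    await db.raw(`
        CREATE OR REPLACE FUNCTION stat_increment_environment_updates()
        RETURNS trigger AS $$
        BEGIN
            -- Only count events that relate to a specific environment.
            IF NEW.environment IS NOT NULL THEN
                INSERT INTO stat_environment_updates (day, environment, updates)
                VALUES (DATE_TRUNC('day', NEW.created_at), NEW.environment, 1)
                ON CONFLICT (day, environment)
                DO UPDATE SET updates = stat_environment_updates.updates + 1;
            END IF;
            RETURN NULL;
        END;
        $$ LANGUAGE plpgsql;
    `);
};
```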
@ivarconr was it something like this you were considering?
This PR cleans up and refactors the feature-strategy-store method
getFeatureOverview to join on the new table and attempts to make the
function more readable by extracting some of the logic into separate
functions. Keeping the LastSeenMapper for now in case there is a reason
to use it for the other endpoints.
## About the changes
Segment changes in the preData and data columns were both showing the
new segments list.
Adds formatting of what's changed with segments to feature strategy
update events, so when a user changes the strategy from using
constraints to using segments instead, it's communicated in event
updates.
results in:
admin updated
[sample-toggle](http://localhost/projects/default/features/sample-toggle)
in project [default](http://localhost/projects/default) by updating
strategy Sample Strategy in development constraints from [userId is one
of (1,2,3)] to empty set of constraints; segments from empty set of
segments to (1)
Closes #4912
### Important files
- `src/lib/services/feature-toggle-service.ts` - Segment changes in
preData and data
- `src/lib/addons/feature-event-formatter-md.ts` - Formatting segments
## Discussion points
This is an SR least effort PR - we should plan a task where we look at
how to render this list of segments in a more comprehensible way (it's
just rendering ids now)
Fixes an issue where SSO group sync would delete a syncable group that a
user was manually added to
## Discussion points
Is this the long-term fix for this? Or would we want another column in
the mapping table for future-proofing this?
## About the changes
This transactional implementation decorates a service with a
transactional method that removes the need to start transactions in the
method using the service.
This is a gradual rollout with a feature toggle, just because
transactions are not easy.
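Conceptually, the decorator looks something like this (a minimal sketch, assuming Knex transactions; the real implementation differs):

```ts
import { Knex } from 'knex';

// Build the service inside a transaction scope so every store it uses is
// bound to the same trx, then expose a single `transactional` entry point.
type ServiceFactory<S> = (trx: Knex.Transaction) => S;

export const withTransactional = <S>(factory: ServiceFactory<S>, db: Knex) => ({
    transactional: <R>(fn: (service: S) => Promise<R>): Promise<R> =>
        db.transaction(async (trx) => fn(factory(trx))),
});
```

A caller would then, hypothetically, do `withTransactional(createMyService, db).transactional((service) => service.doWork())` without ever starting a transaction itself.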
https://linear.app/unleash/issue/2-1253/add-support-for-more-events-in-the-slack-app-integration
Adds support for a lot more events in our integrations. Here is what the
full list looks like:
- ADDON_CONFIG_CREATED
- ADDON_CONFIG_DELETED
- ADDON_CONFIG_UPDATED
- API_TOKEN_CREATED
- API_TOKEN_DELETED
- CHANGE_ADDED
- CHANGE_DISCARDED
- CHANGE_EDITED
- CHANGE_REQUEST_APPLIED
- CHANGE_REQUEST_APPROVAL_ADDED
- CHANGE_REQUEST_APPROVED
- CHANGE_REQUEST_CANCELLED
- CHANGE_REQUEST_CREATED
- CHANGE_REQUEST_DISCARDED
- CHANGE_REQUEST_REJECTED
- CHANGE_REQUEST_SENT_TO_REVIEW
- CONTEXT_FIELD_CREATED
- CONTEXT_FIELD_DELETED
- CONTEXT_FIELD_UPDATED
- FEATURE_ARCHIVED
- FEATURE_CREATED
- FEATURE_DELETED
- FEATURE_ENVIRONMENT_DISABLED
- FEATURE_ENVIRONMENT_ENABLED
- FEATURE_ENVIRONMENT_VARIANTS_UPDATED
- FEATURE_METADATA_UPDATED
- FEATURE_POTENTIALLY_STALE_ON
- FEATURE_PROJECT_CHANGE
- FEATURE_REVIVED
- FEATURE_STALE_OFF
- FEATURE_STALE_ON
- FEATURE_STRATEGY_ADD
- FEATURE_STRATEGY_REMOVE
- FEATURE_STRATEGY_UPDATE
- FEATURE_TAGGED
- FEATURE_UNTAGGED
- GROUP_CREATED
- GROUP_DELETED
- GROUP_UPDATED
- PROJECT_CREATED
- PROJECT_DELETED
- SEGMENT_CREATED
- SEGMENT_DELETED
- SEGMENT_UPDATED
- SERVICE_ACCOUNT_CREATED
- SERVICE_ACCOUNT_DELETED
- SERVICE_ACCOUNT_UPDATED
- USER_CREATED
- USER_DELETED
- USER_UPDATED
I added the events that I thought were relevant based on my own
discretion. Know of any event we should add? Let me know and I'll add it
🙂
For now I only added these events to the new Slack App integration, but
we can add them to the other integrations as well since they are now
supported.
The event formatter was refactored and changed quite a bit in order to
make it easier to maintain and add new events in the future. As a
result, events are now posted with different text. Do we consider this a
breaking change? If so, I can keep the old event formatter around,
create a new one and only use it for the new Slack App integration.
I noticed we don't have good 404 behaviors in the UI for things that are
deleted in the meantime, that's why I avoided some links to specific
resources (like feature strategies, integration configurations, etc),
but we could add them later if we improve this.
This PR also tries to add some consistency to the way we log events.
## About the changes
In our staging setup, we create ad-hoc environments and import Unleash
state from production. After Unleash is deployed on such an environment,
the import job kicks in and feeds the Unleash instance with feature
flags from production.
Between Unleash being up and running and the import job running, some
applications start polling Unleash. They get an empty feature toggle
list with `meta.revisionId=0`. The apps then use this as part of the
`ETag` header in subsequent requests. Even though the Unleash server
finally has toggles to serve after the import, it doesn't serve them,
because it calculates the _max revision id_ based on toggle updates (a
non-null `feature_name` column in the query) or `SEGMENT_UPDATED` events.
This change adds an extra condition to the query so that a feature
toggle import is considered something that should invalidate the cache.
This commit changes our linter/formatter to Biome (https://biomejs.dev/),
causing our prehook to run almost instantly and our "yarn lint" task to
run in under 100ms.
Some trade-offs:
* Biome isn't quite as well established as ESLint
* Are we ready to install a different VS Code plugin (the Biome plugin)
instead of the Prettier plugin?
The configuration set for Biome also has a set of recommended rules,
which are turned on by default. In order to get to something that was
mergeable, I have turned off a couple of the rules we seemed to violate
the most, which we also explicitly told ESLint to ignore.
## About the changes
This fixes a bunch of openHandles from our tests
I've used this script to find out the ones that leave them:
```shell
find src -name "*.test.ts" -printf "%f\n" | xargs -i sh -c "echo ===== {} && yarn test {}"
```
If there's an issue, the script will halt and the last filename will be
the one that has to be fixed.
Each commit fixes one problem so it's easy to review
## About the changes
Add partial index on events by announced. This should help avoid `Seq
Scan on events` when the majority of events are announced=true
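An illustrative version of the migration (the index name and exact predicate are assumptions):

```ts
import { Knex } from 'knex';

// A small partial index that only covers the few unannounced rows, so the
// announcer query no longer needs to seq-scan the whole events table.
export const up = async (db: Knex): Promise<void> => {
    await db.raw(
        'CREATE INDEX IF NOT EXISTS idx_unannounced_events ON events (announced) WHERE announced = false',
    );
};
```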
---
Co-authored-by: Ivar Østhus <ivar@getunleash.io>
Co-authored-by: Gard Rimestad <gard@getunleash.io>
## About the changes
When the events table is large we might be doing a full table scan
searching for unannounced events. We spotted it due to a performance
alert and confirmed in AWS performance insights
![image](https://github.com/Unleash/unleash/assets/455064/8e815fa3-7a1b-4453-881a-98a148eae119)
The proposal is to limit this operation to 500 events (rule of thumb)
per round
f82ae354eb/src/lib/services/index.ts (L141-L147)
and also ignore the events older than a day (because it seems
reasonable)
## Discussion points
**Idea**: split the `events` table into `recent_events` and
`historical_events`. Recent can be anything from a day/week/month. This
would help with recurrent queries that rely on recent data from the
events table, such as the optimal 304 calculation or even this scheduled
task that sends unannounced events.
https://linear.app/unleash/issue/2-1403/consider-refactoring-the-way-tags-are-fetched-for-the-events
This adds 2 methods to `EventService`:
- `storeEvent`;
- `storeEvents`;
This allows us to run event-specific logic inside these methods. In the
case of this PR, this means fetching the feature tags in case the event
contains a `featureName` and there are no tags specified in the event.
This prevents us from having to remember to fetch the tags in order to
store feature-related events except for very specific cases, like the
deletion of a feature - You can't fetch tags for a feature that no
longer exists, so in that case we need to pre-fetch the tags before
deleting the feature.
This also allows us to do any event-specific post-processing to the
event before reaching the DB layer.
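A simplified sketch of the enrichment step (the interfaces and method names here are trimmed-down assumptions, not the real ones):

```ts
type IEvent = {
    featureName?: string;
    tags?: { type: string; value: string }[];
};

export class EventService {
    constructor(
        private eventStore: { store: (event: IEvent) => Promise<void> },
        private featureTagStore: {
            getAllTagsForFeature: (
                name: string,
            ) => Promise<{ type: string; value: string }[]>;
        },
    ) {}

    async storeEvent(event: IEvent): Promise<void> {
        // Only fetch tags when the event is feature-related and none were
        // provided by the caller.
        const tags =
            event.tags?.length || !event.featureName
                ? event.tags
                : await this.featureTagStore.getAllTagsForFeature(
                      event.featureName,
                  );
        await this.eventStore.store({ ...event, tags });
    }
}
```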
In general I think it's also nicer that we reference the event service
instead of the event store directly.
There's a lot of changes and a lot of files touched, but most of it is
boilerplate to inject the `eventService` where needed instead of using
the `eventStore` directly.
Hopefully this will be a better approach than
https://github.com/Unleash/unleash/pull/4729
---------
Co-authored-by: Gastón Fournier <gaston@getunleash.io>
## About the changes
Improves the description of the Datadog integration. Adds 2 missing
event types, removes an event type that is deprecated and about to be
completely removed, adds a missing description of the extra JSON headers
and the source type name, and adds a description of the new
configuration option for JSON body support.
---------
Co-authored-by: Nuno Góis <github@nunogois.com>
Co-authored-by: Thomas Heartman <thomas@getunleash.ai>
https://linear.app/unleash/issue/2-1235/docs-slack-app-integration-documentation
This adds a new reference doc for the new Unleash Slack App integration
and marks the previous Slack integration as deprecated.
As a side-effect this PR also fixes an issue where we wouldn't be able
to delete tags with special characters.
---------
Co-authored-by: David Leek <david@getunleash.io>
Co-authored-by: Thomas Heartman <thomas@getunleash.ai>
We love all open-source Unleash users. In 2022 we built the [segment
capability](https://docs.getunleash.io/reference/segments) (v4.13) as an
enterprise feature, to simplify life for our customers.
Now it is time to contribute it to the world 🌏
---------
Co-authored-by: Thomas Heartman <thomas@getunleash.io>
## About the changes
Adds optional support for specifying JSON templates for datadog message
payload
![image](https://github.com/Unleash/unleash/assets/707867/eb7c838a-7abf-441e-972e-ddd7ada07efa)
### Important files
`frontend/src/component/integrations/IntegrationForm/IntegrationParameters/IntegrationParameter/IntegrationParameterEnableWithDropdown.tsx`
- a new component consisting of a text field and a dropdown menu
`src/lib/addons/datadog.ts` - Where the integration is taking place
## Discussion points
- Should I have implemented the new component type as a specifiable
addon parameter type in definitions? Felt a bit YAGNI/premature.
- Would like input on naming, the new component, etc.
## About the changes
This enables us to use names instead of permission ids across all our
APIs, at the computational cost of searching for the ids in the DB, but
improving the API user experience.
## Open topics
We're using methods that are test-only and circumvent our business
logic. This makes our tests rely on assumptions that are not always
true, because these assumptions are not validated frequently.
i.e. We are expecting that after removing a permission it's no longer
there, but to test this, the permission has to be there before:
78273e4ff3/src/test/e2e/services/access-service.e2e.test.ts (L367-L375)
But it seems that's not the case.
We'll look into improving this later.
## About the changes
Open API code generator does not get along with `oneOf` alongside
`properties`:
```shell
$ openapi-generator-cli validate -i modified-openapi.json --recommend
Validating spec (modified-openapi.json)
Warnings:
- Schemas defining properties and oneOf are not clearly defined in the OpenAPI
Specification. While our tooling supports this, it may cause issues with other tools.
```
bab67e44e4/modules/openapi-generator/src/main/java/org/openapitools/codegen/validations/oas/OpenApiSchemaValidations.java (L25-L29)
This PR adds a meta-schema rule to validate this and fixes one issue
## About the changes
- `getActiveUsers` is using multiple stores, so it is refactored into
read-model
- Refactored Instance stats service into `features` to co-locate related
code
Closes https://linear.app/unleash/issue/UNL-230/active-users-prometheus
### Important files
`src/lib/features/instance-stats/getActiveUsers.ts`
## Discussion points
`getActiveUsers` is coded less _class-based_ than previous similar
read-models: one file instead of 3 (read-model interface, fake read
model, SQL read model). I find types and functions way more readable,
but I'm ready to refactor it to interfaces and classes if consistency is
more important.
https://linear.app/unleash/issue/2-1393/drop-the-always-post-to-default-channels-field
This drops the "Always post to default channels" field in the Slack App
integration in favor of always posting to the configured channels. This
should simplify the configuration of this integration.
Here's a breakdown of the logic with this change:
- Always post to the configured Slack channels, regardless of tags;
- Tags are still respected. E.g. if we have a configured channel
"channel-1" and a tag for "channel-2", then we post to both channels;
- As channels are optional, if you would like to skip default channels
for certain events and handle everything through tags, you can just
create a new configuration without any default channels;
This also updates the labels and changes the tests to better reflect the
intended behavior.
![image](https://github.com/Unleash/unleash/assets/14320932/a2427bdd-4b92-44b3-9bad-8adb0f94c34d)
https://linear.app/unleash/issue/2-1401/misc-fixes-and-improvements-related-to-the-new-slack-app-integration
This includes multiple UI-related misc fixes and improvements that are
not only related with the new Slack App integration but also
integrations in general.
- Improves the styling in the "how does it work" section;
- Improves the text in the `IntegrationMultiSelector`s;
- Switches "Configure" and "Open" around to match designs;
- Properly handles click event on `IntegrationCardMenu` (fix navigation
on dialog click);
- Fixes titles and contents for "enable/disable" and "delete"
integration dialogs to match designs;
- Updates Slack App integration "how does it work" section to better
reflect the intended behavior;
- Removes redundant alerts after previous point;
- Adds an alert in the old Slack integration configuration warning of
its deprecation and suggesting the new Slack App integration instead;
- Fixes typos;
- Slight refactors;
![image](https://github.com/Unleash/unleash/assets/14320932/17b09742-f00b-4be2-829f-8248ffe67996)
Co-authored by @nicolaesocaciu
Seems like when 2 pods are trying to POST lastSeen metrics, the db gets
into a deadlock state.
This is an attempt to fix the deadlock by sorting the toggleNames before
the update.
The hypothesis is that sorted toggle names will reduce the chance of
working on the same row at the same exact time
Closes [1-1382](https://linear.app/unleash/issue/1-1382/order-data-before-updating-the-lastseen-to-reduce-change-of-deadlock)
Signed-off-by: andreas-unleash <andreas@getunleash.ai>
Fix issues uncovered when reviewing integrations list and form.
- YouTube CSP
- Text content and formatting
- Margins
- Update old integration icons
- Fix headers in dark theme
This test enforces that descriptions and examples are cleared if they
are not present in the feature naming data.
This behavior is already present, but it hasn't been encoded in any
tests before.
This PR makes it so that adding a feature naming description when there
is no pattern is disallowed. It also changes the validation for feature
naming slightly so that it can return multiple errors at once.
This was changed a month ago, but it ends up breaking the frontend when
we regenerate the types because the playground needs to have this
structure for now. We'll need to add this back until we can rewrite the
playground to follow the schema.
This PR updates the back-end handling of feature naming patterns to add
implicit leading `^`s and trailing `$`s to the regexes when comparing
them.
It also adds tests for the new behavior, both for new flag names and for
examples.
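The comparison boils down to something like this (a simplified sketch; the non-capturing group is just one way to keep alternations inside the anchors):

```ts
// Wrap the stored pattern in implicit anchors before testing flag names.
export const matchesNamingPattern = (pattern: string, name: string): boolean =>
    new RegExp(`^(?:${pattern})$`).test(name);

matchesNamingPattern('[a-z]+[.][a-z]+', 'team.flag'); // true
matchesNamingPattern('[a-z]+[.][a-z]+', 'team.flag.extra');
// false; without the implicit anchors this would have matched the
// 'team.flag' substring and passed.
```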
## Discussion points
Regarding stripping incoming ^ and $: We don't actually need to strip
incoming `^`s and `$`s: it appears that `^^^^^x$$$$$` is just as valid
as `^x$`. As such, we can leave that in. However, if we think it's
better to strip, we can do that too.
Second, I'm considering moving the flag naming validation into a
dedicated module to encapsulate everything a little better. Not sure if
this is the time or where it would live, but open to hearing
suggestions.
This PR adds feature name pattern validation to the import validation
step. When errors occur, they are rendered with all the offending
features, the pattern to match, plus the pattern's description and
example if available.
![image](https://github.com/Unleash/unleash/assets/17786332/69956090-afc6-41c8-8f6e-fb45dfaf0a9d)
To achieve this I've added an extra method to the feature toggle service
that checks feature names without throwing errors (because catching `n`
async errors in a loop became tricky and hard to grasp). This method is
also reused in the existing feature name validation method and handles
the feature enabled check.
In doing so, I've also added tests to check that the pattern is applied.
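Sketch of what that non-throwing check looks like conceptually (names and shapes are illustrative):

```ts
// Collect offending names instead of throwing, so the import validation
// step can report them all at once.
type NamingPattern = { pattern: string; example?: string; description?: string };

export const checkFeatureNamesAgainstPattern = (
    names: string[],
    naming?: NamingPattern,
): string[] =>
    naming?.pattern
        ? names.filter((name) => !new RegExp(`^${naming.pattern}$`).test(name))
        : [];
```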
- Adds `number` as a possible payload type for variants.
- Adds a flag to enable the feature.
- Updates all relevant models and schemas.
- Adds the option to the UI.
Closes [1-1357](https://linear.app/unleash/issue/1-1357/support-number-in-variant-payload)
---------
Signed-off-by: andreas-unleash <andreas@getunleash.ai>
## About the changes
Create api token schema can either provide the field `project` or its
plural: `projects` so the joi validation makes them optional:
2be77fb55e/src/lib/schema/api-token-schema.ts (L20-L24)
This means that when returning the token, the response will either
contain `project` or `projects` depending on what was provided as input.
Therefore we need to make both fields optional in the response
The error message only tells the user that the name doesn't match the
pattern. Because we already show the pattern above the input, we don't
need to repeat it in the error message. This makes for a shorter and
more concise message and better UX.
At the same time, for API users, we can keep the more detailed message
that includes info about the pattern, the example, and the description.
![image](https://github.com/Unleash/unleash/assets/17786332/0492f2ad-810d-435e-bfe6-785afee96892)
This change makes it so that you can save flag naming data on project
creation.
There was a mismatch between flattened and nested data in the creation
endpoint. Likely because we went back and forth a bit when we created
the feature originally.
This PR adds a feature naming pattern description to the project form.
It's rendered as a multi-line input field. The description is also
stored in the db.
This adapts most of @andreas-unleash's PR #4599 with some minor changes
(using description instead of prompt). Actually displaying this data to
the users will come in a later PR.
![image](https://github.com/Unleash/unleash/assets/17786332/b96d2dbb-2b90-4adf-bc83-cdc534c507ea)
Does what it says on the tin, should help with cleaning up
https://github.com/Unleash/unleash/pull/4512 and respective schema
changes.
---------
Co-authored-by: Gastón Fournier <gaston@getunleash.io>
![image](https://github.com/Unleash/unleash/assets/2625371/42364fdb-1ff1-48c4-9756-a145a39e45b9)
## About the changes
We found a problem generating the Go SDK client for the tokens API that
makes use of `oneOf` combined with `allOf`. The generator doesn't know
how to map this type and leaves it as an object. This PR simplifies the
spec and therefore the code generated from it.
Adds a first iteration of feature flag naming patterns. Currently behind a flag.
Signed-off-by: andreas-unleash <andreas@getunleash.ai>
Co-authored-by: Thomas Heartman <thomas@getunleash.io>
Co-authored-by: andreas-unleash <andreas@getunleash.ai>
Co-authored-by: Thomas Heartman <thomas@getunleash.ai>
This change sorts the tags in the tags file and tests that the list is
sorted alphabetically. This makes it easier to
find tags in the file.
#4580 already introduced a test to check that we have no duplicate
tags, so this isn't as necessary anymore, but it's still nice to have.
It also removes the previous auto-sorting before exporting. This is to
ensure that entries are sorted in the source list. This might seem like
a regression, but it makes it easier to spot near-duplicate tags:
> Despite having the test that validates there are no duplicates, you
can always have Notifications and Notification API by mistake (tags that
mean the same but are different). Keeping the list alphabetically sorted
might help to prevent this before pushing the change to prod. In this
case, we will eventually find out and fix it, so this could be a good
reason to have the list sorted.
## About the changes
Returns Not Found on create and get project api tokens when given a
project id that doesn't exist
## Discussion points
- This is an extra lookup per execution of the endpoint
## About the changes
At https://github.com/Unleash/unleash/pull/4432 we've introduced the
same tags twice which causes issues when generating the api clients.
Closes #2-1350-fix-openapi-duplicated-tags
## About the changes
Returns either 400 or 404 when the token isn't found or doesn't match
the criterion that a single project must be provided as projectId.
Closes Linear 2-1003
## Discussion points
Is projects.length > 1 a 400?
https://linear.app/unleash/issue/2-1128/change-the-api-to-support-adding-multiple-roles-to-a-usergroup-on-a
https://linear.app/unleash/issue/2-1125/be-able-to-fetch-all-roles-for-a-user-in-a-project
https://linear.app/unleash/issue/2-1127/adapt-the-ui-to-be-able-to-do-a-multi-select-on-role-permissions-for
- Allows assigning project roles to groups with root roles
- Implements new methods that support assigning, editing, removing and
retrieving multiple project roles in project access, along with other
auxiliary methods
- Adds new events for updating and removing assigned roles
- Adapts `useProjectApi` to new methods that use new endpoints that
support multiple roles
- Adds the `multipleRoles` feature flag that controls the possibility of
selecting multiple roles on the UI
- Adapts `ProjectAccessAssign` to support multiple roles, using the new
methods
- Adds a new `MultipleRoleSelect` component that allows you to select
multiple roles based on the `RoleSelect` component
- Adapts the `RoleCell` component to support either a single role or
multiple roles
- Updates the `access.spec.ts` Cypress e2e test to reflect our new logic
- Updates `access-service.e2e.test.ts` with tests covering the multiple
roles logic and covering some corner cases
- Updates `project-service.e2e.test.ts` to adapt to the new logic,
adding a test that covers adding access with `[roles], [groups],
[users]`
- Misc refactors and boy scouting
![image](https://github.com/Unleash/unleash/assets/14320932/d1cc7626-9387-4ab8-9860-cd293a0d4f62)
---------
Co-authored-by: David Leek <david@getunleash.io>
Co-authored-by: Mateusz Kwasniewski <kwasniewski.mateusz@gmail.com>
Co-authored-by: Nuno Góis <github@nunogois.com>
Currently the slack-app addon only posts to either the tagged channel
for the feature or the default channels. This PR adds a new field that
will allow you to configure the addon to post to both the default
channels and the tagged channel(s).
## About the changes
Adds project, user, and group usage information to the dialog shown when
a user wants to delete a project role.
<img width="670" alt="Skjermbilde 2023-08-10 kl 08 28 40"
src="https://github.com/Unleash/unleash/assets/707867/a1df961b-2d0f-419d-b9bf-fedef896a84e">
---------
Co-authored-by: Nuno Góis <github@nunogois.com>
Unless you set `additionalProperties` to `false`, additional properties
are always
allowed. By explicitly setting it to `true` the generated
examples contain additional properties, e.g.:
```json
{
"tokens": [
"aproject:development.randomstring",
"[]:production.randomstring"
],
"additionalProp1": {}
}
```
By removing the explicit annotation, we still allow additional
properties, but we don't get them in examples:
```json
{
"tokens": [
"aproject:development.randomstring",
"[]:production.randomstring"
]
}
```
### What
This PR changes the slack-app addon to use slack-api's scheduleMessage
instead of postMessage.
### Why
When using postMessage we had to find the channel id in order to be able
to post the message to the channel. scheduleMessage allows using the
channel name instead of the id, which saves the entire struggle of
finding the channel id. It did mean that we had to move to defining
blocks of content instead of the easier formatting we did with
postMessage.
### Message look
![image](https://github.com/Unleash/unleash/assets/177402/a9079c4d-07c0-4846-ad0c-67130e77fb3b)
https://linear.app/unleash/issue/2-1136/custom-root-roles-documentation
- [Adds documentation referencing custom root
roles](https://unleash-docs-git-docs-custom-root-roles-unleash-team.vercel.app/reference/rbac);
- [Adds a "How to create and assign custom root roles" how-to
guide](https://unleash-docs-git-docs-custom-root-roles-unleash-team.vercel.app/how-to/how-to-create-and-assign-custom-root-roles);
- Standardizes "global" roles to "root" roles;
- Standardizes "standard" roles to "predefined" roles to better reflect
their behavior and what is shown in our UI;
- Updates predefined role descriptions and makes them consistent;
- Updates the side panel description of the user form;
- Includes some boy scouting with some tiny fixes of things identified
along the way (e.g. the role form was persisting old data when closed
and re-opened);
Questions:
- Is it worth expanding the "Assigning custom root roles" section in the
"How to create and assign custom root roles" guide to include the steps
for assigning a root role for each entity (user, service account,
group)?
- Should this PR include an update to the existing "How to create and
assign custom project roles" guide? We've since updated the UI;
---------
Co-authored-by: Thomas Heartman <thomas@getunleash.ai>
## About the changes
Open API is creating 2 resources under the same URL path, we want them
aggregated:
```shell
$ curl -s https://us.app.unleash-hosted.com/ushosted/docs/openapi.json | jq '.paths."/api/admin/strategies/{name}" | keys'
[
"delete",
"get"
]
gaston@gaston-Summit-E16Flip-A12UCT: ~/poc/terraform-provider-unleash on main [!$]
$ curl -s https://us.app.unleash-hosted.com/ushosted/docs/openapi.json | jq '.paths."/api/admin/strategies/{strategyName}" | keys'
[
"put"
]
```
Note one is under `{name}` while the other is under `{strategyName}`
## About the changes
Fixes an issue where project role deletion validation didn't validate
against project roles being connected to groups
We've been struggling with not getting hold of all channels. Reading the
documentation, I found the support for next_cursor in the
response_metadata: as long as the response has a next_cursor, there are
responses left, so this PR adds a loop that keeps querying until the
next cursor or the channels list is undefined or `''`. This should solve
the issue Wayfair had as well.
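The loop is roughly this, using `@slack/web-api` (sketch):

```ts
import { WebClient } from '@slack/web-api';

// Follow response_metadata.next_cursor until Slack stops returning one,
// accumulating every page of channels along the way.
const getAllChannels = async (client: WebClient) => {
    const channels: { id?: string; name?: string }[] = [];
    let cursor: string | undefined;
    do {
        const response = await client.conversations.list({ cursor, limit: 200 });
        channels.push(...(response.channels ?? []));
        cursor = response.response_metadata?.next_cursor || undefined;
    } while (cursor);
    return channels;
};
```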
https://linear.app/unleash/issue/2-1311/add-a-new-prometheus-metric-with-custom-root-roles-in-use
As a follow-up to https://github.com/Unleash/unleash/pull/4435, this PR
adds a metric for total custom root roles in use by at least one entity:
users, service accounts, groups.
`custom_root_roles_in_use_total`
Output from `http://localhost:4242/internal-backstage/prometheus`:
```
# HELP process_cpu_user_seconds_total Total user CPU time spent in seconds.
# TYPE process_cpu_user_seconds_total counter
process_cpu_user_seconds_total 0.060755
# HELP process_cpu_system_seconds_total Total system CPU time spent in seconds.
# TYPE process_cpu_system_seconds_total counter
process_cpu_system_seconds_total 0.01666
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.077415
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1691420275
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 199196672
# HELP nodejs_eventloop_lag_seconds Lag of event loop in seconds.
# TYPE nodejs_eventloop_lag_seconds gauge
nodejs_eventloop_lag_seconds 0
# HELP nodejs_eventloop_lag_min_seconds The minimum recorded event loop delay.
# TYPE nodejs_eventloop_lag_min_seconds gauge
nodejs_eventloop_lag_min_seconds 0.009076736
# HELP nodejs_eventloop_lag_max_seconds The maximum recorded event loop delay.
# TYPE nodejs_eventloop_lag_max_seconds gauge
nodejs_eventloop_lag_max_seconds 0.037683199
# HELP nodejs_eventloop_lag_mean_seconds The mean of the recorded event loop delays.
# TYPE nodejs_eventloop_lag_mean_seconds gauge
nodejs_eventloop_lag_mean_seconds 0.011063251638989169
# HELP nodejs_eventloop_lag_stddev_seconds The standard deviation of the recorded event loop delays.
# TYPE nodejs_eventloop_lag_stddev_seconds gauge
nodejs_eventloop_lag_stddev_seconds 0.0013618102764025837
# HELP nodejs_eventloop_lag_p50_seconds The 50th percentile of the recorded event loop delays.
# TYPE nodejs_eventloop_lag_p50_seconds gauge
nodejs_eventloop_lag_p50_seconds 0.011051007
# HELP nodejs_eventloop_lag_p90_seconds The 90th percentile of the recorded event loop delays.
# TYPE nodejs_eventloop_lag_p90_seconds gauge
nodejs_eventloop_lag_p90_seconds 0.011321343
# HELP nodejs_eventloop_lag_p99_seconds The 99th percentile of the recorded event loop delays.
# TYPE nodejs_eventloop_lag_p99_seconds gauge
nodejs_eventloop_lag_p99_seconds 0.013688831
# HELP nodejs_active_resources Number of active resources that are currently keeping the event loop alive, grouped by async resource type.
# TYPE nodejs_active_resources gauge
nodejs_active_resources{type="FSReqCallback"} 1
nodejs_active_resources{type="TTYWrap"} 3
nodejs_active_resources{type="TCPSocketWrap"} 5
nodejs_active_resources{type="TCPServerWrap"} 1
nodejs_active_resources{type="Timeout"} 1
nodejs_active_resources{type="Immediate"} 1
# HELP nodejs_active_resources_total Total number of active resources.
# TYPE nodejs_active_resources_total gauge
nodejs_active_resources_total 12
# HELP nodejs_active_handles Number of active libuv handles grouped by handle type. Every handle type is C++ class name.
# TYPE nodejs_active_handles gauge
nodejs_active_handles{type="WriteStream"} 2
nodejs_active_handles{type="ReadStream"} 1
nodejs_active_handles{type="Socket"} 5
nodejs_active_handles{type="Server"} 1
# HELP nodejs_active_handles_total Total number of active handles.
# TYPE nodejs_active_handles_total gauge
nodejs_active_handles_total 9
# HELP nodejs_active_requests Number of active libuv requests grouped by request type. Every request type is C++ class name.
# TYPE nodejs_active_requests gauge
nodejs_active_requests{type="FSReqCallback"} 1
# HELP nodejs_active_requests_total Total number of active requests.
# TYPE nodejs_active_requests_total gauge
nodejs_active_requests_total 1
# HELP nodejs_heap_size_total_bytes Process heap size from Node.js in bytes.
# TYPE nodejs_heap_size_total_bytes gauge
nodejs_heap_size_total_bytes 118587392
# HELP nodejs_heap_size_used_bytes Process heap size used from Node.js in bytes.
# TYPE nodejs_heap_size_used_bytes gauge
nodejs_heap_size_used_bytes 89642552
# HELP nodejs_external_memory_bytes Node.js external memory size in bytes.
# TYPE nodejs_external_memory_bytes gauge
nodejs_external_memory_bytes 1601594
# HELP nodejs_heap_space_size_total_bytes Process heap space size total from Node.js in bytes.
# TYPE nodejs_heap_space_size_total_bytes gauge
nodejs_heap_space_size_total_bytes{space="read_only"} 0
nodejs_heap_space_size_total_bytes{space="old"} 70139904
nodejs_heap_space_size_total_bytes{space="code"} 3588096
nodejs_heap_space_size_total_bytes{space="map"} 2899968
nodejs_heap_space_size_total_bytes{space="large_object"} 7258112
nodejs_heap_space_size_total_bytes{space="code_large_object"} 1146880
nodejs_heap_space_size_total_bytes{space="new_large_object"} 0
nodejs_heap_space_size_total_bytes{space="new"} 33554432
# HELP nodejs_heap_space_size_used_bytes Process heap space size used from Node.js in bytes.
# TYPE nodejs_heap_space_size_used_bytes gauge
nodejs_heap_space_size_used_bytes{space="read_only"} 0
nodejs_heap_space_size_used_bytes{space="old"} 66992120
nodejs_heap_space_size_used_bytes{space="code"} 2892640
nodejs_heap_space_size_used_bytes{space="map"} 2519280
nodejs_heap_space_size_used_bytes{space="large_object"} 7026824
nodejs_heap_space_size_used_bytes{space="code_large_object"} 983200
nodejs_heap_space_size_used_bytes{space="new_large_object"} 0
nodejs_heap_space_size_used_bytes{space="new"} 9236136
# HELP nodejs_heap_space_size_available_bytes Process heap space size available from Node.js in bytes.
# TYPE nodejs_heap_space_size_available_bytes gauge
nodejs_heap_space_size_available_bytes{space="read_only"} 0
nodejs_heap_space_size_available_bytes{space="old"} 1898360
nodejs_heap_space_size_available_bytes{space="code"} 7328
nodejs_heap_space_size_available_bytes{space="map"} 327888
nodejs_heap_space_size_available_bytes{space="large_object"} 0
nodejs_heap_space_size_available_bytes{space="code_large_object"} 0
nodejs_heap_space_size_available_bytes{space="new_large_object"} 16495616
nodejs_heap_space_size_available_bytes{space="new"} 7259480
# HELP nodejs_version_info Node.js version info.
# TYPE nodejs_version_info gauge
nodejs_version_info{version="v18.16.0",major="18",minor="16",patch="0"} 1
# HELP nodejs_gc_duration_seconds Garbage collection duration by kind, one of major, minor, incremental or weakcb.
# TYPE nodejs_gc_duration_seconds histogram
# HELP http_request_duration_milliseconds App response time
# TYPE http_request_duration_milliseconds summary
# HELP db_query_duration_seconds DB query duration time
# TYPE db_query_duration_seconds summary
db_query_duration_seconds{quantile="0.1",store="api-tokens",action="getAllActive"} 0.03091475
db_query_duration_seconds{quantile="0.5",store="api-tokens",action="getAllActive"} 0.03091475
db_query_duration_seconds{quantile="0.9",store="api-tokens",action="getAllActive"} 0.03091475
db_query_duration_seconds{quantile="0.95",store="api-tokens",action="getAllActive"} 0.03091475
db_query_duration_seconds{quantile="0.99",store="api-tokens",action="getAllActive"} 0.03091475
db_query_duration_seconds_sum{store="api-tokens",action="getAllActive"} 0.03091475
db_query_duration_seconds_count{store="api-tokens",action="getAllActive"} 1
# HELP feature_toggle_update_total Number of times a toggle has been updated. Environment label would be "n/a" when it is not available, e.g. when a feature toggle is created.
# TYPE feature_toggle_update_total counter
# HELP feature_toggle_usage_total Number of times a feature toggle has been used
# TYPE feature_toggle_usage_total counter
# HELP feature_toggles_total Number of feature toggles
# TYPE feature_toggles_total gauge
feature_toggles_total{version="5.3.0"} 31
# HELP users_total Number of users
# TYPE users_total gauge
users_total 1011
# HELP projects_total Number of projects
# TYPE projects_total gauge
projects_total 4
# HELP environments_total Number of environments
# TYPE environments_total gauge
environments_total 10
# HELP groups_total Number of groups
# TYPE groups_total gauge
groups_total 5
# HELP roles_total Number of roles
# TYPE roles_total gauge
roles_total 11
# HELP custom_root_roles_total Number of custom root roles
# TYPE custom_root_roles_total gauge
custom_root_roles_total 3
# HELP custom_root_roles_in_use_total Number of custom root roles in use
# TYPE custom_root_roles_in_use_total gauge
custom_root_roles_in_use_total 2
# HELP segments_total Number of segments
# TYPE segments_total gauge
segments_total 5
# HELP context_total Number of context
# TYPE context_total gauge
context_total 7
# HELP strategies_total Number of strategies
# TYPE strategies_total gauge
strategies_total 5
# HELP client_apps_total Number of registered client apps aggregated by range by last seen
# TYPE client_apps_total gauge
client_apps_total{range="allTime"} 0
client_apps_total{range="30d"} 0
client_apps_total{range="7d"} 0
# HELP saml_enabled Whether SAML is enabled
# TYPE saml_enabled gauge
saml_enabled 1
# HELP oidc_enabled Whether OIDC is enabled
# TYPE oidc_enabled gauge
oidc_enabled 0
# HELP client_sdk_versions Which sdk versions are being used
# TYPE client_sdk_versions counter
# HELP optimal_304_diffing Count the Optimal 304 diffing with status
# TYPE optimal_304_diffing counter
# HELP db_pool_min Minimum DB pool size
# TYPE db_pool_min gauge
db_pool_min 0
# HELP db_pool_max Maximum DB pool size
# TYPE db_pool_max gauge
db_pool_max 4
# HELP db_pool_free Current free connections in DB pool
# TYPE db_pool_free gauge
db_pool_free 0
# HELP db_pool_used Current connections in use in DB pool
# TYPE db_pool_used gauge
db_pool_used 4
# HELP db_pool_pending_creates how many asynchronous create calls are running in DB pool
# TYPE db_pool_pending_creates gauge
db_pool_pending_creates 0
# HELP db_pool_pending_acquires how many acquires are waiting for a resource to be released in DB pool
# TYPE db_pool_pending_acquires gauge
db_pool_pending_acquires 24
```
This PR stabilizes the playground and feature types endpoints by
changing their tag from 'Unstable' to respectively 'Playground' and
'Feature Types'. It also moves the existing feature type endpoint (which
was previously tagged as 'Features') to the 'Feature Types' tag.
In a new, fresh Unleash instance with cache enabled, this can cause
feature toggles to never get updated.
We saw in our client that the ETag was `ETag: "60e35fba:null"`, which
looked incorrect to us.
I also did manual testing: if the andWhere had a largerThan value higher
than whatever the id was, then we would get back `{ max: null }`.
This should fix that issue.
This PR adds an e2e test to the OpenAPI tests that checks that all
openapi operations have both summaries and descriptions. It also fixes
the few schemas that were missing one or the other.
Enable strict schema validation by default. It can still be overridden
by explicitly setting it to false.
I've also fixed the validation errors that appeared when turning it on.
I've opted for the simplest route and changed the schemas to comply with
the tests.
## About the changes
We are losing some events because of not having "created by" which is
required by a constraint in the DB. One scenario where this can happen
is with the default user admin, because it doesn't have a username or an
email (unless configured by the administrator).
Our code makes assumptions on the existence of one of these 2 attributes
(e.g.
248118af7c/src/lib/services/user-service.ts (L220-L222)).
Event lost metrics:
![Screenshot from 2023-07-24
14-17-20](https://github.com/Unleash/unleash/assets/455064/9b406ad0-bbcb-4263-98dc-74ddd307a5a2)
## Discussion points
The solution proposed here is falling back to a default value. I've
chosen `"admin"` because it covers one of the use cases, but it can also
be `"system"` mimicking
248118af7c/src/lib/services/user-service.ts (L32)
which is used as a default, or `"unknown"` which is sometimes used as a
default:
248118af7c/src/lib/services/project-service.ts (L57)
Anyway, I believe it's better not to lose the event than to be accurate
with the "created by", which can be fixed later.