This change is just a refactor, removing code that's no longer used. Instead of
checking only whether a segment is in use, we now extract the list of
strategies that use the segment. This is perhaps slightly more costly, but it
will be necessary for the upcoming implementation.
This PR changes the payload of the strategiesBySegment endpoint when the
flag is active. In addition to the strategies, the returned object will
also contain a new property, `changeRequestStrategies`, containing the
strategies that are used in change requests.
This PR does not update the schema. That can be done later when the
changes go into beta. This also allows us some time to iterate on the
payload without changing the public API.
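For illustration, the new payload might look roughly like this (a sketch only; the nested field names are assumptions, since the schema isn't updated yet):

```ts
// Hypothetical shape of the strategiesBySegment response with the flag
// active. Only `strategies` and `changeRequestStrategies` come from this
// PR's description; the nested fields are illustrative.
interface StrategiesBySegmentResponse {
    strategies: {
        id: string;
        featureName: string;
        projectId: string;
        environment: string;
        strategyName: string;
    }[];
    changeRequestStrategies: {
        // Strategies that only exist in a pending change request may not
        // have been persisted yet, so the id is optional here.
        id?: string;
        featureName: string;
        projectId: string;
        environment: string;
        strategyName: string;
    }[];
}
```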
## Discussion points:
Should `strategies` and `changeRequestStrategies` ever contain
duplicates? Take this scenario:
- Strategy S uses segment T.
- There is an open change request that updates the list of segments for
S to T and a new segment U.
- In this case, strategy S would show up both in `strategies` _and_ in
`changeRequestStrategies`.
We have two options:
1. Filter the list of change request strategies, so that they don't
contain any duplicates (this is currently how it's implemented)
2. Ignore the duplicates and just send both lists as is.
We're doing option 2 for now.
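For reference, option 1's filtering could look something like this sketch (assuming strategies can be compared by `id`):

```ts
type Strategy = { id: string };

declare const strategies: Strategy[];
declare const changeRequestStrategies: Strategy[];

// Sketch of option 1: drop change request strategies that already appear
// in the regular strategies list, so the two lists never overlap.
const deduplicated = changeRequestStrategies.filter(
    (crStrategy) =>
        !strategies.some((strategy) => strategy.id === crStrategy.id),
);
```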
Removing a user from a project was impossible if the project had only one
owner. It worked fine with more than one owner. This should fix it; we'll
add tests later.
In short, the issue is that after our last-seen improvements, we did not
update where the `last_seen` field is read from. It was still using the
features table, which is no longer the source of last-seen data.
## PR Description
https://linear.app/unleash/issue/2-1645/address-post-mortem-action-point-all-flags-should-be-runtime
Refactor with the goal of ensuring that flags are runtime controllable,
mostly focused on the current scheduler logic.
This includes the following changes:
- Moves scheduler into its own "scheduler" feature folder
- Reverts dependency: SchedulerService takes in the MaintenanceService,
not the other way around
- Scheduler now evaluates maintenance mode at runtime instead of relying
only on its mode state (active / paused)
- Favors flag checks inside the scheduled methods instead of controlling
whether the method is scheduled at all (favoring runtime over startup; see
the sketch after this list)
- Moves "account last seen update" to scheduler
- Updates tests accordingly
- Boyscouting
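To illustrate the runtime-over-startup point, here's a minimal sketch; `flagResolver`, `schedulerService`, and `someJob` are illustrative stand-ins, not the exact Unleash internals:

```ts
declare const flagResolver: { isEnabled(flag: string): boolean };
declare const schedulerService: {
    schedule(fn: () => Promise<void>, intervalMs: number, id: string): void;
};
declare function someJob(): Promise<void>;

// Startup-time check (avoided): the flag is read once, when scheduling,
// so toggling it later has no effect until a restart.
if (flagResolver.isEnabled('someFeature')) {
    schedulerService.schedule(someJob, 5 * 60 * 1000, 'some-job');
}

// Runtime check (favored): the job is always scheduled, but each run
// re-evaluates the flag, so toggling it takes effect without a restart.
schedulerService.schedule(
    async () => {
        if (!flagResolver.isEnabled('someFeature')) {
            return;
        }
        await someJob();
    },
    5 * 60 * 1000,
    'some-job',
);
```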
Here's a manual test showing this behavior, where my local instance was
controlled by a remote instance. Whenever I toggle `maintenanceMode`
through a flag remotely, my scheduled functions stop running:
https://github.com/Unleash/unleash/assets/14320932/ae0a7fa9-5165-4c0b-9b0b-53b9fb20de72
Had a look through all of our current flags and it *seems to me* that
they are all used in a runtime-controllable way, but I would still feel
more comfortable if this was double-checked, since it can be complex to
ensure.
The only exception to this was `migrationLock`, which I believe is OK,
since the migration only happens at startup anyway.
## Discussion / Questions
~~Scheduler `mode` (active / paused) is currently not *really* being
used, along with its respective methods, except in tests. I think this
could be a potential footgun. Should we remove it in favor of only
controlling the scheduler state through maintenance mode?~~ Addressed in
7c52e3f638
~~The config property `disableScheduler` is still a startup
configuration, but perhaps that makes sense to leave as is?~~
[Answered](https://github.com/Unleash/unleash/pull/5363#issuecomment-1819005445)
by @FredrikOseberg, leaving as is.
Are there any other tests we should add?
Is there anything I missed?
Identified some `setInterval` and `setTimeout` that may make sense to
leave as is instead of moving over to the scheduler service:
- ~~`src/lib/metrics` - This is currently considered a `MetricsMonitor`.
Should this be refactored to a service instead, adapting these
setIntervals to use the scheduler? Is there anything special about this we
need to take into account? @chriswk @ivarconr~~
[Answered](https://github.com/Unleash/unleash/pull/5363#issuecomment-1820501511)
by @ivarconr, leaving as is.
- ~~`src/lib/proxy/proxy-repository.ts` - This seems to have a complex
and specific logic currently. Perhaps we should leave it alone for now?
@FredrikOseberg~~
[Answered](https://github.com/Unleash/unleash/pull/5363#issuecomment-1819005445)
by @FredrikOseberg, leaving as is.
- `src/lib/services/user-service.ts` - This one also seems to be a bit
more specific, where we generate new timeouts for each receiver id.
Might not belong in the scheduler service. @Tymek
This PR adds the ability to detect which strategies use a specific
segment in active change requests.
It does not wire this functionality up to anything just yet. Follow-up
PRs will integrate this with the segment service and eventually with the
front end.
The issue was that all features were created at exactly the same time,
and our feature counter was expecting the time to be unique per feature,
which was not the case.
Instead of throwing an error when the project doesn't exist, we say that
the names are valid, because we have nothing to say that they're not.
Presumably there is already something in place to prevent you from
importing into a non-existent project.
## About the changes
This feature allows our Enterprise customers to configure banners to be
displayed on their Unleash instance for all their users to see and
interact with. Previously known as "internal message banners".
Optimizations:
1. Removed an extra round trip to the database to count environments
2. Removed an extra round trip to the database to count features
Fixes:
Previously, we were using a very optimistic query to set the correct limit
and offset. This breaks as soon as we join tags.
```ts
query = query
    .select(selectColumns)
    .limit(limit * environmentCount)
    .offset(offset * environmentCount);
```
The solution was to use common table expressions, so we could count and
rank features.
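For illustration, a rough sketch of the CTE approach with knex (table, column, and variable names are simplified, not the actual implementation):

```ts
import knex from 'knex';

const db = knex({ client: 'pg' });
const limit = 50;
const offset = 0;

// Sketch: rank features in a CTE so that limit/offset apply to distinct
// features rather than to the joined (feature x environment x tag) rows,
// which would otherwise skew pagination.
const query = db
    .with('ranked_features', (qb) =>
        qb
            .from('features')
            .select('features.*')
            .select(
                db.raw(
                    'dense_rank() over (order by features.created_at, features.name) as rank',
                ),
            ),
    )
    .from('ranked_features')
    .select('*')
    .whereBetween('rank', [offset + 1, offset + limit]);
```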
## About the changes
While working on Terraform, I identified some issues with how our backend
handled some requests. This happened because the Terraform client was
unaware of new fields and was not sending them. This resulted in bad
behavior, as some of those missing fields were treated in the backend as
`null`, which removed existing data from the DB.
This ADR aims to shed some light on the problem and specifies how our
server should handle these requests.
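As a minimal sketch of the distinction the ADR deals with (names are illustrative): a field that is absent from the payload should not be treated the same as a field explicitly set to `null`.

```ts
// Sketch: merge an update into an existing record, treating absent fields
// as "keep the existing value" rather than overwriting them with null.
function mergeUpdate<T extends object>(existing: T, update: Partial<T>): T {
    const result = { ...existing };
    for (const key of Object.keys(update) as (keyof T)[]) {
        // Only fields the client actually sent overwrite existing data;
        // fields an older client (e.g. a Terraform provider unaware of
        // them) never sent are left untouched.
        if (update[key] !== undefined) {
            result[key] = update[key] as T[keyof T];
        }
    }
    return result;
}
```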
---------
Co-authored-by: Nuno Góis <github@nunogois.com>
Rename event to SCHEDULED_CHANGE_REQUEST_EXECUTED
This event will be triggered when the executor runs a scheduled change
request.
The ChangeRequestApplied event will remain as is (going out to project
members), but will have a `scheduled = true` property in its data if the
change request was scheduled.
This new event will fire on execution of the schedule and have a
`result = "failed" | "succeeded"` property.
Because notifications are tied to events, this notification will go out
to the creator and the applier.
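A sketch of the event shapes described above (only `result` and `scheduled` are specified; other fields are assumptions):

```ts
// Hypothetical data carried by the new event.
interface ScheduledChangeRequestExecutedData {
    changeRequestId: number;
    result: 'failed' | 'succeeded';
}

// The existing ChangeRequestApplied event keeps its shape, but gains a
// `scheduled: true` marker in its data when applied via the scheduler.
interface ChangeRequestAppliedData {
    changeRequestId: number;
    scheduled?: boolean;
}
```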
---------
Signed-off-by: andreas-unleash <andreas@getunleash.ai>
This PR updates the segment usage counting to also include segment usage
in pending change requests.
The changes include:
- Updating the schema to explicitly call out that change request usage
is included.
- Adding two tests to verify the new features
- Writing an alternate query to count this data
Specifically, it'll update the part of the UI that tells you how many
places a segment is used:
![image](https://github.com/Unleash/unleash/assets/17786332/a77cf932-d735-4a13-ae43-a2840f7106cb)
## Implementation
Implementing this was a little tricky. Previously, we'd just count
distinct instances of feature names and project names on the
feature_strategy table. However, to merge this with change request data,
we can't just count existing usage and change request usage separately,
because that could cause duplicates.
Instead of turning this into a complex DB query, I've broken it up into
a few separate queries and done the merging in JS. I think that's more
readable and it was easier to reason about.
Here's the breakdown (a condensed sketch of the merging follows the list):
1. Get the list of pending change requests. We need their IDs and their
project.
2. Get the list of updateStrategy and addStrategy events that have
segment data.
3. Take the result from step 2 and turn it into a dictionary of segment
id to usage data.
4. Query the feature_strategy_segment and feature_strategies tables to
get existing segment usage data.
5. Fold that data into the change request data.
6. Perform the preexisting segment query (without counting logic) to get
other segment data.
7. Enrich the results of the query from step 6 with the usage data.
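Here is that sketch of the merging from steps 3-5 (type and variable names are illustrative, not the actual code):

```ts
type UsageEntry = { featureName: string; projectId: string };

// Sketch: merge per-segment change request usage into existing usage,
// deduplicating on feature + project so overlapping entries aren't
// counted twice.
const mergeUsage = (
    existingUsage: Map<number, UsageEntry[]>,
    changeRequestUsage: Map<number, UsageEntry[]>,
): Map<number, UsageEntry[]> => {
    const merged = new Map(existingUsage);
    for (const [segmentId, entries] of changeRequestUsage) {
        const current = merged.get(segmentId) ?? [];
        const seen = new Set(
            current.map((e) => `${e.featureName}:${e.projectId}`),
        );
        merged.set(segmentId, [
            ...current,
            ...entries.filter(
                (e) => !seen.has(`${e.featureName}:${e.projectId}`),
            ),
        ]);
    }
    return merged;
};
```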
## Discussion points
I feel like this could be done in a nicer way, so any ideas on how to
achieve that (whether that's as a DB query or just by breaking up the code
differently) are very welcome.
Second, using multiple queries obviously adds more overhead than a single
one. However, I don't think this is in the hot path, so I don't consider
performance critical here, but I'm open to hearing opposing thoughts on
this, of course.
This PR fixes a couple of issues with the pagination bar:
* Fixes an issue where the bottom padding would be broken due to disabling
padding on the parent container
* Removes padding on the entire table to create more space, and removes
the header bar border radius, as per discussion with @nicolaesocaciu
This PR makes changes to how the project overview skeleton screen works.
Important changes:
- Add skeleton screens to missing elements, creating a more
comprehensive loading screen
- Split the page into different loading sections, so that we can load
the table when we fetch the next page without affecting the rest of the
page.
https://www.loom.com/share/e5d30dc897ac488ea80cfae11ffab646
Next steps:
* Hide bar if total is less than 25
* Add FE testing