## About the changes
EdgeService is the only place where we validate active tokens in bulk. By switching to validating from the cache, we no longer need a method that returns all active tokens from the DB.
We discussed with @nunogois that this test covers a migration from 2023; time has passed and the migration is working properly, so we think we can remove it to cut down on test run time.
## About the changes
We've identified that the Bearer token middleware is not working for the /enterprise instance.
Looking at a few lines below:
88e3b1b79e/src/lib/app.ts (L81-L84)
we can see that the basePath was missing from the `use` definition.
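A minimal sketch of this kind of fix, assuming an Express app and a configured `baseUriPath`; the middleware and path names here are illustrative:

```ts
import express, { type RequestHandler } from 'express';

// Hypothetical middleware and config value, used only to illustrate the mounting path.
declare const bearerTokenMiddleware: RequestHandler;
const baseUriPath = '/enterprise';

const app = express();

// Mounting without the base path meant requests under `${baseUriPath}` never hit
// the middleware; including it in the `use` definition fixes that.
app.use(`${baseUriPath}/api`, bearerTokenMiddleware);
```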
## About the changes
In the current state, when returning the HTML entry point from the server, no headers are attached. We encountered an issue with a deployment where this had an impact on us.
A brief description:
1. We deployed the most recent version. Noticed an unrelated issue.
2. Users tried to use the most recent version and due to their client
cache, requested assets that did not exist in the newest version.
3. For assets that were not there, our cache layer cached the HTML response instead. It had to infer the type from the filename because there was no `Content-Type` header attached. This cache was very sticky.
4. After rolling back we saw the HTML response (from the cache) instead
of the appropriate response from the upstream Unleash application.
This PR does a few things:
1. When responding with the HTML entry point, it adds a `Content-Type: text/html` header.
2. When the client requests an asset (a path that ends with an extension) that doesn't exist, it returns a 404 and instructs the response not to be cached (`Cache-Control: no-cache`). This prevents misses from getting cached. A sketch of the behaviour is included below.
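A minimal sketch of that behaviour, assuming an Express fallback handler that only runs when the static file middleware found no match; function and variable names are illustrative:

```ts
import type { Request, Response } from 'express';

// Paths that end with a file extension are treated as asset requests.
const looksLikeAsset = (path: string): boolean => /\.[a-z0-9]+$/i.test(path);

export const serveSpaEntry =
    (indexHtml: string) =>
    (req: Request, res: Response): void => {
        if (looksLikeAsset(req.path)) {
            // Unknown asset: return 404 and tell caches not to hold on to the miss.
            res.set('Cache-Control', 'no-cache').status(404).end();
            return;
        }
        // HTML entry point: make the content type explicit for downstream caches.
        res.set('Content-Type', 'text/html').send(indexHtml);
    };
```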
## Discussion points
To me, there doesn't seem to be much test infrastructure around serving the SPA application. If I've missed it, please point out where it is and an appropriate test can be added.
Adds a `postgres_version` gauge to allow us to see the Postgres version in Prometheus and to post it upstream when version checking. Depends on
https://github.com/bricks-software/version-function/pull/20 being merged
first to ensure our version-function doesn't crash when given the
postgres-version data.
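A minimal sketch of how such a gauge could look with prom-client; the metric name, label, and query are assumptions:

```ts
import { Gauge } from 'prom-client';
import type { Knex } from 'knex';

// Expose the Postgres version as a label; the gauge value itself is always 1.
const postgresVersion = new Gauge({
    name: 'postgres_version',
    help: 'Postgres version, reported as a label',
    labelNames: ['version'],
});

export async function registerPostgresVersion(db: Knex): Promise<void> {
    const result = await db.raw('SHOW server_version');
    const version = result?.rows?.[0]?.server_version ?? 'unknown';
    postgresVersion.labels(version).set(1);
}
```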
1. Added the new schema and tests
2. The controller also accepts the data
3. Currently also sending fake data from the frontend
Next steps: implement the service/store layer and the frontend.
This fixes the case where a customer has thousands of strategies, causing the React UI to crash. We still consider using that many strategies incorrect, and this is more of a workaround to help the customer out of a crashing state.
We put it behind a flag called `manyStrategiesPagination` and plan to enable it only for the customer in trouble.
We are now also sending the project id to Prometheus, querying it from the database. This sets us up for a Grafana dashboard.
The metrics are also behind a flag, just in case they cause CPU/memory issues.
This PR updates the project service to automatically create a project id
if it is not provided. The feature is behind a flag. If an ID is
provided, it will still attempt to use that ID instead.
This PR adds a function to automatically generate a project ID on creation. Using it when the id is missing will be handled in follow-up PRs.
The function uses the existing `slug` package to create a slug, and then takes 12 characters of a uuidv4 string to generate an ID.
The included tests check that the 12-character hash is added and that the resulting string is URL-friendly (by checking that `encodeURIComponent` doesn't change it).
We could also test a lot of edge cases (such as dealing with double
spaces, trimming the string, etc), but I think that's better handled by
the library itself (but you can check out what I removed in
2d9bcb6390
for an idea).
The function doesn't really need to be in the service; it could be moved to a util. But for proximity, I'll create it here first.
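A minimal sketch of the described approach; the separator and the exact slice of the uuid are assumptions:

```ts
import slug from 'slug';
import { randomUUID } from 'node:crypto';

// Slugify the project name and append a short uuid-derived hash to keep IDs unique.
export const createProjectId = (name: string): string => {
    const base = slug(name);
    const hash = randomUUID().replace(/-/g, '').slice(0, 12);
    return `${base}-${hash}`;
};

// e.g. createProjectId('My New Project') => 'my-new-project-1a2b3c4d5e6f'
```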
Regarding ticket #6892:
I would like to enable the use of a CA certificate without requiring
other certificates. This would be useful for AWS Helm, as AWS only
provides a single PEM file for DB connections.
We are getting questions from engineers asking why they don't see lifecycle. The same will happen with our customers. Now customers will see the lifecycle component unified across features.
Final rank has always been ordered correctly by default, but after 5.12 I've seen cases where it sometimes isn't. Just to be extra sure, I'm ordering it explicitly.
Add a flag to enable/disable the new UI for project creation.
This flag is separate from the implementation on the back end so that we can enable one without the other (but it uses flag dependencies in Unleash, so that we can never enable the new UI without the new back end).
I have not set the flag to `true` in server startup because the form
doesn't work yet, so it's a manual step for now.
This PR removes the workaround introduced in
https://github.com/Unleash/unleash/pull/6931. After
https://github.com/ivarconr/unleash-enterprise/pull/1268 has been
merged, this should be safe to apply.
Notably, this PR:
- tightens up the type for the enable change request function, so we can
use that to inform the code
- skips trying to do anything with an empty array
The last point is less important than it might seem, because both the env validation and the current implementation of the callback are essentially no-ops when there are no envs. However, that's hard to enforce. If we just exit early, then at least we know nothing happens.
Optionally, we could do something like this instead, but I'm not sure
it's better or worse. Happy to take input.
```ts
const crEnvs = newProject.changeRequestEnvironments ?? [];
await this.validateEnvironmentsExist(crEnvs.map((env) => env.name));
const changeRequestEnvironments =
    await enableChangeRequestsForSpecifiedEnvironments(crEnvs);
data.changeRequestEnvironments = changeRequestEnvironments;
```
This PR improves the handling of change request enables on project
creation in two ways:
1. We now verify that the envs you try to enable CRs for exist before
passing them on to the enterprise functionality.
2. We include data about environments and change request environments in
the project created events.
Due to how we handle redirects for the embedded proxy, we ended up counting the same request twice. This PR adds a boolean to `res.locals`, which we check to avoid double counting.
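A minimal sketch of the idea, assuming Express middleware; the flag name on `res.locals` is an assumption:

```ts
import type { Request, Response, NextFunction } from 'express';

// Mark the response once it has been counted so a redirected/re-entered
// request isn't counted a second time.
export const countOnce =
    (count: (req: Request) => void) =>
    (req: Request, res: Response, next: NextFunction): void => {
        if (!res.locals.requestCounted) {
            res.locals.requestCounted = true;
            count(req);
        }
        next();
    };
```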
## About the changes
What's going on is the following:
1. When a token is not found in the token cache, we try to find it in the db.
2. To prevent a denial-of-service attack using invalid tokens, we cache the invalid tokens so we don't hit the db.
3. The issue is that we stored this token in the cache regardless of whether we found it or not. And if the token was valid the first time, we'd add a timestamp to avoid querying this token again the next time.
4. On the next iteration the token should be in the cache:
54383a6578/src/lib/services/api-token-service.ts (L162)
but for some reason it is not, and therefore we have to make a query. This is where the query-prevention mechanism kicks in, because it finds the token in the cache and kicks us out. This PR fixes this by only storing tokens in the miss cache when they are not found:
54383a6578/src/lib/services/api-token-service.ts (L164-L165)
The token was added to the cache because we were not checking whether it had expired. We've now added a check, and we also log expired tokens.
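A minimal sketch of the resulting lookup flow, with assumed names and shapes (the real implementation lives in `api-token-service.ts`):

```ts
type ApiToken = { secret: string; tokenName: string; expiresAt?: Date };

const activeTokens = new Map<string, ApiToken>();
const queryMisses = new Map<string, number>(); // secret -> time we last queried and found nothing

// Hypothetical stand-in for the single-token DB lookup.
declare function getTokenFromDb(secret: string): Promise<ApiToken | undefined>;

async function resolveToken(secret: string): Promise<ApiToken | undefined> {
    const cached = activeTokens.get(secret);
    if (cached) return cached;

    // A recent miss: don't hit the DB again for this secret.
    if (queryMisses.has(secret)) return undefined;

    const token = await getTokenFromDb(secret);
    if (!token) {
        queryMisses.set(secret, Date.now()); // cache only genuine misses
        return undefined;
    }
    if (token.expiresAt && token.expiresAt < new Date()) {
        console.info(`Token ${token.tokenName} has expired`); // expired tokens are logged, not cached
        return undefined;
    }
    activeTokens.set(secret, token);
    return token;
}
```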
Some improvement opportunities:
- I don't think we display in the UI that a token has expired, which probably contributed to this issue
- When a token has expired, we don't return a specific error message or error response saying so, which is not very helpful for users
This PR introduces a configuration option (`authentication.demoAllowAdminLogin`) that allows you to log in as admin when using demo authentication. To do this, use the username `admin`.
## About the changes
The `admin` user currently cannot be accessed in `demo` authentication
mode, as the auth mode requires only an email to log in, and the admin
user is not created with an email. This change allows logging in as the admin user only if `AUTH_DEMO_ALLOW_ADMIN_LOGIN` is set to `true` (or the corresponding `authDemoAllowAdminLogin` config is enabled).
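A minimal sketch of the gate described above; the function and option names are illustrative:

```ts
type DemoAuthOptions = { demoAllowAdminLogin?: boolean };

// Only let the `admin` username through when the option/env flag is enabled;
// every other (email-based) demo user behaves as before.
export function isDemoLoginAllowed(username: string, options: DemoAuthOptions): boolean {
    if (username === 'admin') {
        return options.demoAllowAdminLogin === true;
    }
    return true;
}
```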
Closes #6398
### Important files
[demo-authentication.ts](https://github.com/Unleash/unleash/compare/main...00Chaotic:unleash:feat/allow_admin_login_using_demo_auth?expand=1#diff-c166f00f0a8ca4425236b3bcba40a8a3bd07a98d067495a0a092eec26866c9f1R25)
## Discussion points
Can continue discussion of [this
comment](https://github.com/Unleash/unleash/pull/6447#issuecomment-2042405647)
in this PR.
---------
Co-authored-by: Thomas Heartman <thomasheartman+github@gmail.com>
This commit adds an `environments` property to the project created payload. The list contains only the environments that the project has enabled.
The point of adding it is that it gives you a better overview of what you have created.
This PR adds the `projectListNewCards` flag to the constant defined in
`experimental.ts`. This should allow the API to pass that value to the
front end.
## About the changes
Add time metrics to relevant queries:
- get
- getAll
- bulkInsert
- count
- exists
Ignored because they might not be that relevant:
- insert
- delete
- deleteAll
- update
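A minimal sketch of how query timing could be recorded, assuming prom-client; the metric, label, and helper names are illustrative rather than the actual Unleash helpers:

```ts
import { Histogram } from 'prom-client';

const dbQueryDuration = new Histogram({
    name: 'db_query_duration_seconds',
    help: 'Duration of store queries',
    labelNames: ['store', 'action'],
});

// Wrap a store call and record how long it took, labelled by store and action.
export async function timed<T>(
    store: string,
    action: string,
    query: () => Promise<T>,
): Promise<T> {
    const end = dbQueryDuration.startTimer({ store, action });
    try {
        return await query();
    } finally {
        end();
    }
}

// usage: const all = await timed('feature-store', 'getAll', () => featureStore.getAll());
```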
## About the changes
This PR removes the feature flag `queryMissingTokens`, which was fully rolled out.
It introduces a new way of checking edgeValidTokens, controlled by the flag `checkEdgeValidTokensFromCache`, that relies on the cached data but hits the DB if needed.
The assumption is that most of the time Edge will find tokens in the cache, except for the few cases in which a new token is queried. Of all the tokens, we expect at most one to hit the DB, and in that case querying a single token should be better than querying all the tokens.
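A minimal sketch of the cache-first bulk check, with assumed names for the cache and the single-token lookup:

```ts
type ApiToken = { secret: string; tokenName: string };

// Hypothetical stand-ins for the token cache and the per-token DB query.
declare const tokenCache: Map<string, ApiToken>;
declare function queryTokenFromDb(secret: string): Promise<ApiToken | undefined>;

export async function getValidTokens(secrets: string[]): Promise<ApiToken[]> {
    const found: ApiToken[] = [];
    const missing: string[] = [];
    for (const secret of secrets) {
        const cached = tokenCache.get(secret);
        if (cached) found.push(cached);
        else missing.push(secret);
    }
    // Usually empty or a single new token: query those individually instead of
    // loading every active token from the DB.
    const fromDb = await Promise.all(missing.map(queryTokenFromDb));
    found.push(...fromDb.filter((t): t is ApiToken => t !== undefined));
    return found;
}
```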
This makes it configurable either through a single JSON file with all three certificates as separate keys, or via separate files, one each for ca/cert/key.
fixes #6718
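A minimal sketch of the resulting DB SSL options, assuming a node-postgres style `ssl` object; the paths and env variables are illustrative:

```ts
import fs from 'node:fs';

// Only the CA is strictly required (e.g. the single PEM bundle AWS provides);
// cert and key are included only when the corresponding files are configured.
const ssl: { ca: string; cert?: string; key?: string } = {
    ca: fs.readFileSync(process.env.DATABASE_SSL_CA_FILE ?? '/certs/ca.pem', 'utf8'),
};

if (process.env.DATABASE_SSL_CERT_FILE && process.env.DATABASE_SSL_KEY_FILE) {
    ssl.cert = fs.readFileSync(process.env.DATABASE_SSL_CERT_FILE, 'utf8');
    ssl.key = fs.readFileSync(process.env.DATABASE_SSL_KEY_FILE, 'utf8');
}
```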
I've tried to use/add the audit info to all events I could see/find.
This makes this PR necessarily huge, because we do store quite a few
events.
I realise it might not be complete yet, but tests
run green, and I think we now have a pattern to follow for other events.
This PR adds an optional function parameter to the `createProject`
function that is intended to enable change requests for the newly
created project.
The assumption is that all the logic within will be decided in the
enterprise impl. The only thing we want to verify here is that it is
called after the project has been created.
This PR adds functionality to the `createProject` function to choose
which environments should be enabled when you create a new project. The
new `environments` property is optional and omitting it will make it
work exactly as it does today.
The current implementation is fairly strict. We have some potential
ideas to make it easier to work with, but we haven't agreed on any yet.
Making it this strict means that we can always relax the rules later.
The rules are (codified in tests):
- If `environments` is not provided, all non-deprecated environments are
enabled
- If `environments` is provided, only the environments listed are
enabled, regardless of whether they're deprecated or not
- If `environments` is provided and is an empty array, the service throws an error. The API should disallow that via the schema anyway, but this catches it in case it sneaks in some other way.
- If `environments` is provided and contains one or more environments that don't exist, the service throws an error. While we could ignore them, that would lead to more complexity because we'd also have to check that at least one of the environments is valid. It also leads to silent ignoring of errors, which may or may not be good for the user experience.
The API endpoint for this sits in enterprise, so no customer-facing
changes are part of this.
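A minimal sketch of those rules, with assumed names and types:

```ts
type Environment = { name: string; deprecated: boolean };

export function resolveProjectEnvironments(
    requested: string[] | undefined,
    allEnvironments: Environment[],
): string[] {
    if (requested === undefined) {
        // Default: every non-deprecated environment.
        return allEnvironments.filter((env) => !env.deprecated).map((env) => env.name);
    }
    if (requested.length === 0) {
        throw new Error('"environments" must contain at least one environment');
    }
    const known = new Set(allEnvironments.map((env) => env.name));
    const unknown = requested.filter((name) => !known.has(name));
    if (unknown.length > 0) {
        throw new Error(`Unknown environments: ${unknown.join(', ')}`);
    }
    // Use exactly what was listed, deprecated or not.
    return requested;
}
```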
We encountered an issue with a customer because this query was returning
3 million rows. The problem arose from each instance reporting
approximately 100 features, with a total of 30,000 instances. The query
was joining these, thus multiplying the data. This approach was fine for
a reasonable number of instances, but in this extreme case, it did not
perform well.
This PR modifies the logic; instead of performing outright joins, we are
now grouping features by environment into an array, resulting in just
one row returned per instance.
I tested locally with the same dataset. Previously, loading this large
instance took about 21 seconds; now it has dropped to 2 seconds.
Although this is still significant, the dataset is extensive.
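A minimal sketch of the reshaped query, with assumed table and column names, using knex with Postgres JSON aggregation:

```ts
import type { Knex } from 'knex';

// One row per instance, with the reported features rolled up into a JSON array,
// instead of one row per (instance, feature) combination.
export async function getInstancesWithFeatures(db: Knex) {
    return db('client_instances as ci')
        .leftJoin('client_metrics_env as cme', 'cme.app_name', 'ci.app_name')
        .select(
            'ci.app_name',
            'ci.instance_id',
            db.raw(
                "json_agg(json_build_object('environment', cme.environment, 'feature', cme.feature_name)) as features",
            ),
        )
        .groupBy('ci.app_name', 'ci.instance_id');
}
```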
Previously, we were not validating that the ID was a number, which
sometimes resulted in returning our database queries (source code) to
the frontend. Now, we have validation middleware.
Previously, we were extracting the project from the token, but now we
will retrieve it from the session, which contains the full list of
projects.
This change also resolves an issue we encountered when the token was a multi-project token, formatted as `[]:dev:token`. Previously, we were unable to display the exact list of projects; now it will show the exact project names.
<details>
<summary>Feature Flag Cleanup</summary>
| Stale Flag | Value |
| ---------- | ------- |
| stripClientHeadersOn304 | true |
</details>
<details>
<summary>Trigger</summary>
https://github.com/Unleash/unleash/issues/6559#issuecomment-2058848984
</details>
<details>
<summary>Bot Commands</summary>
`@gitar-bot cleanup stale_flag=value` will cleanup a stale feature flag.
Replace `stale_flag` with the name of the stale feature flag and `value`
with either `true` or `false`.
</details>
---------
Co-authored-by: Gitar Bot <noreply@gitar.co>