Failing test: X-Pack Security API Integration Tests (Session Concurrent Limit).x-pack/test/security_api_integration/tests/session_concurrent_limit/cleanup·ts - security APIs - Session Concurrent Limit Session Concurrent Limit cleanup should properly clean up sessions that exceeded concurrent session limit even for multiple providers #149091
Pinging @elastic/kibana-security (Team:Security)
New failure: CI Build - main
Skipped. main: 74d9321
Duplicate of #149090
Resolves #148914 Resolves #149090 Resolves #149091 Resolves #149092

In this PR, I'm making the following Task Manager bulk APIs retry whenever conflicts are encountered: `bulkEnable`, `bulkDisable`, and `bulkUpdateSchedules`. To accomplish this, the following had to be done:

- Revert the original PR (#147808) because the retries didn't load the updated documents whenever version conflicts were encountered and the approach had to be redesigned.
- Create a `retryableBulkUpdate` function that can be re-used among the bulk APIs.
- Fix a bug in `task_store.ts` where the `version` field wasn't passed through properly (no type safety for some reason).
- Remove `entity` from being returned on bulk update errors. This helped re-use the same response structure when objects weren't found.
- Create a `bulkGet` API on the task store so we get the latest documents prior to an ES refresh happening.
- Create a single mock task function that mocks task manager tasks for unit test purposes. This was necessary because other places were doing `as unknown as BulkUpdateTaskResult` and escaping type safety.

Flaky test runs:
- [Framework] https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/1776
- [Kibana Security] https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/1786

Co-authored-by: kibanamachine <[email protected]>
New failure: CI Build - 8.8
New failure: CI Build - 8.8
Ran another flaky test runner just to be sure, but this looks tied to a series of CI failures on Friday.
…ssion limit for users (elastic#174748)

## Summary

Closes elastic#149091

This PR addresses the potential issue of a session not being found in the session index by introducing a timeout before attempting to write the next one. Passing these [changes through FTR](https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/4854) makes it pass 100% of the time with 400 test runs.
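The fix described above amounts to waiting, with a bounded timeout, until a condition holds (e.g. the previous session document is visible in the index) before the next write. A minimal generic sketch of such a wait helper follows; the name `waitFor` and its defaults are assumptions, not the actual FTR utility:

```typescript
// Minimal sketch of a poll-with-timeout helper, similar in spirit to the
// timeout the PR introduces before writing the next session.
// Not the actual Kibana/FTR code.
async function waitFor(
  check: () => Promise<boolean>,
  timeoutMs = 5000,
  intervalMs = 100
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return; // e.g. "session doc is now searchable"
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`condition not met within ${timeoutMs}ms`);
}
```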
## Summary

This PR is for troubleshooting elastic#149091. It duplicates the timeout check per session from the `...legacy sessions` test (see elastic#174748) for the `...multiple providers` test.

Note: we are not seeing the additional log of 'Failed to write a new session' in any of the recent failures. Could not reproduce the issue with a flaky test runner: https://buildkite.com/elastic/kibana-flaky-test-suite-runner/builds/4949
New failure: CI Build - 8.13
New failure: CI Build - 8.12
New failure: CI Build - main
New failure: CI Build - 8.13
New failure: CI Build - main
New failure: CI Build - main
New failure: CI Build - main
New failure: kibana-on-merge - 8.13
New failure: kibana-on-merge - 8.13
New failure: kibana-elasticsearch-snapshot-verify - 8.13
New failure: kibana-on-merge - main
latest failure:
New failure: kibana-on-merge - main
New failure: kibana-on-merge - main
Came across an interesting thing: the test config sets `--xpack.security.session.cleanupInterval=5h`, yet the log shows that we invoke the cleanup task in the very first test and get a 500.
That means the cleanup job itself was already running, which shouldn't be the case, because we set the interval to 5h. So if the cleanup job is running on some different interval, it might corrupt sessions from the following tests even before we invoke it. cc @azasypkin. Probably you can shed some light on this; I don't have that much context around the FTR setup in general.
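For reference, a setting like this typically reaches the Kibana under test via `serverArgs` in the FTR config. The fragment below is an illustrative sketch only (the file names and surrounding structure are assumptions, not copied from the repo); it shows why a flag set in one config can be overridden if another config in the inheritance chain also sets it:

```typescript
// Illustrative FTR config fragment (hypothetical file layout).
// The cleanup interval is passed to Kibana as a CLI flag; if a base config
// also sets this flag, the effective value depends on argument ordering,
// which could explain a cleanup task firing earlier than expected.
export default async function ({ readConfigFile }: { readConfigFile: (path: string) => Promise<any> }) {
  const baseConfig = await readConfigFile(require.resolve('./base_config'));
  return {
    ...baseConfig.getAll(),
    kbnTestServer: {
      ...baseConfig.get('kbnTestServer'),
      serverArgs: [
        ...baseConfig.get('kbnTestServer.serverArgs'),
        '--xpack.security.session.cleanupInterval=5h',
      ],
    },
  };
}
```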
New failure: kibana-elasticsearch-snapshot-verify - 8.15 |
Same reason as described in #149091 (comment)
A test failed on a tracked branch
First failure: CI Build - main