[api-docs] raise memory limit again #107065
Conversation
Pinging @elastic/kibana-operations (Team:Operations)
Would you also mind going ahead and making the change in .buildkite/scripts/steps/on_merge_ts_refs_api_docs.sh as well?
💚 Build Succeeded
Metrics [docs]
* [api-docs] raise memory limit again
* update buildkite script too

Co-authored-by: spalger <[email protected]>
💔 Backport failed
Successful backport PRs will be merged automatically after passing CI. To backport manually run:
* [api-docs] raise memory limit again
* update buildkite script too

Co-authored-by: spalger <[email protected]>

# Conflicts:
#	test/scripts/jenkins_baseline.sh
In #106735 I raised the memory limit of node scripts/build_api_docs for PRs and hourly jobs, but forgot that baseline jobs use a different script. This PR updates the memory limit in both places, because the results show that raising the limit to 8000 helped but didn't eliminate OOM issues. The numbers on PRs look really good, with only one failure in a PR that ran after the memory limit was increased:

(only includes successful jobs; the x-axis is based on the time when the jobs completed, and the yellow line shows where we merged the memory limit increase)
The results of baseline and hourly jobs look similar:

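For illustration, here is a minimal sketch of the kind of change involved, assuming the limit is raised by passing --max-old-space-size to Node in the affected shell scripts; the exact new value isn't visible in this excerpt, so the number below is a placeholder rather than the value from the PR diff.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Raise the Node heap limit for the API docs build so the TypeScript-heavy
# build_api_docs script is less likely to hit OOM errors.
# 12000 is a placeholder value; see the PR diff for the actual limit.
node --max-old-space-size=12000 scripts/build_api_docs
```

The same change has to be applied in each CI entry point that runs the script, which is why the baseline job script (test/scripts/jenkins_baseline.sh) and the buildkite step were missed the first time around.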