[SIEM][ML] Some jobs require 2 gigs of RAM per node in cloud #45316
Comments
Pinging @elastic/siem
@blaklaybul -- to clarify on the implementation details, this issue happens on the installation of jobs, and since we're using the …
Are the jobs failing to be created, or are they failing to start? If they are just failing to start due to space limitations, then you can start individual jobs one at a time using … We are thinking through how we can better handle memory management with respect to many jobs, large jobs, and the cloud free tiers. This is WIP. cc @droberts195
They're failing to be created. In this instance it's just the …

Also, for reference, we do use the …
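For reference, a minimal sketch of the "start jobs one at a time" approach suggested above, assuming access to Kibana Dev Tools; `siem-example-job` and `datafeed-siem-example-job` are placeholder ids, not the actual SIEM job ids installed by the template:

```
# Check that the job exists and what memory it is configured to use (analysis_limits)
GET _ml/anomaly_detectors/siem-example-job

# Open a single job rather than the whole group at once
POST _ml/anomaly_detectors/siem-example-job/_open

# Then start its datafeed
POST _ml/datafeeds/datafeed-siem-example-job/_start
```

This only helps when the jobs have already been created; per the comment above, in this case the creation itself is what fails.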
Kibana version:
7.4.0-BC3 (cloud)
Original install method (e.g. download page, yum, from source, etc.):
Cloud
Describe the bug:
We show an error about not being able to install jobs that require more ML node memory on every page load whenever only 1 GB of RAM is dedicated to the ML node. Some jobs within the template require more than 1 GB of RAM to run.
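One way to see the mismatch (a rough sketch, assuming Kibana Dev Tools; `siem-example-job` is a placeholder id) is to compare the cluster's effective ML memory limits against the memory a given job asks for:

```
# Effective ML memory limits for the cluster (defaults and maximums)
GET _ml/info

# The job's requested memory lives under analysis_limits.model_memory_limit
GET _ml/anomaly_detectors/siem-example-job
```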
Steps to reproduce:
If you select 1 GB of RAM for your ML node (which is the default) like so:
Then go to the SIEM page: you will get errors on every page load and every time you click the Anomaly button.
Stack traces from the error toaster:
Expected behavior:
Do not spam users constantly about not being able to install jobs that require more memory. Instead, let the user know in a more UI/UX-friendly way that some of the jobs cannot run with only 1 GB of memory.
Workarounds:
Bump your ML node from 1 GB to 2 GB of RAM, even temporarily, and then load the SIEM page so it can install its jobs. You can then bump it back down to 1 GB if you want.
Another option is to manually create the jobs that require more memory from the ML page, giving them dummy values, so the SIEM app does not keep trying to create them (a sketch follows below).
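A rough sketch of that second workaround, assuming the job id is known ahead of time; `siem-example-job` is a placeholder, and the tiny `model_memory_limit` and `count` detector are just dummy values so a job with that id exists without needing real memory:

```
PUT _ml/anomaly_detectors/siem-example-job
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      { "function": "count" }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  },
  "analysis_limits": {
    "model_memory_limit": "10mb"
  }
}
```

Once such a placeholder exists under the expected id, the repeated creation attempts (and the resulting error toasters) should stop; the placeholder can be deleted later once the ML node has enough memory for the real job.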