Improve handling of large mappings #1540
Comments
Thanks! The reason we fetch all of the fields is so that we can figure out whether there are any mapping inconsistencies that Kibana needs to know about when making fields available for aggregations and search. Of course, as in your case, this can be a HUGE request. We cache the results of the post-processing in Elasticsearch so that we don't have to do it again, but the first hit could be big. The plan right now is to allow Elasticsearch to script responses so that we can pre-process the mapping on the Elasticsearch side. See here: elastic/elasticsearch#7401
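To make the scale concrete, here is a minimal sketch of the kind of request involved: a single GET that returns the mapping of every matching index, measured and counted client-side. The endpoint, the logstash-* pattern, and the pre-5.x response shape (fields nested under document types) are assumptions for illustration, not Kibana's actual code path.

```python
import json
import urllib.request

ES = "http://localhost:9200"  # assumed Elasticsearch endpoint

# One request returns the mapping of every matching index; this is the
# payload that can reach tens of megabytes on a large cluster.
with urllib.request.urlopen(f"{ES}/logstash-*/_mapping") as resp:
    raw = resp.read()
print(f"mapping response: {len(raw) / 1024 / 1024:.1f} MB")

def count_fields(properties):
    """Recursively count leaf fields, descending into object fields."""
    total = 0
    for field in properties.values():
        if "properties" in field:  # object field: recurse into its children
            total += count_fields(field["properties"])
        else:
            total += 1
    return total

total = 0
for index_body in json.loads(raw).values():
    # Pre-5.x mappings nest one level deeper, under document types.
    for doc_type in index_body.get("mappings", {}).values():
        total += count_fields(doc_type.get("properties", {}))
print(f"total fields across all indices: {total}")
```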
Excellent :) Yes, right now we're looking at a 22 MB response with ~250,000 fields.
Ouch! One more use-case to keep in mind might be multi-tenant clusters, where one user using Kibana should not load all other users' fields/mappings into the browser.
Hey folks, any updates on this? Really want to try out Kibana 4, but it's a non-starter due to this issue.
This problem also exists in Kibana 3. I was wondering why Kibana loaded
There might be an easier way to fix this. I don't know if this is a half-assed fix, because the problem might present itself elsewhere.
It's not only fields that have a timestamp, but rather any date fields. Also, we're going to need to fetch the entire mapping anyway, as we need to process and cache it. It's possible the bulk of this work could be done on the backend, though. Note that we'll still need a way to display these in the field list; 250,000 fields is just a lot of fields.
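A rough sketch of what that backend pre-processing could look like: flatten each index's mapping once, collect every field's type, and flag date fields and cross-index type conflicts, so the browser only ever receives the compact summary. The function names and the pre-5.x mapping shape are assumptions, not Kibana's implementation.

```python
from collections import defaultdict

def flatten(properties, prefix=""):
    """Yield (dotted field name, type) pairs from a mapping's properties."""
    for name, field in properties.items():
        path = prefix + name
        if "properties" in field:  # object field: recurse into its children
            yield from flatten(field["properties"], prefix=path + ".")
        else:
            yield path, field.get("type", "object")

def summarize(mappings):
    """mappings: parsed response of GET <pattern>/_mapping (pre-5.x shape)."""
    types_seen = defaultdict(set)
    for index_body in mappings.values():
        for doc_type in index_body.get("mappings", {}).values():
            for path, field_type in flatten(doc_type.get("properties", {})):
                types_seen[path].add(field_type)
    date_fields = sorted(p for p, t in types_seen.items() if t == {"date"})
    conflicts = {p: sorted(t) for p, t in types_seen.items() if len(t) > 1}
    return date_fields, conflicts
```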
via @maguec: When loading a Kibana setup with thousands of fields, JavaScript causes Chrome to crash with the following error. In Firefox the script takes too long to unmarshal all of the fields, asks if I wish to continue, and pins the CPU. Is there a way to turn off the field typing?
Heap and CPU profile information are available below: http://shokunin.co/upload/kibana4.heapsnapshot
Got the same error as well, directly after opening up Kibana 4 after setup (/#/discover?_g=()):
We have Logstash with currently 14 days of indices, piping multiple different applications through Logstash.
Seems like this info should be cached by the K4 server rather than thrust upon the browser. Either way, our mapping is also too big for K4 to be useful, so +1.
@rashidkpc as I asked in #3674, do you know where the pain starts? I went down from +1k to 250 and it's still unusable. Will it lag even if closed indexes have +1k fields?
+1
I had the same problem.
I was able to work around the issue simply by reducing the number of fields. This is with Kibana 4.1.
In the index, the number of fields is small, but the content is very large. Now I have set discover:sampleSize=5; it is faster than before.
One reason I have seen this happen is due to unescaped "\" in field values. For example, in the field value "c:\users" the "\u" is interpreted as a unicode escape and therefore consumes a lot of time.
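If unescaped backslashes are the culprit, building documents with a real JSON encoder rather than string concatenation sidesteps the problem, since the encoder escapes them for you. A minimal illustration:

```python
import json

# Hand-built JSON invites this bug: in a raw string like "c:\users",
# the "\u" is read as the start of a unicode escape sequence.
doc = {"path": "c:\\users"}  # Python literal for the value c:\users
print(json.dumps(doc))       # {"path": "c:\\users"} -- valid JSON
```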
I have added some details around how to replicate this issue in #5331. Looks related to the way that we load fields using the
+1 for a fix
+1
I have a similar problem. In our case, ~1800 indexes are all in the same index pattern, each with ~380 fields. The indexes are created one per tenant, but all tenants have the same mapping schema, so that is not a problem.

My problem manifests itself where the payload exceeds the allowable size of the JavaScript object it is being stuffed into on the client, which throws an internal JavaScript exception. However, the WORST issue is that the "refresh mapping" completely ignores the fact that it had an exception on the GET and POSTs back an empty array for the Kibana field mappings, wiping out all prior field mappings! Minimally, a fix should be made to not POST back the empty data.

I have found a workaround by creating an index pattern that targets just one index. As I noted, all our indexes have the same mappings, so I can then use Fiddler to capture the POST for the single-index pattern, replace it with my "all indexes" pattern, and successfully post back the field mappings for the ~380 unique fields among all the indexes (see the sketch below).

Please address this; it is a huge maintenance problem and prone to wipe out our Kibana mappings frequently by those unaware of the "killer" POST-back-nothing "feature".
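The Fiddler step can also be scripted. Below is a hedged sketch, assuming the pre-5.x .kibana layout in which each index pattern is a document of type index-pattern whose _source carries the computed field list; the host, tenant pattern names, and document shape are assumptions to verify against your own cluster before writing anything back.

```python
import json
import urllib.parse
import urllib.request

ES = "http://localhost:9200"  # assumed Elasticsearch endpoint

def get_pattern_source(pattern_id):
    """Read a saved index-pattern document (assumed pre-5.x .kibana layout)."""
    url = f"{ES}/.kibana/index-pattern/{urllib.parse.quote(pattern_id)}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["_source"]

# 1. Read the field list Kibana successfully computed for a narrow,
#    single-index pattern (the tenant names here are illustrative).
narrow = get_pattern_source("tenant-0001")

# 2. Copy that field list onto the broad pattern instead of letting the
#    browser recompute it and POST back an empty list on failure.
body = json.dumps({"doc": {"fields": narrow["fields"]}}).encode()
req = urllib.request.Request(
    f"{ES}/.kibana/index-pattern/{urllib.parse.quote('tenant-*')}/_update",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
urllib.request.urlopen(req)
```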
@elastic/kibana-operations - this issue dates back to K4. Any idea what to do with this?
Confirming this is still an issue.
FYI: we are opening a support ticket with Elastic, as the workaround I noted above when we were on a 2.4 cluster is no longer viable in 5.x. The call for mappings now happens every time, even if current mappings already exist and without using the refresh field list button.
@elastic/kibana-app I think this one was mislabeled, so I switched to you.
I created a feature request to track this: #23947
When Kibana 4 first starts up, it looks for all indices named logstash-*, and then fetches the names of all fields in those indices. In our case, we have ~28 indices, each with hundreds, or sometimes thousands, of fields. My browser tab hit ~1 GB of RAM before it died.