[Metricbeat] Decrease timeout time of compose.EnsureUp functions #10894
Conversation
Why not change the default and overwrite the exceptions?
After telling me this, I have realized that the …
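For context, the pattern being discussed — one lowered default in compose.EnsureUp with per-module exceptions — could look roughly like the sketch below. This is a minimal, hedged illustration: the helper names (EnsureUp, EnsureUpWithTimeout), the 60-second default, and the health-check logic are assumptions for illustration, not the exact code in this PR.

```go
package compose

import (
	"os/exec"
	"testing"
	"time"
)

// EnsureUp starts the requested docker-compose services with a lowered
// default timeout (the value here is assumed for illustration).
func EnsureUp(t *testing.T, services ...string) {
	EnsureUpWithTimeout(t, 60, services...)
}

// EnsureUpWithTimeout is the per-module escape hatch: slow-starting services
// such as Kibana or Logstash can pass a larger timeout instead of the default.
func EnsureUpWithTimeout(t *testing.T, timeout int, services ...string) {
	args := append([]string{"up", "-d"}, services...)
	if out, err := exec.Command("docker-compose", args...).CombinedOutput(); err != nil {
		t.Fatalf("docker-compose up failed: %v\n%s", err, out)
	}
	deadline := time.Now().Add(time.Duration(timeout) * time.Second)
	for time.Now().Before(deadline) {
		if allRunning(services) {
			return
		}
		time.Sleep(time.Second)
	}
	t.Fatalf("services %v did not come up within %ds", services, timeout)
}

// allRunning is a stand-in health check: it only verifies that each service
// has a container ID listed; the real helper inspects container health.
func allRunning(services []string) bool {
	for _, s := range services {
		out, err := exec.Command("docker-compose", "ps", "-q", s).Output()
		if err != nil || len(out) == 0 {
			return false
		}
	}
	return true
}
```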
force-pushed from aca1092 to 014c8c4
Seems like Logstash is now failing. Could you increase the timeout there to 300?
Done. This was my initial idea: to detect which modules needed a higher timeout, and why.
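At the call site, that request would translate into something like the following (a hedged sketch: the import path and test name are assumptions; 300 is the value requested above):

```go
package logstash_test

import (
	"testing"

	"github.com/elastic/beats/libbeat/tests/compose"
)

// TestFetch gives Logstash a longer 300-second startup window, while other
// modules keep the lowered default used by compose.EnsureUp.
func TestFetch(t *testing.T) {
	compose.EnsureUpWithTimeout(t, 300, "logstash")
	// metricset fetch assertions would follow here
}
```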
jenkins, test this please
You didn't increase the timeout of Elasticsearch in this PR? Is that on purpose, or is it already increased?
Actually I didn't increase it; I just left the default timeout for all modules except Kibana and Logstash. But now that you mention it, I guess ES needs a slightly lower timeout than Kibana and Logstash for those two to work properly. My idea was to keep triggering builds on this branch before merging, just to check that the current timeouts were correct.
jenkins, test this please
This should be rebased as soon as #11230 is merged. |
force-pushed from 301ed2f to c7828bb
force-pushed from c7828bb to 8a5397d
jenkins, test this please
jenkins, test this please
jenkins, test this please
@ruflin I think we are safe merging this. All tests are fine and the ones that have been failing were always unrelated to timeouts. WDYT?
@sayden Overall SGTM. Would be great to get at least one fully green build. Most PRs I approved recently were all green, so I'm worried that if this PR stays red it might be related (though I can't see a reason at the moment why that should be true). I think you should rebase on master again to pick up the "fixed" flaky tests.
…rrors in Travis (cherry picked from commit 10e6b5acf27569e9dfcb62f1c069e9c62ba9de3e)
force-pushed from 8a5397d to cbdc1af
We are green @ruflin 😄 do you want to trigger jenkins once more?
@sayden Let's get it in.
Thanks for the help @ruflin and @cachedout