[Feature] 7.0 support #554
We're waiting impatiently 👍
Can we please get an update from Elastic on this issue?
Hi @abraxxa, work is in progress to update ansible-elasticsearch to 7.x.
@jmlrt thanks for the update!
Great! Will you also do a release of the role?
@abraxxa The PR merge automatically closed your issue, but yes, we still need to do the release.
Bump. Is the 7.0 support official and soon to be "officially released"? I'm good to pull the latest version and use it, but some of my colleagues would feel more comfortable with a tag and an official release 😃 Thanks!
Is there anything that we non-Elastic-co folks can do to help this release? The README says that to get started we should do:
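(The command block quoted here was lost in extraction. Based on the 7.0.1 mention that follows, the README's getting-started step was presumably an Ansible Galaxy install pinned to a tag, something like the sketch below — the exact version string is an assumption, not a confirmed quote.)

```shell
# Hypothetical reconstruction of the README's install step —
# the pinned version is inferred from context.
ansible-galaxy install elastic.elasticsearch,7.0.1
```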
But that obviously results in an error because 7.0.1 isn't tagged. Cloning master seems to work as expected, but the mismatch between the docs & the tagged release over a week after the merge is giving me pause. Is there an unseen testing process running that needs to complete? |
Sorry about the delay with the release and the mismatch between the README file and the tagged release. We still have 2 PRs to merge before adding the release tag:
Unfortunately, I'm not available to work on it this week and will continue next week. Meanwhile, using the repository's master branch is fine and tested.
The removal of multi-instance support is quite a large change! |
Why multi-instances support should be removed? |
Multi-instance support was implemented when it was the only solution available to optimize the usage of big servers with a lot of memory, as well as to allow cluster testing locally. It came with a lot of drawbacks, as it required overriding the behavior of the official Elastic packages by changing the standard configuration path, data path, and log path (even in single-instance configurations), both in this Ansible role and in the Puppet module we also provide. One major point is the need to provide customized init scripts (systemd or SysV init, depending on the OS) instead of the ones shipped with the official packages. This brings confusion to a lot of users and is responsible for a lot of the complexity and maintenance burden in our code.

You are no longer restricted to this solution when you need to set up more than one Elasticsearch instance on the same host, as Elastic fully supports Docker containers and provides official Docker images. We now recommend Docker containers for use cases that require installing multiple Elasticsearch instances on the same host. That's why we are taking advantage of the Elasticsearch 7.0 major release to integrate this breaking change and remove multi-instance support.

This change should still allow you to change directory paths by overriding Ansible variables, as well as the HTTP and transport ports. For existing multi-instance use cases, the master branch already works with Elasticsearch 7.x and multi-instance support. You only need to use
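For illustration, the variable overrides mentioned above might look like the following sketch. The variable names (`es_data_dirs`, `es_log_dir`, and the `es_config` port keys) are taken from the role's documented defaults; the values here are assumptions to adapt to your environment:

```yaml
# Hypothetical single-instance overrides for ansible-elasticsearch 7.x —
# adjust paths and ports to your own setup.
es_data_dirs:
  - "/opt/elasticsearch/data"
es_log_dir: "/opt/elasticsearch/logs"
es_config:
  http.port: 9200
  transport.port: 9300
```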
Hello @jmlrt |
Hello @wixaw,
OK, thanks |
Hi everyone, |
Yeahhh ! Thanks 👍 |
Thanks! |
@jmlrt Thanks and congrats on the release! |
I installed a cluster with this role and I didn't change the defaults. Can I move
Hello @fedelemantuano, we didn't document or test it, but moving some folders is of course possible. Something like this should work:
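(The procedure block was lost in extraction. A minimal sketch of what such a move might look like, assuming the default package data path and the role's `es_data_dirs` variable — untested, so back up first:)

```shell
# Hypothetical sketch — stop the node, copy data, repoint the role, restart.
sudo systemctl stop elasticsearch                      # stop the node first
sudo rsync -a /var/lib/elasticsearch/ /new/data/path/  # preserve ownership and permissions
# then set es_data_dirs to /new/data/path in your inventory, re-run the
# playbook so elasticsearch.yml is regenerated, and finally:
sudo systemctl start elasticsearch
```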
If you prefer to wait for more testing, feel free to create an issue in the repo to request testing and documenting this procedure, and we'll do it when we have a little more time to dedicate to the ansible-elasticsearch role.
Hi @jmlrt, thanks for your answer. I will open the issue because this can be a common problem. |
In light of the recent availability of dual AMD EPYC Rome (128-core) server architectures with NVMe disks, and the trend of ever-increasing core counts on a single server, I would say that the multi-instance use cases should be officially brought back from the dead. I would also argue that K8s, Docker, or any other form of container orchestration increases overhead (by a minimum of 3-5%) and latency compared to a bare-metal installation. Please let me know your opinions on this.
Hi @zez3, As mentioned in #554 (comment), multi-instance support requires too many workarounds, like rewriting init scripts and configuration files instead of using the ones provided in the official packages, and ensuring they stay compliant with each new version. This is not something we can afford to maintain and support, as a multi-instance setup with all these workarounds may diverge too much from the official packages. Docker overhead is really minimal, and I think the advantages outweigh the disadvantages here.
Hey Julien, regarding the last sentence, I would argue that in our case the Docker/container advantages would not tip the balance in their favor. We don't touch the production cluster much — perhaps once a year, or on occasion when there are new "features"/bug fixes that we really need. I'm not sure what minimal means to you (please feel free to elaborate, and corroborate with internal performance testing if available), but to me, spending 3-5% at a minimum and >10% at a maximum of 128 cores results in >12 cores eaten; I could really use them somewhere else. And that is just the CPU. Regarding disk IOPS, latency, and bandwidth, overlayfs is another overlay: a virtual upper-level filesystem on top of the real FS, with its own required syncs. Please don't get me wrong, I have nothing against containers, and I fully agree that they have their uses. I use them myself where needed.
And then I just found this: |
When will the playbook be ready for 7.0?