HA: Draft plan to reach high availability #1477
Comments
Continuing from #1468. This is the second version of Zowe-HA-Draft.docx. There are pending work items marked in the document. As suggested by Steve, we need more details regarding the hybrid z/OS + containers setup.
Here are the 3rd version of Zowe-HA-Draft.docx and the 2nd version of Zowe-HA-Architecture-View.pptx. After several discussions related to the Zowe Launcher, packaging, certificates, and the cross-memory server, the implementation plan has been added to the draft.
Maybe I am misunderstanding, but this document seems to imply that we need to write substantially different code for z/OS versus Docker. It would be inefficient to maintain both if they diverge too much, but I think it is possible to share more common code than what I understand from the draft document.
etcd uses gRPC. Does that mean the caching API is a gRPC server? About data storage, it says:
This says ActiveMQ is for Docker instead of etcd? Is one better than the other? Would we really launch with initial support for all three, and then also have etcd outside of the Caching API? Is there an advantage to making the Caching API a gRPC server versus a REST server? For consistency, should etcd sit behind the Caching API rather than be an alternative to it? It will probably make documentation and scripting confusing otherwise. So then we have four options: VSAM, ActiveMQ, Redis, and etcd. Do we need four?
How do Kubernetes and OpenShift replace the Zowe Launcher's responsibilities?
Thanks, Sean, for the comments. I think adding etcd confused many things. I agree with you that we shouldn't lock onto one solution like etcd, so I will remove the etcd-related requirements to keep the Caching API simple, generic, and storage-neutral. The Docker container section is about components running in separate containers, not the all-in-one container. I treat the all-in-one container as a development convenience and the fastest way to get a taste of Zowe. For production, I think running components in separate containers is more flexible to scale, and the lifecycle of those pods can be handled by Kubernetes natively.
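To make the "lifecycle handled by Kubernetes natively" point concrete, here is a minimal, purely hypothetical Deployment sketch for one component per container. The component name, image, and probe path are placeholders, not actual Zowe artifacts; the point is that `replicas` and the liveness probe give restart and redundancy behavior without the Zowe Launcher:

```yaml
# Hypothetical sketch only: names, image, and port are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zowe-gateway
spec:
  replicas: 2                    # Kubernetes keeps two instances running
  selector:
    matchLabels:
      app: zowe-gateway
  template:
    metadata:
      labels:
        app: zowe-gateway
    spec:
      containers:
        - name: gateway
          image: example/zowe-gateway:latest   # placeholder image
          ports:
            - containerPort: 7554
          livenessProbe:                       # unhealthy pods are restarted
            httpGet:
              path: /application/health        # assumed health endpoint
              port: 7554
```

With a manifest like this, pod restarts and scaling (`kubectl scale deployment zowe-gateway --replicas=3`) are handled by the cluster rather than by a Zowe-specific supervisor.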
That is not the current plan for Docker, as it is almost the opposite of CUPIDS and would therefore require a significant documentation rework, new code, and new testing. The current Docker image is all-in-one because it uses CUPIDS to configure which components run. The Zowe Launcher can be in there to assist with component uptime, and the entire container's uptime could be handled by Kubernetes or OpenShift, but we should save that for a phase 2 or 3 because it's very different work from the other tasks.
gRPC or not, the caching API solution seems like it involves a network. |
The Caching API will also be registered with the Discovery Service and routed by the Gateway, so both redundancy and failover will be handled similarly to other components. The only downside could be that the Caching API will be exposed externally; I have a security concern about this and will try to find more details. :( Thanks for all the valuable feedback and suggestions. Here are the 4th version of Zowe-HA-Draft.docx and the 3rd version of Zowe-HA-Architecture-View.pptx. These are the changes compared to the previous version(s):
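The failover behavior described above (the Gateway retrying another registered Caching API instance when one goes down) can be sketched in a few lines. This is an illustrative model only: the instance URLs, the path, and the `fetch` callable are hypothetical, not real Zowe APIs.

```python
# Hypothetical sketch of instance failover for a service registered with
# the Discovery Service and routed by the Gateway. Nothing here is the
# real Zowe implementation; `fetch` stands in for an HTTP call.

def get_with_failover(instances, path, fetch):
    """Try each registered instance in turn; return the first success.

    `fetch(url)` returns the response body, or raises ConnectionError
    when the instance is down, in which case we fall through to the
    next registered instance.
    """
    last_error = None
    for base_url in instances:
        try:
            return fetch(base_url + path)
        except ConnectionError as err:
            last_error = err  # this instance is down; try the next one
    raise last_error or ConnectionError("no instances registered")


if __name__ == "__main__":
    # Simulate one dead and one healthy Caching API instance.
    def fake_fetch(url):
        if "cache-1" in url:
            raise ConnectionError("cache-1 is down")
        return "value-from-" + url

    instances = ["https://cache-1:7555", "https://cache-2:7555"]
    print(get_with_failover(instances, "/api/v1/cache/my-key", fake_fetch))
```

The same pattern applies to any component behind the Gateway: as long as at least one registered instance is healthy, callers never see the failure.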
Pending questions:
I'm still trying to figure out how to organize the lines; please forgive my PowerPoint skills.
This is the latest (5th) version of Zowe-HA-Draft.docx. The changes are:
The architecture view is unchanged; it remains the same as the 3rd version of Zowe-HA-Architecture-View.pptx.
For the purpose of preparing the draft, this GitHub issue has been fulfilled. We have implementation issues to track progress.
Taking prior component HA research, create a plan for Zowe to become highly available in sysplex environments.
Build a proof of concept of the plan