Cleanup and enhance spec #3

Closed
databus23 opened this issue Jul 21, 2017 · 2 comments

databus23 (Member) commented Jul 21, 2017

Right now the TPR (ThirdPartyResource) is very basic:

apiVersion: kubernikus.sap.cc/v1
kind: Kluster
metadata:
  annotations:
    creator: D062284
  labels:
    account: 8b25871959204ff1a27605b7bcf873f7
  name: test-8b25871959204ff1a27605b7bcf873f7

spec:
  account: 8b25871959204ff1a27605b7bcf873f7
  name: test
status:
  message: Creating Cluster
  state: Creating

So basically the user can only choose the name of the cluster.

A few things I have in mind (a rough sketch follows the list):

  • The Spec probably needs fields for:
    • Specifying the desired Kubernetes version
    • Specifying the number of nodes (kubelets), including size and AZ
    • Specifying the router and subnet ID all nodes should be deployed to
    • Specifying the region (in case we have more than one)
    • ...
  • We also need a place to store runtime state we generate about the cluster (probably Status, but I'm not sure this fits 100%). At minimum we need to store:
    • The OpenStack service user's name, password, and domain, plus the auth URL
    • All CA certs and keys
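A rough sketch of what such an extended manifest could look like. All the new field names below (version, region, routerID, subnetID, nodePools, the openstack and certificates status blocks) are hypothetical placeholders, not a final design:

apiVersion: kubernikus.sap.cc/v1
kind: Kluster
metadata:
  labels:
    account: 8b25871959204ff1a27605b7bcf873f7
  name: test-8b25871959204ff1a27605b7bcf873f7
spec:
  account: 8b25871959204ff1a27605b7bcf873f7
  name: test
  version: "1.7"              # desired Kubernetes version (hypothetical field)
  region: staging             # only relevant once there is more than one region
  routerID: <router-uuid>     # router all nodes should be attached to
  subnetID: <subnet-uuid>     # subnet all nodes should be deployed to
  nodePools:                  # number of kubelets, including size and AZ
  - name: default
    size: 2
    flavor: m1.small
    availabilityZone: <az>
status:
  state: Creating
  message: Creating Cluster
  openstack:                  # generated runtime state; arguably a Secret, not Status
    username: <service-user>
    domain: kubernikus
  certificates: <ca-certs-and-keys>   # likewise probably better kept in a Secret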

BugRoger (Contributor) commented Sep 9, 2017

Most of this has been implemented.

  • We're missing the Kubernetes version (Versioning for Helm Charts #27, Versioned Helm Releases #29).
  • It's possible to specify multiple node pools, each with an image and flavor. AZ is still missing (Add AZ to NodePools #30). The intention is to allow virtual/bare-metal nodes via flavors.
  • Router, LBSubnet, NetworkID and others are now configurable. If they are not configured, auto-discovery is attempted.
  • The region is extracted from the service catalog.
  • The password needs to move from the spec into a Secret (see the sketch after the manifest below).
  • Node/apiserver state needs to be reflected into the KlusterState somehow, so that we can easily expose it to the UI (Kluster State Reflector #22).

apiVersion: kubernikus.sap.cc/v1
kind: Kluster
metadata:
  labels:
    account: 8b25871959204ff1a27605b7bcf873f7
  name: michi-8b25871959204ff1a27605b7bcf873f7
  namespace: kubernikus
spec:
  kubernikusInfo:
    server: michi-8b25871959204ff1a27605b7bcf873f7.kluster.staging.cloud.sap
    serverURL: https://michi-8b25871959204ff1a27605b7bcf873f7.kluster.staging.cloud.sap
  name: michi
  nodePools:
  - config:
      repair: false
      upgrade: false
    flavor: m1.small
    image: coreos-stable-amd64
    name: af432
    size: 2
  - config:
      repair: false
      upgrade: false
    flavor: m1.tiny
    image: cirros-vmware
    name: ff3ab
    size: 2
  openstackInfo:
    authURL: https://identity-3.staging.cloud.sap
    domain: kubernikus
    lbSubnetID: 7ba2af7a-623c-44c6-9438-d46c19f45abd
    networkID: 2c731ffb-b8ac-48ac-9ccc-1f8c57fb61ce
    password: T>`&eu&9q]5x?m*Tfrj{
    projectID: 8b25871959204ff1a27605b7bcf873f7
    region: staging
    routerID: c1370e95-e45b-4b48-80f5-c5a478118006
    username: kubernikus-michi-8b25871959204ff1a27605b7bcf873f7
status:
  message: Creating Cluster
  state: Creating
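
On the open point of moving the password out of the spec: a minimal sketch of what that could look like, assuming a hypothetical Secret named after the kluster (neither the name nor the key layout is implemented yet):

apiVersion: v1
kind: Secret
metadata:
  name: michi-8b25871959204ff1a27605b7bcf873f7-secret   # hypothetical naming scheme
  namespace: kubernikus
type: Opaque
stringData:
  password: <openstack-service-user-password>   # removed from spec.openstackInfo

The controller would then read the credential from the Secret instead of from the Kluster resource, and the CA certs and keys mentioned above could live in the same place.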

databus23 (Member, Author) commented:

As Michael said, most of this is implemented or has changed since then. Closing this ticket. Further changes to the Kluster spec should be tracked in separate issues.
