
Unable to create cluster #5008

Closed · MilanDasek opened this issue Apr 16, 2018 · 12 comments

Comments

@MilanDasek

Hello,

I am trying to start a new cluster in AWS - existing VPC, new subnets, DNS (Route53 public zone).

  1. What kops version are you running? The command kops version will display
    this information.
    Version 1.9.0 (git-cccd71e67)

  2. What Kubernetes version are you running? kubectl version will print the
    version if a cluster is running or provide the Kubernetes version specified as
    a kops flag.
    tried 1.9.6 as well as 1.10.1

  3. What cloud provider are you using?
    aws

  4. What commands did you run? What is the simplest way to reproduce this issue?
    I tried to create a cluster (a representative command is sketched just below this list).

  5. What happened after the commands executed?
    DNS is not set properly - Protokube is failing

  6. What did you expect to happen?
    cluster up and running

  7. Please provide your cluster manifest. Execute
    kops get --name my.example.com -oyaml to display your cluster manifest.
    You may want to remove your cluster name and other sensitive information.
    below is config yaml

  8. Please run the commands with most verbose logging by adding the -v 10 flag.
    Paste the logs into this report, or in a gist and provide the gist link here.
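
(Re: question 4 - a representative create command for this topology; a hedged reconstruction, since the cluster name, state store, VPC ID, and DNS zone below are placeholders for the redacted values, not the exact command used:)

# Hypothetical reconstruction - all identifiers are placeholders.
kops create cluster \
  --name=redacted \
  --state=s3://redacted \
  --zones=eu-central-1a,eu-central-1b \
  --vpc=vpc-redacted \
  --network-cidr=10.2.0.0/21 \
  --dns-zone=ZONE_ID \
  --topology=public \
  --yes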

channels apply channel s3://path/path/addons/bootstrap-channel.yaml --v=10 --yes

output

root@ip-10-2-4-89:/# channels apply channel s3://redacted/redacted/addons/bootstrap-channel.yaml --v=10 --yes
I0416 13:41:14.509528    3601 loader.go:357] Config loaded from file /rootfs/var/lib/kops/kubeconfig
I0416 13:41:14.511430    3601 round_trippers.go:417] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: channels/v0.0.0 (linux/amd64) kubernetes/$Format" https://127.0.0.1/version
I0416 13:41:14.511763    3601 round_trippers.go:436] GET https://127.0.0.1/version  in 0 milliseconds
I0416 13:41:14.511843    3601 round_trippers.go:442] Response Headers:
Error: error querying kubernetes version: Get https://127.0.0.1/version: dial tcp 127.0.0.1:443: getsockopt: connection refused
Usage:
  channels apply channel [flags]

Flags:
  -f, --filename stringSlice   Apply from a local file
      --yes                    Apply update

Global Flags:
      --alsologtostderr                  log to standard error as well as files
      --config string                    config file (default is $HOME/.channels.yaml)
      --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string                   If non-empty, write log files in this directory
      --logtostderr                      log to standard error instead of files (default false)
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          log level for V logs (default 0)
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging


error querying kubernetes version: Get https://127.0.0.1/version: dial tcp 127.0.0.1:443: getsockopt: connection refused
  9. Anything else we need to know?
    My config:
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: redacted
  creationTimestamp: "2018-03-26T08:00:00Z"
spec:
  channel: stable
  kubernetesVersion: v1.10.1
  cloudProvider: aws
  cloudLabels:
    service: kube-dev
  configBase: s3://redacted/redacted
  dnsZone: ZONE_ID
  masterInternalName: api.internal.redacted
  masterPublicName: api.redacted
  kubeControllerManager:
    horizontalPodAutoscalerUseRestClients: true
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-1-eu-central-1a
      name: master-1-eu-central-1a
    - instanceGroup: master-2-eu-central-1b
      name: master-2-eu-central-1b
    - instanceGroup: master-3-eu-central-1a
      name: master-3-eu-central-1a
    name: main
  - etcdMembers:
    - instanceGroup: master-1-eu-central-1a
      name: master-1-eu-central-1a
    - instanceGroup: master-2-eu-central-1b
      name: master-2-eu-central-1b
    - instanceGroup: master-3-eu-central-1a
      name: master-3-eu-central-1a
    name: events
  fileAssets:
  - name: audit-policy-file
    path: /srv/kubernetes/audit-policy.yaml
    roles:
    - Master
    content: |
      apiVersion: audit.k8s.io/v1beta1
      kind: Policy
      omitStages:
        - "RequestReceived"
      rules:
        - level: RequestResponse
          resources:
          - group: ""
            resources: ["pods"]
        - level: Metadata
          resources:
          - group: ""
            resources: ["pods/log", "pods/status"]
        - level: None
          resources:
          - group: ""
            resources: ["configmaps"]
            resourceNames: ["controller-leader"]
        - level: None
          users: ["system:kube-proxy"]
          verbs: ["watch"]
          resources:
          - group: ""
            resources: ["endpoints", "services"]
        - level: None
          userGroups: ["system:authenticated"]
          nonResourceURLs:
          - "/api*"
          - "/version"
        - level: Request
          resources:
          - group: ""
            resources: ["configmaps"]
          namespaces: ["kube-system"]
        - level: Metadata
          resources:
          - group: ""
            resources: ["secrets", "configmaps"]
        - level: Request
          resources:
          - group: ""
          - group: "extensions"
        - level: Metadata
          omitStages:
            - "RequestReceived"
  kubeAPIServer:
    auditLogPath: /var/log/kube-apiserver-audit.log
    auditLogMaxAge: 10
    auditLogMaxBackups: 10
    auditLogMaxSize: 100
    auditPolicyFile: /srv/kubernetes/audit-policy.yaml
    runtimeConfig:
      batch/v2alpha1: "true"
      autoscaling/v2beta1: "true"
    oidcIssuerURL: https://sts.windows.net/redacted/
    oidcClientID: "spn:redacted"
    oidcUsernameClaim: upn
  kubelet:
    enableCustomMetrics: true
    kubeReserved:
        cpu: "100m"
        memory: "256Mi"
        ephemeral-storage: "1Gi"
    kubeReservedCgroup: "/kube-reserved"
    systemReserved:
        cpu: "100m"
        memory: "768Mi"
        ephemeral-storage: "1Gi"
    systemReservedCgroup: "/system-reserved"
    enforceNodeAllocatable: "pods"
    featureGates:
      CPUManager: "true"
      CustomPodDNS: "true"
      DevicePlugins: "true"
      ExperimentalCriticalPodAnnotation: "true"
      HugePages: "true"
      Initializers: "true"
      KubeletConfigFile: "true"
      MountPropagation: "true"
      PodPriority: "true"
      PVCProtection: "true"
      ResourceLimitsPriorityFunction: "true"
      ServiceNodeExclusion: "true"
      TaintBasedEvictions: "true"
      TaintNodesByCondition: "true"
      VolumeScheduling: "true"
  authorization:
    rbac: {}
  networkID: vpc-redacted
  networkCIDR: 10.2.0.0/21
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.2.0.0/18
  sshAccess:
  - 0.0.0.0/0
  kubernetesApiAccess:
  - 0.0.0.0/0
  topology:
    masters: public
    nodes: public
  subnets:
  - cidr: 10.2.4.0/24
    name: eu-central-1a
    type: Public
    zone: eu-central-1a
  - cidr: 10.2.5.0/24
    name: eu-central-1b
    type: Public
    zone: eu-central-1b

and

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2017-01-30T08:00:00Z"
  labels:
    kops.k8s.io/cluster: redacted
  name: master-1-eu-central-1a
spec:
  associatePublicIp: true
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08
  machineType: m3.medium
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - eu-central-1a

and

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2017-01-30T08:00:00Z"
  labels:
    kops.k8s.io/cluster: redacted
  name: nodes
spec:
  associatePublicIp: true
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08
  machineType: r4.xlarge
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - eu-central-1a
  - eu-central-1b
@huang-jy

Do you have more than one VPC in your account?

@MilanDasek

yes, but I do specify the ID in the config

...
networkID: vpc-redacted
networkCIDR: 10.2.0.0/21
...

@huang-jy

Can you just double-check to make sure the security group you're trying to add is associated with the correct VPC?
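
For example, the group's VPC association can be checked with the AWS CLI (the group ID below is a placeholder):

# Hypothetical group ID - substitute the real one.
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[0].VpcId' --output text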

@huang-jy

Oh, and make sure your CIDR matches or is a subset of the CIDR of your VPC
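
The VPC's CIDR can be confirmed the same way (using the redacted VPC ID from the spec):

aws ec2 describe-vpcs --vpc-ids vpc-redacted \
  --query 'Vpcs[0].CidrBlock' --output text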

@MilanDasek

I am not sure what you mean.
I have an SG attached to the VPC (because I am running another Kube cluster on 2 different subnets).
This new Kube cluster is creating new subnets:

subnets:
- cidr: 10.2.4.0/24
  name: eu-central-1a
  type: Public
  zone: eu-central-1a
- cidr: 10.2.5.0/24
  name: eu-central-1b
  type: Public
  zone: eu-central-1b

the error seems local anyway - 127.0.0.1
Error: error querying kubernetes version: Get https://127.0.0.1/version: dial tcp 127.0.0.1:443: getsockopt: connection refused
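
A few checks on a master can confirm the apiserver never came up - a hedged sketch, assuming the standard kops master layout (kubelet as a systemd unit, kube-apiserver as a static pod logging under /var/log):

curl -k https://127.0.0.1/version            # reproduce the refused connection
docker ps | grep apiserver                   # is kube-apiserver running at all?
journalctl -u kubelet --no-pager | tail -50  # kubelet errors while starting static pods
tail /var/log/kube-apiserver.log             # the apiserver's own log, if it started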

@huang-jy

Does ls ~/.kube error out or return a list of files?

@MilanDasek

Oh, and make sure your CIDR matches or is a subset of the CIDR of your VPC

[MD]: yes, correct.

I have to clarify. Nodes are created and DNS is updated accordingly for the nodes (10.x.x.x). Only the api DNS is not. Protokube cannot start on the masters - it fails with the error above.

I also have to say I have created a working cluster many times before using the same config, just with a lower Kubernetes version and without some of the newer features (e.g. enableCustomMetrics: true).

M

@MilanDasek

FIXED -

#3551
https://github.com/kubernetes/kops/blob/master/docs/releases/1.9-NOTES.md

authorization:
  rbac: {}
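
If amending an existing cluster, a change like this would typically be applied with the following sequence (the name and state store are placeholders):

kops edit cluster redacted --state=s3://redacted     # add the authorization block
kops update cluster redacted --state=s3://redacted --yes
kops rolling-update cluster redacted --state=s3://redacted --yes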

@MilanDasek

well - the cluster starts, but the default cluster roles are missing - cluster-admin, admin, edit, view, etc.

something is wrong or I am missing something
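
A quick way to check which of the standard user-facing roles the apiserver bootstrapped:

kubectl get clusterroles
kubectl get clusterroles cluster-admin admin edit view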

@huang-jy

Well, these are the ones I have

(screenshot of the cluster roles list omitted)

@MilanDasek

MilanDasek commented Apr 20, 2018

yes, it seems the same on my side - we don't have weave, but the rest seems similar

but where are the defaults?

@huang-jy

Good question ^_^ since this cluster is a personal one, I don't have any extra roles set -- maybe they're not added as standard?
