This repository has been archived by the owner on Sep 16, 2019. It is now read-only.

Canonical Zone ID for endpoint: is not found #85

Open
onzo-operry opened this issue Feb 22, 2017 · 16 comments

Comments

@onzo-operry

onzo-operry commented Feb 22, 2017

Hi,

I have been banging my head against this for hours now and am really not sure what I am doing wrong.
I am trying the simplest setup, where (I assume) the ingress hosts should create Route 53 records in AWS.

Any hints would be welcome.

If I remove the ingress-nginx, mate doesn't crash, but then again it doesn't really do anything around creating records either.

Here is the log output:

time="2017-02-22T21:22:33Z" level=info msg="[AWS] Listening for events..." 
time="2017-02-22T21:22:33Z" level=debug msg="[Synchronize] Sleeping for 1m0s..." 
time="2017-02-22T21:22:33Z" level=info msg="ADDED: default/ingress" 
time="2017-02-22T21:22:33Z" level=info msg="[AWS] Processing (a1.prodb.onzo.cloud., 10.55.53.140, )\n" 
time="2017-02-22T21:22:33Z" level=info msg="ADDED: kube-system/kube-dns" 
time="2017-02-22T21:22:33Z" level=warning msg="[Service] The load balancer of service 'kube-system/kube-dns' does not have any ingress." 
time="2017-02-22T21:22:33Z" level=info msg="ADDED: infra/elasticsearch-internal" 
time="2017-02-22T21:22:33Z" level=warning msg="[Service] The load balancer of service 'infra/elasticsearch-internal' does not have any ingress." 
time="2017-02-22T21:22:33Z" level=info msg="ADDED: infra/infra-ingress-ingress" 
time="2017-02-22T21:22:35Z" level=debug msg="Getting a page of ALBs of length: 0" 
time="2017-02-22T21:22:35Z" level=debug msg="Getting a page of ELBs of length: 4" 
time="2017-02-22T21:22:35Z" level=error msg="Canonical Zone ID for endpoint:  is not found" 
panic: runtime error: index out of range

goroutine 20 [running]:
panic(0x14d63a0, 0xc420016060)
        /usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/zalando-incubator/mate/consumers.(*awsConsumer).Process(0xc42040f300, 0xc42034b200, 0xc420379e68, 0x3)
        /home/master/workspace/teabag_mate_master-5HM4GNJPGTJMNQYAZYN6JFXPZ2TWYP4JPPHRCHUMWCZK6LHVMNQA/mate/_jenkins_build/go/src/github.com/zalando-incubator/mate/consumers/aws.go:195 +0x45a
github.com/zalando-incubator/mate/consumers.(*awsConsumer).Consume(0xc42040f300, 0xc4201cc1e0, 0xc4201cc240, 0xc4201cc2a0, 0xc4201a57b0)
        /home/master/workspace/teabag_mate_master-5HM4GNJPGTJMNQYAZYN6JFXPZ2TWYP4JPPHRCHUMWCZK6LHVMNQA/mate/_jenkins_build/go/src/github.com/zalando-incubator/mate/consumers/aws.go:173 +0x2eb
github.com/zalando-incubator/mate/consumers.(*SyncedConsumer).Consume(0xc42040f360, 0xc4201cc1e0, 0xc4201cc240, 0xc4201cc2a0, 0xc4201a57b0)
        <autogenerated>:10 +0x72
github.com/zalando-incubator/mate/controller.(*Controller).consumeEndpoints(0xc4201a5770)
        /home/master/workspace/teabag_mate_master-5HM4GNJPGTJMNQYAZYN6JFXPZ2TWYP4JPPHRCHUMWCZK6LHVMNQA/mate/_jenkins_build/go/src/github.com/zalando-incubator/mate/controller/controller.go:124 +0x58
created by github.com/zalando-incubator/mate/controller.(*Controller).Watch
        /home/master/workspace/teabag_mate_master-5HM4GNJPGTJMNQYAZYN6JFXPZ2TWYP4JPPHRCHUMWCZK6LHVMNQA/mate/_jenkins_build/go/src/github.com/zalando-incubator/mate/controller/controller.go:116 +0x61

ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: a1.prodb.onzo.cloud
    http:
      paths:
      - backend:  
          serviceName: shop-svc
          servicePort: 80

service.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:11111111111111:certificate/87d69a55-abcd-4cb6-b4b4-2f211f09241d
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    zalando.org/dnsname: annotated-nginx.prod.onzo.cloud
  name: ingress-service
  labels:
    component: ingress-service
spec:
  selector:
    component: nginx-ingress
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    name: http
  - port: 443
    targetPort: 443
    name: https

nginx-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    component: nginx-ingress
spec:
  replicas: 1
  selector:
    component: nginx-ingress
  template:
    metadata:
      labels:
        component: nginx-ingress
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        args:
        - /nginx-ingress-controller
        - --default-backend-service=kube-system/default-http-backend

dummy backend

apiVersion: v1
kind: Service
metadata:
  name: shop-svc
  labels:
    app: shop
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: nginx
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

mate.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mate
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mate
      annotations:
        iam.amazonaws.com/role: route53-kubernetes
    spec:
      containers:
      - name: mate
        image: registry.opensource.zalan.do/teapot/mate:v0.5.1
        env:
        - name: AWS_REGION
          value: eu-west-1
        args:
        - --producer=kubernetes
        - --kubernetes-format={{.Namespace}}-{{.Name}}.prodb.onzo.cloud
        - --consumer=aws
        - --aws-record-group-id=my-cluster
        - --debug

Logs when running with --sync-only:

time="2017-02-22T21:33:00Z" level=debug msg="[Synchronize] Sleeping for 1m0s..."
[operry@peek01 default]$ kubectl logs mate-2617052176-2n5ws
time="2017-02-22T21:33:00Z" level=debug msg="[Synchronize] Sleeping for 1m0s..."
[operry@peek01 default]$ kubectl logs mate-2617052176-2n5ws
time="2017-02-22T21:33:00Z" level=debug msg="[Synchronize] Sleeping for 1m0s..."
[operry@peek01 default]$ kubectl logs mate-2617052176-2n5ws
time="2017-02-22T21:33:00Z" level=debug msg="[Synchronize] Sleeping for 1m0s..."
time="2017-02-22T21:34:00Z" level=info msg="[Synchronize] Synchronizing DNS entries..."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'default/kubernetes' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'default/shop-svc' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'infra/default-http-backend' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'infra/elasticsearch-discovery' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'infra/elasticsearch-internal' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'infra/idb-1-influxdb' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'infra/kibana-service' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'infra/logstash-internal' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'kube-system/default-http-backend' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'kube-system/kube-dns' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'kube-system/kubernetes-dashboard' does not have any ingress."
time="2017-02-22T21:34:00Z" level=warning msg="[Service] The load balancer of service 'kube-system/tiller-deploy' does not have any ingress."
time="2017-02-22T21:34:01Z" level=debug msg="Getting a page of ALBs of length: 0"
time="2017-02-22T21:34:01Z" level=debug msg="Getting a page of ELBs of length: 4"
time="2017-02-22T21:34:01Z" level=error msg="Canonical Zone ID for endpoint: is not found"
time="2017-02-22T21:34:02Z" level=warning msg="Hosted zone for endpoint: annotated-nginx.prod.onzo.cloud. is not found. Skipping record..."
time="2017-02-22T21:34:02Z" level=debug msg="Getting a list of AWS RRS of length: 16"
time="2017-02-22T21:34:02Z" level=debug msg="Records to be upserted: [{\n AliasTarget: {\n DNSName: "afc504560f9aa21e69d410a82ab2f03e-1278601154.eu-west-1.elb.amazonaws.com.",\n EvaluateTargetHealth: true,\n HostedZoneId: "Z32OAAAAAAAAAA"\n },\n Name: "infra-infra-ingress-ingress.prodb.onzo.cloud.",\n Type: "A"\n} {\n Name: "infra-infra-ingress-ingress.prodb.onzo.cloud.",\n ResourceRecords: [{\n Value: "\"mate:my-cluster\""\n }],\n TTL: 300,\n Type: "TXT"\n}]"
time="2017-02-22T21:34:02Z" level=debug msg="Records to be deleted: []"
time="2017-02-22T21:34:02Z" level=debug msg="[Synchronize] Sleeping for 1m0s..."

@onzo-operry onzo-operry changed the title msg="Canonical Zone ID for endpoint: is not found" Canonical Zone ID for endpoint: is not found Feb 22, 2017
@ideahitme
Contributor

ideahitme commented Feb 22, 2017

@onzo-operry please specify the version of mate you are running - nevermind, I see you are running 0.5.1; please try the master branch. The first thing I would suggest is to run the latest master code without --sync-only enabled and check if the crash persists. And please format your original question :)
As for why the record is not created, do you have a prod.onzo.cloud hosted zone in your AWS account?
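
For reference, one quick way to check whether that hosted zone exists (a sketch using the AWS CLI, assuming it is configured against the same AWS account; adjust the domain names as needed):

aws route53 list-hosted-zones-by-name --dns-name prod.onzo.cloud --max-items 1
aws route53 list-hosted-zones-by-name --dns-name prodb.onzo.cloud --max-items 1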

@ideahitme
Contributor

you should be getting two RRS created:

  1. annotated-nginx.prod.onzo.cloud - as specified in your service
  2. a1.prodb.onzo.cloud - as specified in your ingress rule hosts

So, to double-confirm: neither of these two is created?
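
One way to double-check from the command line whether either record set exists (a sketch; <ZONE_ID> is a placeholder for the hosted zone ID, and you would repeat the query per zone/name as appropriate):

aws route53 list-resource-record-sets --hosted-zone-id <ZONE_ID> \
  --query "ResourceRecordSets[?Name=='a1.prodb.onzo.cloud.' || Name=='annotated-nginx.prod.onzo.cloud.']"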

@onzo-operry
Author

Sorry about the formatting, it's been a long day...
I will try to compile off master tomorrow, but I can confirm that:

  1. The prodb.onzo.cloud zone is created. Some of my experiments with mate have created records if I decorate the service YAML with annotations.
  2. With the configuration above, I do not get any records created before it bombs out.

@onzo-operry
Author

onzo-operry commented Feb 22, 2017

So running locally from
commit 4a5b4eb
Merge: 4f86371 7dc09b2
Date: Tue Feb 14 11:12:17 2017 +0100

Merge pull request #84 from linki/annotations

make mate play well with other ext dns controllers

mate --producer=kubernetes --kubernetes-format={{.Namespace}}-{{.Name}}.prodb.onzo.cloud --consumer=aws --aws-record-group-id=my-cluster --debug --kubernetes-server=http://127.0.0.1:8001
INFO[0000] [AWS] Listening for events...                
DEBU[0000] [Synchronize] Sleeping for 1m0s...           
INFO[0000] ADDED: kube-system/default-http-backend      
WARN[0000] [Service] The load balancer of service 'kube-system/default-http-backend' does not have any ingress. 
INFO[0000] ADDED: default/shop-svc                      
WARN[0000] [Service] The load balancer of service 'default/shop-svc' does not have any ingress. 
INFO[0000] ADDED: infra/idb-1-influxdb                  
WARN[0000] [Service] The load balancer of service 'infra/idb-1-influxdb' does not have any ingress. 
INFO[0000] ADDED: kube-system/tiller-deploy             
WARN[0000] [Service] The load balancer of service 'kube-system/tiller-deploy' does not have any ingress. 
INFO[0000] ADDED: infra/default-http-backend            
WARN[0000] [Service] The load balancer of service 'infra/default-http-backend' does not have any ingress. 
INFO[0000] ADDED: default/kubernetes                    
WARN[0000] [Service] The load balancer of service 'default/kubernetes' does not have any ingress. 
INFO[0000] ADDED: kube-system/kubernetes-dashboard      
WARN[0000] [Service] The load balancer of service 'kube-system/kubernetes-dashboard' does not have any ingress. 
INFO[0000] ADDED: infra/elasticsearch-discovery         
WARN[0000] [Service] The load balancer of service 'infra/elasticsearch-discovery' does not have any ingress. 
INFO[0000] ADDED: infra/logstash-internal               
WARN[0000] [Service] The load balancer of service 'infra/logstash-internal' does not have any ingress. 
INFO[0000] ADDED: infra/kibana-service                  
WARN[0000] [Service] The load balancer of service 'infra/kibana-service' does not have any ingress. 
INFO[0000] ADDED: kube-system/kube-dns                  
WARN[0000] [Service] The load balancer of service 'kube-system/kube-dns' does not have any ingress. 
INFO[0000] ADDED: default/ingress                       
INFO[0000] [AWS] Processing (a1.prodb.onzo.cloud., 10.55.53.140, )
 
INFO[0000] ADDED: infra/elasticsearch-internal          
WARN[0000] [Service] The load balancer of service 'infra/elasticsearch-internal' does not have any ingress. 
INFO[0000] ADDED: infra/infra-ingress-ingress           
DEBU[0001] Getting a page of ALBs of length: 0          
DEBU[0001] Getting a page of ELBs of length: 4          
ERRO[0001] Canonical Zone ID for endpoint:  is not found
INFO[0001] [AWS] Processing (infra-infra-ingress-ingress.prodb.onzo.cloud., , afc504560f92911e69d4xxxxxxxxxxx-11111111111.eu-west-1.elb.amazonaws.com)
 
ERRO[0001] Failed to process endpoint. Alias could not be constructed for: a1.prodb.onzo.cloud.:. 
INFO[0001] ADDED: default/ingress-service               
DEBU[0001] Getting a page of ALBs of length: 0          
DEBU[0001] Getting a page of ELBs of length: 4          
INFO[0001] [AWS] Processing (default-ingress-service.prodb.onzo.cloud., , a672c7c16f93c11e6a46exxxxxxxxx-111111111111.eu-west-1.elb.amazonaws.com)
 
DEBU[0001] Getting a page of ALBs of length: 0          
DEBU[0001] Getting a page of ELBs of length: 4          
^CINFO[0018] Shutdown signal received, exiting...         
INFO[0018] [Ingress] Exited monitoring loop.            
INFO[0018] [Synchronize] Exited synchronization loop.   
INFO[0018] [Kubernetes] Exited monitoring loop.         
INFO[0018] [AWS] Exited consuming loop.                 
INFO[0018] [Service] Exited monitoring loop.            
INFO[0018] [Noop] Exited monitoring loop.               

This has created 2 A records and 2 TXT records:

infra-infra-ingress-ingress.prodb.onzo.cloud
default-ingress-service.prodb.onzo.cloud

@ideahitme
Contributor

@onzo-operry annotated-nginx.prod.onzo.cloud will only be created if you also have a prod.onzo.cloud hosted zone (note the missing b in prod).

I will create and deploy a new release which should fix the crashing problem. I believe it is already fixed in the master branch, but was not released for some reason :(

@onzo-operry
Author

onzo-operry commented Feb 22, 2017

So, the annotation was commented out for the above run. I put it back in and applied the YAML with the correct "prodb", and have pasted the output below (this is running off master).

annotated-nginx.prodb.onzo.cloud has been created (A & TXT records),

but a1.prodb.onzo.cloud has not.

Question: do the ingress services need to be annotated for this to work?

INFO[0000] [AWS] Listening for events...                
DEBU[0000] [Synchronize] Sleeping for 1m0s...           
INFO[0000] ADDED: kube-system/tiller-deploy             
WARN[0000] [Service] The load balancer of service 'kube-system/tiller-deploy' does not have any ingress. 
INFO[0000] ADDED: infra/elasticsearch-discovery         
WARN[0000] [Service] The load balancer of service 'infra/elasticsearch-discovery' does not have any ingress. 
INFO[0000] ADDED: infra/infra-ingress-ingress           
INFO[0000] ADDED: default/ingress-service               
INFO[0000] [AWS] Processing (infra-infra-ingress-ingress.prodb.onzo.cloud., , afc504560f92911e69d410a82-1111111111.eu-west-1.elb.amazonaws.com)
 
INFO[0000] ADDED: default/ingress                       
DEBU[0001] Getting a page of ALBs of length: 0          
DEBU[0001] Getting a page of ELBs of length: 4          
WARN[0001] Record [name=infra-infra-ingress-ingress.prodb.onzo.cloud.] could not be created, another record with same name already exists 
INFO[0001] [AWS] Processing (annotated-nginx.prodb.onzo.cloud, , a672c7c16f93c11e6a46e-1111111.eu-west-1.elb.amazonaws.com)
 
INFO[0001] ADDED: default/shop-svc                      
WARN[0001] [Service] The load balancer of service 'default/shop-svc' does not have any ingress. 
INFO[0001] ADDED: kube-system/kube-dns                  
WARN[0001] [Service] The load balancer of service 'kube-system/kube-dns' does not have any ingress. 
INFO[0001] ADDED: kube-system/kubernetes-dashboard      
WARN[0001] [Service] The load balancer of service 'kube-system/kubernetes-dashboard' does not have any ingress. 
INFO[0001] ADDED: default/kubernetes                    
WARN[0001] [Service] The load balancer of service 'default/kubernetes' does not have any ingress. 
INFO[0001] ADDED: kube-system/default-http-backend      
WARN[0001] [Service] The load balancer of service 'kube-system/default-http-backend' does not have any ingress. 
INFO[0001] ADDED: infra/idb-1-influxdb                  
WARN[0001] [Service] The load balancer of service 'infra/idb-1-influxdb' does not have any ingress. 
INFO[0001] ADDED: infra/kibana-service                  
WARN[0001] [Service] The load balancer of service 'infra/kibana-service' does not have any ingress. 
INFO[0001] ADDED: infra/default-http-backend            
WARN[0001] [Service] The load balancer of service 'infra/default-http-backend' does not have any ingress. 
INFO[0001] ADDED: infra/elasticsearch-internal          
WARN[0001] [Service] The load balancer of service 'infra/elasticsearch-internal' does not have any ingress. 
INFO[0001] ADDED: infra/logstash-internal               
WARN[0001] [Service] The load balancer of service 'infra/logstash-internal' does not have any ingress. 
DEBU[0002] Getting a page of ALBs of length: 0          
DEBU[0002] Getting a page of ELBs of length: 4          
INFO[0002] [AWS] Processing (a1.prodb.onzo.cloud., 10.55.53.140, )
 
DEBU[0002] Getting a page of ALBs of length: 0          
DEBU[0002] Getting a page of ELBs of length: 4          
ERRO[0002] Canonical Zone ID for endpoint:  is not found 
ERRO[0002] Failed to process endpoint. Alias could not be constructed for: a1.prodb.onzo.cloud.:. 

INFO[0060] [Synchronize] Synchronizing DNS entries...   
INFO[0060] ADDED: infra/logstash-internal               
WARN[0060] [Service] The load balancer of service 'infra/logstash-internal' does not have any ingress. 
INFO[0060] ADDED: infra/infra-ingress-ingress           
INFO[0060] [AWS] Processing (infra-infra-ingress-ingress.prodb.onzo.cloud., , afc504560f92911e69d410e-1111111111.eu-west-1.elb.amazonaws.com)
 
INFO[0060] ADDED: kube-system/kube-dns                  
WARN[0060] [Service] The load balancer of service 'kube-system/kube-dns' does not have any ingress. 
INFO[0060] ADDED: infra/kibana-service                  
WARN[0060] [Service] The load balancer of service 'infra/kibana-service' does not have any ingress. 
INFO[0060] ADDED: default/ingress-service               
WARN[0060] [Service] The load balancer of service 'default/kubernetes' does not have any ingress. 
WARN[0060] [Service] The load balancer of service 'default/shop-svc' does not have any ingress. 
WARN[0060] [Service] The load balancer of service 'infra/default-http-backend' does not have any ingress. 
WARN[0060] [Service] The load balancer of service 'infra/elasticsearch-discovery' does not have any ingress. 
WARN[0060] [Service] The load balancer of service 'infra/elasticsearch-internal' does not have any ingress. 
WARN[0060] [Service] The load balancer of service 'infra/idb-1-influxdb' does not have any ingress. 
WARN[0060] [Service] The load balancer of service 'infra/kibana-service' does not have any ingress. 
WARN[0060] [Service] The load balancer of service 'infra/logstash-internal' does not have any ingress. 
WARN[0060] [Service] The load balancer of service 'kube-system/default-http-backend' does not have any ingress. 
WARN[0060] [Service] The load balancer of service 'kube-system/kube-dns' does not have any ingress. 
WARN[0060] [Service] The load balancer of service 'kube-system/kubernetes-dashboard' does not have any ingress. 
WARN[0060] [Service] The load balancer of service 'kube-system/tiller-deploy' does not have any ingress. 
INFO[0060] ADDED: default/ingress                       
DEBU[0060] Getting a page of ALBs of length: 0          
DEBU[0060] Getting a page of ELBs of length: 4          
ERRO[0060] Canonical Zone ID for endpoint:  is not found 
DEBU[0061] Getting a page of ALBs of length: 0          
DEBU[0061] Getting a page of ELBs of length: 4          
WARN[0061] Record [name=infra-infra-ingress-ingress.prodb.onzo.cloud.] could not be created, another record with same name already exists 
INFO[0061] [AWS] Processing (annotated-nginx.prodb.onzo.cloud, , a672c7c16f93c11e6a4-1111111111.eu-west-1.elb.amazonaws.com)
 
INFO[0061] ADDED: default/kubernetes                    
WARN[0061] [Service] The load balancer of service 'default/kubernetes' does not have any ingress. 
INFO[0061] ADDED: kube-system/tiller-deploy             
WARN[0061] [Service] The load balancer of service 'kube-system/tiller-deploy' does not have any ingress. 
INFO[0061] ADDED: infra/elasticsearch-internal          
WARN[0061] [Service] The load balancer of service 'infra/elasticsearch-internal' does not have any ingress. 
INFO[0061] ADDED: infra/default-http-backend            
WARN[0061] [Service] The load balancer of service 'infra/default-http-backend' does not have any ingress. 
INFO[0061] ADDED: default/shop-svc                      
WARN[0061] [Service] The load balancer of service 'default/shop-svc' does not have any ingress. 
INFO[0061] ADDED: kube-system/kubernetes-dashboard      
WARN[0061] [Service] The load balancer of service 'kube-system/kubernetes-dashboard' does not have any ingress. 
INFO[0061] ADDED: infra/elasticsearch-discovery         
WARN[0061] [Service] The load balancer of service 'infra/elasticsearch-discovery' does not have any ingress. 
INFO[0061] ADDED: kube-system/default-http-backend      
WARN[0061] [Service] The load balancer of service 'kube-system/default-http-backend' does not have any ingress. 
INFO[0061] ADDED: infra/idb-1-influxdb                  
WARN[0061] [Service] The load balancer of service 'infra/idb-1-influxdb' does not have any ingress. 
DEBU[0061] Getting a list of AWS RRS of length: 22      
DEBU[0061] Records to be upserted:  []                  
DEBU[0061] Records to be deleted:  [{
  AliasTarget: {
    DNSName: "a672c7c16f93c11e6a46e02-11111111111.eu-west-1.elb.amazonaws.com.",
    EvaluateTargetHealth: true,
    HostedZoneId: "Z32O12XQLXXXXX"
  },
  Name: "default-ingress-service.prodb.onzo.cloud.",
  Type: "A"
} {
  Name: "default-ingress-service.prodb.onzo.cloud.",
  ResourceRecords: [{
      Value: "\"mate:my-cluster\""
    }],
  TTL: 300,
  Type: "TXT"
}] 
DEBU[0061] Getting a page of ALBs of length: 0          
DEBU[0061] Getting a page of ELBs of length: 4          
DEBU[0061] [Synchronize] Sleeping for 1m0s...           
WARN[0061] Record [name=annotated-nginx.prodb.onzo.cloud] could not be created, another record with same name already exists 
INFO[0061] [AWS] Processing (a1.prodb.onzo.cloud., 10.55.53.140, )
 
DEBU[0061] Getting a page of ALBs of length: 0          
DEBU[0061] Getting a page of ELBs of length: 4          
ERRO[0061] Canonical Zone ID for endpoint:  is not found 
ERRO[0061] Failed to process endpoint. Alias could not be constructed for: a1.prodb.onzo.cloud.:. 

@ideahitme
Contributor

ideahitme commented Feb 22, 2017

I reckon the reason is that the ELB address is not reported on the ingress resource. Could you please try:

kubectl get -o json ingress ingress

and paste the output here. I am not really familiar with the nginx-ingress-controller, but I guess it reports back the pod cluster IP, which obviously cannot be used as the RRS target (and obviously has no associated canonical hosted zone :P).
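
If you only want the relevant part, something like the following should print what the controller reports in the ingress status (assuming your kubectl version supports jsonpath output):

kubectl get ingress ingress -o jsonpath='{.status.loadBalancer.ingress}'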

@onzo-operry
Author

So, adding a little debugging to the code, I get this for the value of the variable ep, which I guess mirrors the JSON below:

DEBU[0121] &pkg.Endpoint{DNSName:"a1.prodb.onzo.cloud.", IP:"10.55.53.140", Hostname:""}
{
    "apiVersion": "extensions/v1beta1",
    "kind": "Ingress",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"kind\":\"Ingress\",\"apiVersion\":\"extensions/v1beta1\",\"metadata\":{\"name\":\"ingress\",\"creationTimestamp\":null},\"spec\":{\"rules\":[{\"host\":\"a1.prodb.onzo.cloud\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"shop-svc\",\"servicePort\":80}}]}}]},\"status\":{\"loadBalancer\":{}}}"
        },
        "creationTimestamp": "2017-02-22T20:20:55Z",
        "generation": 2,
        "name": "ingress",
        "namespace": "default",
        "resourceVersion": "2251915",
        "selfLink": "/apis/extensions/v1beta1/namespaces/default/ingresses/ingress",
        "uid": "6a5f769d-f93c-11e6-9d41-0a82ab2f03e5"
    },
    "spec": {
        "rules": [
            {
                "host": "a1.prodb.onzo.cloud",
                "http": {
                    "paths": [
                        {
                            "backend": {
                                "serviceName": "shop-svc",
                                "servicePort": 80
                            }
                        }
                    ]
                }
            }
        ]
    },
    "status": {
        "loadBalancer": {
            "ingress": [
                {
                    "ip": "10.55.53.140"
                }
            ]
        }
    }
}

@onzo-operry
Author

Reading #77 much earlier today made me assume that the nginx ingress would work, and I am on the same version.

@ideahitme
Contributor

ideahitme commented Feb 22, 2017

** "ip": "10.55.53.140" **

Yes, it is a compatibility issue with nginx-ingress-controller. Unfortunately this information is not enough to set up an Alias A record on Route53 :(


There is an alternative setup which we are using to enable ingress on our clusters; we were planning to document it here in the next few days. Basically, we use https://github.com/zalando-incubator/kube-ingress-aws-controller to provision SSL-enabled ALBs and populate the ingress resource's status field with the ALB's full DNS address. The ALB points to the internal proxy https://github.com/zalando/skipper (running as a DaemonSet), which routes the traffic within the cluster.

It fits perfectly with Mate and makes the setup super easy. As I said, we will document it properly, but feel free to ask questions if you have any :)

@onzo-operry
Author

OK, thanks man, much appreciate your time. I will take a look at the links above and have a play.

@onzo-operry
Author

onzo-operry commented Feb 23, 2017

So, thinking about it this morning, one of the things might be the fact that we use AWS with private networking, so the nodes don't have public addresses; the IP address above is a private IP of one of the k8s nodes. I was expecting mate to populate the DNS entry for the ingress with the ELB address, or is that not how it works?

@ideahitme
Contributor

ideahitme commented Feb 23, 2017

@onzo-operry this is more related to the way the nginx-controller works, because the address is reported by the nginx-controller. Unfortunately, since we only create Alias records in Route 53, public/private IPs cannot be supported. This is the relevant part from the official Amazon documentation:

An alias resource record set can only point to a CloudFront distribution, an Elastic Beanstalk environment, an ELB load balancer, an Amazon S3 bucket that is configured as a static website, or another resource record set in the same Amazon Route 53 hosted zone in which you're creating the alias resource record set.
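
To illustrate the constraint (a hedged sketch, not mate's actual API calls; <ZONE_ID>, <ELB_CANONICAL_ZONE_ID> and the ELB DNS name below are placeholders): an Alias record has to reference one of the resources above via an AliasTarget and the target's canonical hosted zone ID, while a plain (non-alias) A record can carry an arbitrary IP.

# Alias A record: the target must be an AWS resource (here an ELB), identified
# by its DNS name plus the ELB's canonical hosted zone ID.
aws route53 change-resource-record-sets --hosted-zone-id <ZONE_ID> --change-batch '{
  "Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
    "Name": "a1.prodb.onzo.cloud.", "Type": "A",
    "AliasTarget": {"HostedZoneId": "<ELB_CANONICAL_ZONE_ID>",
                    "DNSName": "example-123456.eu-west-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": true}}}]}'

# Plain A record: can point at any IP, including a private node IP, although
# such an address is of course not reachable from outside the VPC.
aws route53 change-resource-record-sets --hosted-zone-id <ZONE_ID> --change-batch '{
  "Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
    "Name": "a1.prodb.onzo.cloud.", "Type": "A", "TTL": 300,
    "ResourceRecords": [{"Value": "10.55.53.140"}]}}]}'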

@ideahitme
Contributor

@onzo-operry could you please give the latest release, v0.6.0, a try and let us know if it helps with the problem. It should create an A record pointing to the IP address specified in the ingress resource :)

@linki
Owner

linki commented Feb 23, 2017

v0.6.0 makes mate compatible with plain A records pointing to IPs on AWS. If the nginx-controller puts the private IP of the node into the Ingress status, then it will still not be accessible from the outside. I'm afraid this really is where mate's responsibility ends.

@linki
Owner

linki commented Feb 23, 2017

I believe the nginx-controller was designed to be used in circumstances where you don't have access to a cloud load balancer. Therefore, using the nginx-controller and relying on an ELB to route traffic to your nodes kind of defeats the purpose. However, since no Amazon (A|E)LB supports hostname-based routing, this is a valid setup.
