
API Access issues for ALB utilizing AWS EKS #359

Closed
Beastie71 opened this issue Apr 4, 2018 · 7 comments

Comments

@Beastie71

Hi,

I was told to open this issue here. I am trying to set up the alb-ingress-controller in our AWS EKS preview. I've set up a ServiceAccount, given it access, and verified it can access the API endpoint, but I am still getting the following error in the logs for the alb-ingress-controller:

It seems the cluster it is running with Authorization enabled (like RBAC) and there is no permissions for the ingress controller. Please check the configuration

I am attaching the ServiceAccount setup and permissions YAML, the YAML for the test I ran that verified the permissions work, the output of describe for the test pod and the alb-ingress-controller pod, the output of the curl from the test container, and the full output of the error I am seeing in the alb-ingress-controller. Please let me know if there is additional information I can provide.

test.yaml.txt
alb-ingress-controller.yaml.txt
albrbac.yaml.txt
curl.txt
error.txt
test-describe.txt
alb-ingress-controller-describe.txt
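
The verification from the test container presumably amounts to something along these lines, calling the ingress API from inside the pod with the mounted service account token (an illustrative sketch only; the real output is in curl.txt, and the token path and API group used here are assumptions):

# Illustrative only: from inside the test pod, list ingresses through the API server
# using the pod's mounted service account token.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sS \
  --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  https://kubernetes.default.svc/apis/extensions/v1beta1/ingresses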

@christopherhein
Member

I think this is a dupe of #305. There's still no resolution, but there are other notes there to try.

@christopherhein
Member

The one thing I noticed is that in your ServiceAccount you have the name saingress, while in the ClusterRoleBinding you have:

subjects:
  - kind: ServiceAccount
    name: alb-ingress-controller
    namespace: kube-system

The name should be saingress, or flip the ServiceAccount to be alb-ingress-controller.
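
That is, keeping the saingress ServiceAccount, the binding's subjects section would need to read:

subjects:
  - kind: ServiceAccount
    name: saingress
    namespace: kube-system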

@pahud

pahud commented Apr 6, 2018

Similar issue here, and after fixing the RBAC as @christopherhein mentioned above, it works now!

Please make sure to add an inline IAM policy to the nodeInstanceRole granting ELB and EC2 access, like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:*",
                "elasticloadbalancing:*"
            ],
            "Resource": "*"
        }
    ]
}

If your ingress resource has an ACM certificate ARN, you will need to add ACM privileges to the nodeInstanceRole as well.
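
A minimal sketch of such a statement, appended to the Statement array of the same inline policy (the exact set of ACM actions you need may differ; acm:DescribeCertificate and acm:ListCertificates cover certificate lookup):

{
    "Sid": "VisualEditor1",
    "Effect": "Allow",
    "Action": [
        "acm:DescribeCertificate",
        "acm:ListCertificates"
    ],
    "Resource": "*"
}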

albrbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: alb-ingress-controller
  name: alb-ingress-controller
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - configmaps
      - endpoints
      - events
      - ingresses
      - ingresses/status
      - services
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - ""
      - extensions
    resources:
      - nodes
      - pods
      - secrets
      - services
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: alb-ingress-controller
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
  - kind: ServiceAccount
    name: saingress
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: alb-ingress-controller
  name: saingress
  namespace: kube-system

alb-ingress-controller.yaml

# Application Load Balancer (ALB) Ingress Controller Deployment Manifest.
# This manifest details sensible defaults for deploying an ALB Ingress Controller.
# GitHub: https://github.com/coreos/alb-ingress-controller
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alb-ingress-controller
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: alb-ingress-controller
    spec:
      serviceAccountName: saingress
      containers:
      - args:
        - /server
        - --default-backend-service=kube-system/default-http-backend
        env:
        - name: AWS_REGION
          value: us-west-2
        - name: CLUSTER_NAME
          value: mycluster
        - name: AWS_DEBUG
          value: "true"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        # Repository location of the ALB Ingress Controller.
        image: quay.io/coreos/alb-ingress-controller:1.0-alpha.3
        imagePullPolicy: Always
        name: server
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30

ingress-resource.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "webapp-alb-ingress"
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/subnets: 'subnet-xxxxxxxx,subnet-xxxxxxxx'
    alb.ingress.kubernetes.io/security-groups: sg-xxxxxxxx
    alb.ingress.kubernetes.io/certificate-arn: <ACM_CERT_ARN>
  labels:
    app: webapp-service
spec:
  rules:
  - http:
      paths:
      - path: /greeting
        backend:
          serviceName: "webapp-service"
          servicePort: 80
      - path: /
        backend:
          serviceName: "caddy-service"
          servicePort: 80

@Beastie71
Author

Beastie71 commented Apr 6, 2018

So I updated the albrbac.yaml as requested:
❯ kubectl.aws get clusterrolebinding/alb-ingress-controller -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"labels":{"app":"alb-ingress-controller"},"name":"alb-ingress-controller","namespace":""},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"alb-ingress-controller"},"subjects":[{"kind":"ServiceAccount","name":"saingress","namespace":"kube-system"}]}
  creationTimestamp: 2018-04-06T12:52:14Z
  labels:
    app: alb-ingress-controller
  name: alb-ingress-controller
  resourceVersion: "384821"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/alb-ingress-controller
  uid: 547a6203-3999-11e8-bed3-06739ae75a66
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
- kind: ServiceAccount
  name: saingress
  namespace: kube-system

Then I redeployed the alb-ingress-controller:

❯ kubectl.aws describe pod alb-ingress-controller-94f8686f9-vvjk2 -n kube-system
Name:           alb-ingress-controller-94f8686f9-vvjk2
Namespace:      kube-system
Node:           ip-x/x
Start Time:     Fri, 06 Apr 2018 08:54:33 -0400
Labels:         app=alb-ingress-controller
                pod-template-hash=509424295
Annotations:
Status:         Running
IP:             10.x.x.x
Controlled By:  ReplicaSet/alb-ingress-controller-94f8686f9
Containers:
  server:
    Container ID:  docker://c3296ba81d206c34de5090d29161fd1c0721e30ae3bad06b3be83890b1873f72
    Image:         michaell71/alb-ingress-controller:1.0-alpha.3
    Image ID:      docker-pullable://michaell71/alb-ingress-controller@sha256:ef261205dc5d271f199248395eeea2045a557a8b4bc84ebd4fcbac96e1c55fab
    Port:
    Args:
      /server
      --default-backend-service=kube-system/default-http-backend -v 5
      --apiserver-host=https://553F4D2F40D53194F88624A2E669F3B4.sk1.us-west-2.eks.amazonaws.com:443
      --configmap=aws-auth
      --v=5
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Fri, 06 Apr 2018 08:57:43 -0400
      Finished:     Fri, 06 Apr 2018 08:57:43 -0400
    Ready:          False
    Restart Count:  5
    Environment:
      AWS_REGION:               us-west-2
      CLUSTER_NAME:             mrlkubepoc
      AWS_ACCESS_KEY_ID:        ASIAJHPW45A32HF23TBQ
      AWS_SECRET_ACCESS_KEY:    +tgAljHQo3ybAXAdRlLY9KPNNdqdSy1jEHjnlVxB
      AWS_DEBUG:                true
      LOG_LEVEL:                DEBUG
      AWS_MAX_RETRIES:          20
      KUBERNETES_SERVICE_HOST:  553F4D2F40D53194F88624A2E669F3B4.sk1.us-west-2.eks.amazonaws.com
      KUBERNETES_SERVICE_PORT:  443
      POD_NAME:                 alb-ingress-controller-94f8686f9-vvjk2 (v1:metadata.name)
      POD_NAMESPACE:            kube-system (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from saingress-token-94s4x (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  saingress-token-94s4x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  saingress-token-94s4x
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
  Type     Reason                 Age               From               Message
  ----     ------                 ----              ----               -------
  Normal   Scheduled              5m                default-scheduler  Successfully assigned alb-ingress-controller-94f8686f9-vvjk2 to ip-x
  Warning  FailedScheduling       5m                default-scheduler  Binding rejected: Operation cannot be fulfilled on pods/binding "alb-ingress-controller-94f8686f9-vvjk2": pod alb-ingress-controller-94f8686f9-vvjk2 is already assigned to node "ip-x"
  Normal   SuccessfulMountVolume  5m                kubelet, ip-x      MountVolume.SetUp succeeded for volume "saingress-token-94s4x"
  Normal   Created                4m (x4 over 5m)   kubelet, ip-x      Created container
  Normal   Started                4m (x4 over 5m)   kubelet, ip-x      Started container
  Normal   Pulling                3m (x5 over 5m)   kubelet, ip-x      pulling image "michaell71/alb-ingress-controller:1.0-alpha.3"
  Normal   Pulled                 3m (x5 over 5m)   kubelet, ip-x      Successfully pulled image "michaell71/alb-ingress-controller:1.0-alpha.3"
  Warning  BackOff                6s (x23 over 5m)  kubelet, ip-x      Back-off restarting failed container

And I am still seeing the permissions issue:

[root@ip-10-46-206-135 ~]# docker logs aebdef
I0406 12:55:00.641890 1 launch.go:112] &{ALB Ingress Controller 1.0.0 git-00000000 git://github.com/coreos/alb-ingress-controller}
I0406 12:55:00.642101 1 launch.go:282] Creating API client for https://553F4D2F40D53194F88624A2E669F3B4.sk1.us-west-2.eks.amazonaws.com:443
I0406 12:55:00.695502 1 launch.go:295] Running in Kubernetes Cluster version v1.9+ (v1.9.2-eks.1) - git (clean) commit e6c42d312ce5a80461ce10e6543e8fe0346bf065 - platform linux/amd64
F0406 12:55:00.699436 1 launch.go:130] ✖ It seems the cluster it is running with Authorization enabled (like RBAC) and there is no permissions for the ingress controller. Please check the configuration
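
A quick way to confirm whether the ClusterRoleBinding is actually being honored (assuming your kubectl user is allowed to impersonate service accounts) is to ask the API server directly what the saingress account can do, for example:

kubectl auth can-i list ingresses --as=system:serviceaccount:kube-system:saingress
kubectl auth can-i get services --as=system:serviceaccount:kube-system:saingress

Both should answer "yes" if the binding is in effect.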

@jimsmith

jimsmith commented Apr 16, 2018

@Beastie71 I hope those are retired or otherwise invalid AWS credentials you posted!

I followed through @pahud's posts for this, as I have RBAC, and it works for me.

@Beastie71
Author

Expired long ago, only good for an hour.

@bigkraig

Check the examples for the RBAC setup and for how to assign the service account to the deployment. I've tested this against EKS and it works.
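
The crux of that assignment, as a minimal sketch using the names already in this thread: the serviceAccountName in the Deployment's pod spec has to match the ServiceAccount named as a subject of the ClusterRoleBinding.

# Deployment pod spec (fragment)
spec:
  template:
    spec:
      serviceAccountName: saingress
---
# ClusterRoleBinding (fragment)
subjects:
  - kind: ServiceAccount
    name: saingress
    namespace: kube-system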
