
nodeSelector ingress-ready: "true" prevents ingress-nginx-controller from being scheduled to any node on a KIND cluster #8874

Closed
auxo86 opened this issue Jul 27, 2022 · 14 comments

@auxo86

auxo86 commented Jul 27, 2022

What happened:

The nodeSelector ingress-ready: "true" causes the ingress-nginx-controller pod to be unschedulable on any node of a KIND cluster.

What you expected to happen:

The NGINX Ingress controller installs successfully.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

1.3.0

Kubernetes version (use kubectl version):

Kubernetes version: 1.24.2

Environment:

Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release):
Install tools:

status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-07-27T01:56:30Z"
    message: '0/2 nodes are available: 2 node(s) didn''t match Pod''s node affinity/selector.
      preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable
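
One way to confirm the cause is to compare the pod's node selector with the labels actually present on the nodes (a hedged sketch; exact output will differ):

kubectl -n ingress-nginx describe pod -l app.kubernetes.io/component=controller | grep -A1 -i node-selectors
kubectl get nodes --show-labels | grep ingress-ready
# on a default kind cluster the second command prints nothing, which is why the pod stays Pending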

How to reproduce this issue:
Start a new kind cluster with Kubernetes 1.24.2, then apply the kind provider manifest:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

Anything else we need to know:

Commenting out the nodeSelector ingress-ready: "true" can work around this issue.
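
For reference, the relevant fragment of the kind provider deploy.yaml looks roughly like this (a sketch, not the verbatim manifest):

# deployment.apps/ingress-nginx-controller, pod template spec (excerpt)
      nodeSelector:
        ingress-ready: "true"      # only nodes carrying this label are eligible
        kubernetes.io/os: linux

A default kind cluster creates its node(s) without the ingress-ready label, which is why the scheduler reports that no node matches the pod's node affinity/selector.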

@auxo86 auxo86 added the kind/bug Categorizes issue or PR as related to a bug. label Jul 27, 2022
@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Jul 27, 2022
@longwuyuan
Contributor

longwuyuan commented Jul 27, 2022

/triage-accepted
/area stabilization

Hi, yes, I was able to reproduce this problem.

% cat << EOF | helm template ingress-nginx charts/ingress-nginx --namespace=ingress-nginx --values - | kubectl apply -n ingress-nginx -f -                                                                                                      
controller:                                                                                                                                                                                                                                     
  config:                                                                                                                                                                                                                                       
    worker-processes: "1"                                                                                                                                                                                                                       
  updateStrategy:                                           
    type: RollingUpdate                                     
    rollingUpdate:                                          
      maxUnavailable: 1                                     
  hostPort:                                                 
    enabled: true                                           
  terminationGracePeriodSeconds: 0                          
  service:                                                  
    type: NodePort                                          
EOF                                                         

serviceaccount/ingress-nginx created                        
configmap/ingress-nginx-controller created                  
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged                                                           
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged                                                    
role.rbac.authorization.k8s.io/ingress-nginx created        
rolebinding.rbac.authorization.k8s.io/ingress-nginx created                                                             
service/ingress-nginx-controller-admission created          
service/ingress-nginx-controller created                    
deployment.apps/ingress-nginx-controller created            
ingressclass.networking.k8s.io/nginx unchanged              
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured                          
serviceaccount/ingress-nginx-admission created              
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged                                                 
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged                                          
role.rbac.authorization.k8s.io/ingress-nginx-admission created                                                          
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created                                                   
job.batch/ingress-nginx-admission-create created            
job.batch/ingress-nginx-admission-patch created             

% k get events -n ingress-nginx -w

LAST SEEN   TYPE     REASON              OBJECT                                MESSAGE
0s          Normal   ScalingReplicaSet   deployment/ingress-nginx-controller   Scaled up replica set ingress-nginx-controller-f86d9d75d to 1
0s          Normal   SuccessfulCreate    job/ingress-nginx-admission-create    Created pod: ingress-nginx-admission-create-p9bjs
0s          Normal   SuccessfulCreate    replicaset/ingress-nginx-controller-f86d9d75d   Created pod: ingress-nginx-controller-f86d9d75d-4x4j6
0s          Normal   Scheduled           pod/ingress-nginx-controller-f86d9d75d-4x4j6    Successfully assigned ingress-nginx/ingress-nginx-controller-f86d9d75d-4x4j6 to kind-control-plane
0s          Normal   SuccessfulCreate    job/ingress-nginx-admission-patch               Created pod: ingress-nginx-admission-patch-4pdvg
0s          Normal   Scheduled           pod/ingress-nginx-admission-create-p9bjs        Successfully assigned ingress-nginx/ingress-nginx-admission-create-p9bjs to kind-control-plane
0s          Normal   Scheduled           pod/ingress-nginx-admission-patch-4pdvg         Successfully assigned ingress-nginx/ingress-nginx-admission-patch-4pdvg to kind-control-plane
0s          Warning   FailedMount         pod/ingress-nginx-controller-f86d9d75d-4x4j6    MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
0s          Normal    Pulled              pod/ingress-nginx-admission-create-p9bjs        Container image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660" already present on machine
0s          Normal    Created             pod/ingress-nginx-admission-create-p9bjs        Created container create
0s          Normal    Started             pod/ingress-nginx-admission-create-p9bjs        Started container create
0s          Normal    Pulled              pod/ingress-nginx-admission-patch-4pdvg         Container image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660" already present on machine
0s          Normal    Created             pod/ingress-nginx-admission-patch-4pdvg         Created container patch
0s          Normal    Started             pod/ingress-nginx-admission-patch-4pdvg         Started container patch
0s          Normal    Pulled              pod/ingress-nginx-controller-f86d9d75d-4x4j6    Container image "registry.k8s.io/ingress-nginx/controller:v1.3.0@sha256:d1707ca76d3b044ab8a28277a2466a02100ee9f58a86af1535a3edf9323ea1b5" already present on machine
0s          Normal    Created             pod/ingress-nginx-controller-f86d9d75d-4x4j6    Created container controller
0s          Normal    Started             pod/ingress-nginx-controller-f86d9d75d-4x4j6    Started container controller
0s          Normal    CREATE              configmap/ingress-nginx-controller              ConfigMap ingress-nginx/ingress-nginx-controller
0s          Normal    RELOAD              pod/ingress-nginx-controller-f86d9d75d-4x4j6    NGINX reload triggered due to a change in configuration
1s          Normal    Completed           job/ingress-nginx-admission-create              Job completed
0s          Normal    Completed           job/ingress-nginx-admission-patch               Job completed

  • Please try the helm chart installation with defaults (a default install command is sketched below this list). It should work.
  • Also, there is this config: https://github.com/kubernetes/ingress-nginx/blob/f0490cbfbf29a7a05caaac29998dde56173ac2bb/build/dev-env.sh#L100
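
A hedged sketch of a default Helm install, along the lines the chart documents (release name and namespace here are just examples):

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace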

@k8s-ci-robot k8s-ci-robot added the area/stabilization Work for increasing stabilization of the ingress-nginx codebase label Jul 27, 2022
@longwuyuan
Contributor

/triage accepted

cc @tao12345666333 @strongjz

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 27, 2022
@longwuyuan
Contributor

And if I remove that nodeSelector, it installs and runs:

% k apply -f deploy/static/provider/kind/deploy.yaml 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
m@mypc [~/Documents/github/longwuyuan/ingress-nginx] issue-8874
% k -n ingress-nginx get po -w
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-p9bjs       0/1     Completed   0          16s
ingress-nginx-admission-patch-4pdvg        0/1     Completed   0          16s
ingress-nginx-controller-f86d9d75d-4x4j6   0/1     Running     0          16s
ingress-nginx-controller-f86d9d75d-4x4j6   1/1     Running     0          21s

% k get events -n ingress-nginx -w

LAST SEEN   TYPE     REASON              OBJECT                                MESSAGE
0s          Normal   ScalingReplicaSet   deployment/ingress-nginx-controller   Scaled up replica set ingress-nginx-controller-f86d9d75d to 1
0s          Normal   SuccessfulCreate    job/ingress-nginx-admission-create    Created pod: ingress-nginx-admission-create-p9bjs
0s          Normal   SuccessfulCreate    replicaset/ingress-nginx-controller-f86d9d75d   Created pod: ingress-nginx-controller-f86d9d75d-4x4j6
0s          Normal   Scheduled           pod/ingress-nginx-controller-f86d9d75d-4x4j6    Successfully assigned ingress-nginx/ingress-nginx-controller-f86d9d75d-4x4j6 to kind-control-plane
0s          Normal   SuccessfulCreate    job/ingress-nginx-admission-patch               Created pod: ingress-nginx-admission-patch-4pdvg
0s          Normal   Scheduled           pod/ingress-nginx-admission-create-p9bjs        Successfully assigned ingress-nginx/ingress-nginx-admission-create-p9bjs to kind-control-plane
0s          Normal   Scheduled           pod/ingress-nginx-admission-patch-4pdvg         Successfully assigned ingress-nginx/ingress-nginx-admission-patch-4pdvg to kind-control-plane
0s          Warning   FailedMount         pod/ingress-nginx-controller-f86d9d75d-4x4j6    MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
0s          Normal    Pulled              pod/ingress-nginx-admission-create-p9bjs        Container image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660" already present on machine
0s          Normal    Created             pod/ingress-nginx-admission-create-p9bjs        Created container create
0s          Normal    Started             pod/ingress-nginx-admission-create-p9bjs        Started container create
0s          Normal    Pulled              pod/ingress-nginx-admission-patch-4pdvg         Container image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660" already present on machine
0s          Normal    Created             pod/ingress-nginx-admission-patch-4pdvg         Created container patch
0s          Normal    Started             pod/ingress-nginx-admission-patch-4pdvg         Started container patch
0s          Normal    Pulled              pod/ingress-nginx-controller-f86d9d75d-4x4j6    Container image "registry.k8s.io/ingress-nginx/controller:v1.3.0@sha256:d1707ca76d3b044ab8a28277a2466a02100ee9f58a86af1535a3edf9323ea1b5" already present on machine
0s          Normal    Created             pod/ingress-nginx-controller-f86d9d75d-4x4j6    Created container controller
0s          Normal    Started             pod/ingress-nginx-controller-f86d9d75d-4x4j6    Started container controller
0s          Normal    CREATE              configmap/ingress-nginx-controller              ConfigMap ingress-nginx/ingress-nginx-controller
0s          Normal    RELOAD              pod/ingress-nginx-controller-f86d9d75d-4x4j6    NGINX reload triggered due to a change in configuration
1s          Normal    Completed           job/ingress-nginx-admission-create              Job completed
0s          Normal    Completed           job/ingress-nginx-admission-patch               Job completed

So we need a PR to remove that nodeSelector from the static manifest for the kind provider. And that is slightly complicated because the manifest is generated via script.

@Volatus, do you have any interest in looking at this? Thanks.

@TomKeur

TomKeur commented Aug 3, 2022

I'm not really sure that removing the nodeSelector label actually solves the issue.
Looking at the docs at https://kind.sigs.k8s.io/docs/user/ingress/#create-cluster:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP

As you can see, they add the extra label, because otherwise no port is forwarded to your Docker container (and you're not able to use the Ingress), so you would just face another problem.

You'll need to set up extra port mappings to your host, so that port 80 (in their example) is exposed.
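
In practice that means creating the cluster with the config above before installing the controller, e.g. (the filename is just an example):

kind create cluster --config kind-ingress.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

With the node labelled ingress-ready=true and ports 80/443 mapped, the kind provider manifest both schedules and is reachable from the host.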

I just hit the same problem and then found this GitHub issue, but since I've got it running now (without removing the label) I thought I'd post an update.

@longwuyuan
Contributor

@TomKeur I checked and yes, you are absolutely right.
While removing the nodeSelector with the value "ingress-ready" from the manifest lets the installation complete and the pod become ready, traffic still does not reach the ingress-controller pod, because the node is actually a container and there is no routing from the container's TCP/IP stack to the pod running inside it.
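
One quick way to see this from the host (assuming the default cluster name, so the control-plane container is called kind-control-plane):

docker port kind-control-plane
# with a default kind config only the API server port is published,
# so nothing forwards host ports 80/443 into the node container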

So now it looks like we need to improve docs.

@auxo86
Author

auxo86 commented Aug 4, 2022

@TomKeur Many Thanks for your help.
@longwuyuan Therefore, I added a separate haproxy container to route my HTTP traffic to all the nodes, and I am using a DaemonSet instead of a Deployment for the ingress controllers.

@longwuyuan
Contributor

@auxo86, that is not related to the ingress-nginx controller.

First aspect ;

  • There is no node with the label "ingress-ready", so installing the yaml from the kind provider fails.
  • Removing that selector solves the installation problem above. You can test that by downloading the yaml and removing the nodeSelector line for ingress-ready (a sketch follows after this list). The install will succeed, with the pod running and the service created as NodePort.
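
A hedged sketch of that test (it assumes the string ingress-ready appears only on that one nodeSelector line of the manifest):

curl -sL https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml \
  | grep -v ingress-ready \
  | kubectl apply -f -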

Second aspect:

  • Even after a successful installation from the yaml, with a NodePort service, you cannot route traffic, so if you create a workload and an ingress for that workload, there will be no http response.
  • This routing problem is solved by configuring kind itself to map the hostPort. The kind docs explain how to configure hostPort.

Running haproxy or any other extra pod, etc., is out of scope for this discussion.

@longwuyuan
Contributor

/remove-kind bug

@k8s-ci-robot k8s-ci-robot added needs-kind Indicates a PR lacks a `kind/foo` label and requires one. and removed kind/bug Categorizes issue or PR as related to a bug. labels Aug 4, 2022
@auxo86
Author

auxo86 commented Aug 5, 2022

@longwuyuan Hmmm...should we close this issue now?

@longwuyuan
Contributor

If your problem is solved then please close the issue, but if you think there is some improvement you can make to the documentation, then kindly help and submit a PR.

@auxo86
Author

auxo86 commented Aug 5, 2022

@longwuyuan Maybe we should add some solution for an existing kind k8s cluster?
However, it is hard to open a port on an existing Docker container.
I think adding a load-balancer container outside the kind k8s cluster might be a nice approach, but as you said, that is really out of scope for this discussion.

@longwuyuan
Contributor

The solution has existed for a very long time. It's documented in the kind documentation, and I already pointed out how this project uses kind with a config file. Check the links and messages above.

The problem here is that there is no easy and obvious documentation in this project explaining how to use a kind cluster with a config file together with the yaml manifest this project creates. The documentation exists on the kind website, and it is expected that users will be aware of that and refer to it, but for new users it may not be obvious.

See this, from build/dev-env.sh:

cat <<EOF | kind create cluster --name ${KIND_CLUSTER_NAME} --image "kindest/node:${K8S_VERSION}" --config=-

I will edit the documentation on this project website to explain this.

@longwuyuan
Contributor

@auxo86, I think it's best that users refer to the kind documentation https://kind.sigs.k8s.io/docs/user/ingress/#ingress-nginx and we don't need to change the docs in this project, because:

  • we don't even list kind as a provider in the deploy docs of this project.
  • kind docs are clear and precise on how to use kind and this controller
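
For completeness, the flow on that kind docs page is roughly: create the cluster with the ingress-ready/extraPortMappings config shown earlier in this thread, then (a sketch; see the kind page for the authoritative steps):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s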

/close

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.

In response to this:

@auxo86, I think it's best that users refer to the kind documentation https://kind.sigs.k8s.io/docs/user/ingress/#ingress-nginx and we don't need to change the docs in this project, because:

  • we don't even list kind as a provider in the deploy docs of this project.
  • kind docs are clear and precise on how to use kind and this controller

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
