
Critical errors when apiserver.ServiceClusterIPRange has been modified #1747

Closed
alexisbg opened this issue Jul 25, 2017 · 5 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@alexisbg

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG

Minikube version (use minikube version): v0.20.0

Environment:

  • OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie)
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): "virtualbox"
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.20.0
  • Install tools:
  • Others: Docker version 17.06.0-ce, build 02c1d87

What happened:

$ minikube start --extra-config=apiserver.ServiceClusterIPRange=10.10.240.0/24
$
$ kubectl get svc
NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.10.240.1   <none>        443/TCP   3m
$
$ kubectl cluster-info dump   ## Extract
The Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.0.0.10": provided IP is not in the valid range
E0725 18:06:59.172788       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.10.240.1:443/api/v1/services?resourceVersion=0: x509: certificate is valid for 192.168.99.100, 10.0.0.1, not 10.10.240.1
E0725 18:06:59.173134       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.10.240.1:443/api/v1/endpoints?resourceVersion=0: x509: certificate is valid for 192.168.99.100, 10.0.0.1, not 10.10.240.1
  • The kube-dns service cannot start.
  • The API server certificates are invalid for the new service IP address (10.10.240.1).
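The first error above is a plain CIDR containment failure: the kube-dns addon ships with a hardcoded clusterIP of 10.0.0.10, which no longer falls inside the overridden 10.10.240.0/24 range, so the apiserver rejects the Service. A minimal Go sketch of that check (the function name is illustrative, not minikube's or the apiserver's actual code):

```go
package main

import (
	"fmt"
	"net"
)

// inServiceRange mimics the apiserver-side validation that produced
// `spec.clusterIP: Invalid value: "10.0.0.10"` above: a Service's
// clusterIP must fall inside --service-cluster-ip-range.
func inServiceRange(clusterIP, cidr string) bool {
	_, subnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false
	}
	return subnet.Contains(net.ParseIP(clusterIP))
}

func main() {
	// kube-dns's hardcoded clusterIP vs. the overridden range:
	fmt.Println(inServiceRange("10.0.0.10", "10.10.240.0/24")) // false
	// ...and vs. the default range it was written for:
	fmt.Println(inServiceRange("10.0.0.10", "10.0.0.0/24")) // true
}
```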

What you expected to happen:
Minikube starts successfully when the apiserver.ServiceClusterIPRange option has been modified.

How to reproduce it (as minimally and precisely as possible):

$ minikube start --extra-config=apiserver.ServiceClusterIPRange=10.10.240.0/24

Anything else we need to know:
No

@alexisbg alexisbg changed the title Critical errors when apiserver.MoServiceClusterIPRange has been modified Critical errors when apiserver.ServiceClusterIPRange has been modified Jul 25, 2017
@r2d4
Contributor

r2d4 commented Jul 25, 2017

Well, after a little digging, I found that this is hardcoded in a few places.

The first is the DNS addon service: https://github.com/kubernetes/minikube/blob/master/deploy/addons/kube-dns/kube-dns-svc.yaml#L27

The second is in getAllIPs, which gathers all the IPs to add to the generated certificate:

ips := []net.IP{net.ParseIP(util.DefaultServiceClusterIP)}

We actually have the DNS IP as a flag in localkube, but it is silently ignored:

DNSIP: net.ParseIP(util.DefaultDNSIP),

The only tricky part, I think, is picking a cluster IP for the DNS service when only a service-cluster-ip-range is passed in. The rest is making sure that everything is propagated in the right way. We might already need to template the DNS addon for #1674
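One way to pick that cluster IP would be the convention kubeadm follows: take the tenth address of the service CIDR, so the default 10.0.0.0/24 range still yields the familiar 10.0.0.10. A hedged Go sketch under that assumption (not minikube's actual code; the function name is made up):

```go
package main

import (
	"fmt"
	"math/big"
	"net"
)

// dnsIPFromServiceCIDR picks a cluster IP for the DNS service as the
// tenth address of the service CIDR, the same convention kubeadm uses.
// IPv4-only sketch for illustration.
func dnsIPFromServiceCIDR(cidr string) (net.IP, error) {
	_, subnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	v4 := subnet.IP.To4()
	if v4 == nil {
		return nil, fmt.Errorf("only IPv4 ranges handled in this sketch")
	}
	// Add 10 to the network base address using big-integer arithmetic.
	sum := new(big.Int).Add(new(big.Int).SetBytes(v4), big.NewInt(10))
	raw := sum.Bytes()
	// Pad back to 4 bytes in case leading bytes were zero.
	buf := make([]byte, 4)
	copy(buf[4-len(raw):], raw)
	return net.IP(buf), nil
}

func main() {
	ip, _ := dnsIPFromServiceCIDR("10.10.240.0/24")
	fmt.Println(ip) // 10.10.240.10
}
```

With the default range, dnsIPFromServiceCIDR("10.0.0.0/24") returns 10.0.0.10, which matches util.DefaultDNSIP, so the existing default behavior would be preserved.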

Just curious, what is your use case for setting this?

@r2d4 r2d4 added kind/bug Categorizes issue or PR as related to a bug. localkube labels Jul 25, 2017
@alexisbg
Author

We are trying to migrate existing services to GKE, but some of them are accessible only through OpenVPN.

Since it is not possible to modify the hosts file on the workstations, we use three predefined IP addresses, which users have already bookmarked in their web browsers.

And we hope that these IP addresses are identical on Minikube and GKE.

Thanks for your help.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 1, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 31, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
