This repository has been archived by the owner on Nov 30, 2021. It is now read-only.

ref(quickstart): swap out Vagrant with Minikube
kube-up.sh has long been deprecated. This replaces the Vagrant quickstart docs with Minikube, its
successor for local Kubernetes development.
Matthew Fisher committed Mar 16, 2017
1 parent ea27199 commit d5dd212
Showing 14 changed files with 106 additions and 245 deletions.
2 changes: 1 addition & 1 deletion charts/workflow/values.yaml
@@ -193,7 +193,7 @@ router:
service_annotations:
#<example-key>: <example-value>

- # Enable to pin router pod hostPort when using vagrant
+ # Enable to pin router pod hostPort when using minikube
host_port:
enabled: false

8 changes: 4 additions & 4 deletions mkdocs.yml
@@ -21,10 +21,10 @@ pages:
- Boot: quickstart/provider/azure-acs/boot.md
- DNS: quickstart/provider/azure-acs/dns.md
- Install Workflow: quickstart/provider/azure-acs/install-azure-acs.md
- - Vagrant:
- - Boot: quickstart/provider/vagrant/boot.md
- - DNS: quickstart/provider/vagrant/dns.md
- - Install Workflow: quickstart/provider/vagrant/install-vagrant.md
+ - Minikube:
+ - Boot: quickstart/provider/minikube/boot.md
+ - DNS: quickstart/provider/minikube/dns.md
+ - Install Workflow: quickstart/provider/minikube/install-minikube.md
- Deploy Your First App: quickstart/deploy-an-app.md
- Understanding Workflow:
- Concepts: understanding-workflow/concepts.md
2 changes: 1 addition & 1 deletion src/contributing/development-environment.md
@@ -141,7 +141,7 @@ To run a Kubernetes cluster locally or elsewhere to support your development act

To facilitate deploying Docker images containing your changes to your Kubernetes cluster, you will need to make use of a Docker registry. This is a location to which you can push your custom-built images and from which your Kubernetes cluster can retrieve those same images.

- If your development cluster runs locally (in Vagrant, for instance), the most efficient and economical means of achieving this is to run a Docker registry locally _as_ a Docker container.
+ If your development cluster runs locally (in Minikube, for instance), the most efficient and economical means of achieving this is to run a Docker registry locally _as_ a Docker container.
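
As a rough illustration of the idea (this is not the Deis make target referenced below, just the stock Docker registry image with its default port), a throwaway local registry can be started directly with Docker:

```
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
```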

To facilitate this, most Deis components provide a make target to create such a registry:

2 changes: 1 addition & 1 deletion src/managing-workflow/configuring-dns.md
@@ -31,7 +31,7 @@ The `LoadBalancer Ingress` field typically describes an existing domain name or

## Without a Load Balancer

- On some platforms (Vagrant, for instance), a load balancer is not an easy or practical thing to provision. In these cases, one can directly identify the public IP of a Kubernetes node that is hosting a router pod and use that information to configure the local `/etc/hosts` file.
+ On some platforms (Minikube, for instance), a load balancer is not an easy or practical thing to provision. In these cases, one can directly identify the public IP of a Kubernetes node that is hosting a router pod and use that information to configure the local `/etc/hosts` file.

Because wildcard entries do not work in a local `/etc/hosts` file, using this strategy may result in frequent editing of that file to add fully-qualified subdomains of the cluster for each application added to it. For this reason, a more viable option may be to use the [xip.io][xip] service.
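
For illustration only, `/etc/hosts` entries for this approach might look like the following, where `192.168.99.100` stands in for the node IP and `example.com` is a hypothetical cluster domain with one deployed app:

```
192.168.99.100    deis.example.com
192.168.99.100    myapp.example.com
```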

2 changes: 1 addition & 1 deletion src/managing-workflow/configuring-load-balancers.md
@@ -1,6 +1,6 @@
# Configuring Load Balancers

- Depending on what distribution of Kubernetes you use and where you host it, installation of Deis Workflow may automatically provision an external (to Kubernetes) load balancer or similar mechanism for directing inbound traffic from beyond the cluster to the Deis router(s). For example, [kube-aws](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html) and [Google Container Engine](https://cloud.google.com/container-engine/) both do this. On some other platforms-- Vagrant or bare metal, for instance-- this must either be accomplished manually or does not apply at all.
+ Depending on what distribution of Kubernetes you use and where you host it, installation of Deis Workflow may automatically provision an external (to Kubernetes) load balancer or similar mechanism for directing inbound traffic from beyond the cluster to the Deis router(s). For example, [kube-aws](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html) and [Google Container Engine](https://cloud.google.com/container-engine/) both do this. On some other platforms (Minikube or bare metal, for instance), this must either be accomplished manually or does not apply at all.

## Idle connection timeouts

2 changes: 1 addition & 1 deletion src/quickstart/index.md
@@ -23,7 +23,7 @@ Cloud-based options:
* [Amazon Web Services](provider/aws/boot.md): uses Kubernetes upstream `kube-up.sh` to boot a cluster on AWS EC2.
* [Azure Container Service](provider/azure-acs/boot.md): uses Azure Container Service to provision Kubernetes and install Workflow.

- If you would like to test on your local machine follow, our guide for [Vagrant](provider/vagrant/boot.md).
+ If you would like to test on your local machine, follow our guide for [Minikube](provider/minikube/boot.md).

If you have already created a Kubernetes cluster, check out the [system requirements](../installing-workflow/system-requirements.md) and then proceed to [install Deis Workflow on your own Kubernetes cluster](../installing-workflow/index.md).

2 changes: 1 addition & 1 deletion src/quickstart/install-cli-tools.md
@@ -42,7 +42,7 @@ Cloud-based options:
* [Amazon Web Services](provider/aws/boot.md): uses Kubernetes upstream `kube-up.sh` to boot a cluster on AWS EC2.
* [Azure Container Service](provider/azure-acs/boot.md): provides a managed Kubernetes environment.

- If you would like to test on your local machine follow our guide for [Vagrant](provider/vagrant/boot.md).
+ If you would like to test on your local machine, follow our guide for [Minikube](provider/minikube/boot.md).


[helm-install]: https://github.com/kubernetes/helm#install
48 changes: 48 additions & 0 deletions src/quickstart/provider/minikube/boot.md
@@ -0,0 +1,48 @@
# Booting Kubernetes Using Minikube

This guide will walk you through the process of installing a small development
Kubernetes cluster on your local machine using [minikube](https://github.com/kubernetes/minikube).

## Pre-requisites

* OS X
* [xhyve driver](https://github.com/kubernetes/minikube/blob/master/DRIVERS.md#xhyve-driver), [VirtualBox](https://www.virtualbox.org/wiki/Downloads) or [VMware Fusion](https://www.vmware.com/products/fusion) installation
* Linux
* [VirtualBox](https://www.virtualbox.org/wiki/Downloads) or [KVM](http://www.linux-kvm.org/) installation
* Windows
* [Hyper-V](https://github.com/kubernetes/minikube/blob/master/DRIVERS.md#hyperv-driver)
* VT-x/AMD-v virtualization must be enabled in BIOS
* The most recent version of `kubectl`. You can install kubectl by following
  [these steps](https://kubernetes.io/docs/user-guide/prereqs/); a minimal install sketch is also shown after this list.
* Internet connection
* You will need a decent internet connection when running `minikube start` for the first time so that
  Minikube can pull its Docker images. The first start may therefore take some time.
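
For reference, one way to fetch a recent `kubectl` binary on OS X at the time of writing is shown below; the linked Kubernetes docs remain the authoritative instructions, and the download URL may change over time:

```
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/
```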

## Download and Unpack Minikube

See the installation instructions for the
[latest release of minikube](https://github.com/kubernetes/minikube/releases).
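
For example, on OS X the release binary can typically be downloaded and placed on your `PATH` like this (adjust the platform suffix for Linux or Windows, and check the release page for the current URL):

```
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
$ chmod +x minikube
$ sudo mv minikube /usr/local/bin/
```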

## Boot Your First Cluster

We are now ready to boot our first Kubernetes cluster using Minikube!

```
$ minikube start --disk-size=60g --memory=4096
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
```
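
If you installed one of the alternative hypervisors listed in the pre-requisites, you can select it explicitly with the `--vm-driver` flag; for example, to use VirtualBox:

```
$ minikube start --disk-size=60g --memory=4096 --vm-driver=virtualbox
```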

When the cluster comes up, `minikube` automatically configures `kubectl` on your machine with the
appropriate authentication and endpoint information, so you can inspect the cluster right away:

```
$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://192.168.99.100:8443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
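
Optionally, before continuing you can confirm that the single `minikube` node reports a `Ready` status and that the system pods have started:

```
$ kubectl get nodes
$ kubectl get pods --namespace=kube-system
```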

You are now ready to [install Deis Workflow](install-minikube.md).
45 changes: 45 additions & 0 deletions src/quickstart/provider/minikube/dns.md
@@ -0,0 +1,45 @@
## Find Your Load Balancer Address

During installation, Deis Workflow specifies that Kubernetes should provision and attach a load
balancer to the router component. The router component is responsible for routing HTTP and HTTPS
requests from outside the cluster to applications that are managed by Deis Workflow. In cloud
environments, Kubernetes provisions and attaches a load balancer for you. Since we are running in a
local environment, we need to do a little extra work to send requests to the router.

First, determine the IP address allocated to the worker node.

```
$ minikube ip
192.168.99.100
```

## Prepare the Hostname

Now that you have the IP address of your virtual machine, we can use the `nip.io` DNS service to
route arbitrary hostnames to the Deis Workflow edge router. This lets us point the Workflow CLI at
your cluster without having to use your own domain or update DNS!

To verify the Workflow API server and nip.io, construct your hostname by taking the IP address of
your Minikube VM and appending `.nip.io`. For our example above, that hostname would be `192.168.99.100.nip.io`.

nip.io answers with the embedded IP address no matter the hostname:

```
$ host 192.168.99.100.nip.io
192.168.99.100.nip.io has address 192.168.99.100
$ host something-random.192.168.99.100.nip.io
something-random.192.168.99.100.nip.io has address 192.168.99.100
```

By default, any HTTP traffic for the hostname `deis` will be sent to the Workflow API service. To test that everything is connected properly, you may validate connectivity using `curl`:

```
$ curl http://deis.192.168.99.100.nip.io/v2/ && echo
{"detail":"Authentication credentials were not provided."}
```

You should see a failed request because we provided no credentials to the API server.
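
Later, once Workflow is installed, this is the controller hostname you would point the Workflow CLI at; for example (illustrative only, and not expected to work until installation is complete):

```
$ deis register http://deis.192.168.99.100.nip.io
```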

Remember the hostname; we will use it in the next step.

[next: deploy your first app](../../deploy-an-app.md)
@@ -1,4 +1,4 @@
- # Install Deis Workflow on Vagrant
+ # Install Deis Workflow on Minikube

## Check Your Setup

143 changes: 0 additions & 143 deletions src/quickstart/provider/vagrant/boot.md

This file was deleted.
