This repository has been archived by the owner on Sep 13, 2022. It is now read-only.
Commit
- Cleanup of outdated info
- Added Vagrant/kubespray-based cluster demo setup
- Updated Kubernetes deployment files
- Updated README for Quobyte 1.4
Matthias Grawinkel committed Sep 14, 2017
1 parent e409264 · commit 3e47053
Showing 35 changed files with 1,356 additions and 686 deletions.
@@ -1,19 +1,10 @@
-Scripts and tools for deploying Quobyte installations
+Scripts and tools for deploying Quobyte on Kubernetes
 =====================================================
 
 Currently contains:
-* **Ansible** script for AWS deployments
-* device initialization tools: **qbootstrap** and **qmkdev**
-* **Kubernetes** specification files for Quobyte on Kubernetes
+* **demo** Vagrant file for local demo cluster and kubespray based k8s cluster bootstrap
+* **deploy** Specification files for Quobyte on Kubernetes
 
-For **Puppet**, please refer to SysEleven's recipes:
-https://github.com/syseleven/puppet-quobyte
-
-Arnold from Inovex maintains a **Saltstack** formula here:
-https://github.com/bechtoldt/saltstack-quobyte-formula
-
-For **Mesos**, please check out our Mesos framework:
-https://github.com/quobyte/mesos-framework
+For automated **Kubernetes** deployments checkout Quobyte Deployer (community tool):
+https://github.com/johscheuer/quobyte-kubernetes-operator
4 files were deleted in this commit (contents not shown).
@@ -0,0 +1,60 @@
## Setting up a test environment with Vagrant

For a fast demo setup, we use a Vagrant-based four-machine cluster in which each server has three additional disk drives attached.

```bash
$ cd examples/vagrant
$ vagrant up
$ vagrant ssh-config
```
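Before moving on, it can help to confirm that all four boxes actually came up. A minimal check, assuming the machines are named qb1 through qb4 as in the SSH examples further down (device names inside the guest depend on the Vagrant provider and are an assumption here):

```bash
# Every box defined in the Vagrantfile should report "running"
vagrant status

# Optionally verify that the extra disks are visible inside a node
vagrant ssh qb1 -c 'lsblk'
```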

We use kubespray to bootstrap and set up the Kubernetes cluster.
We provide an inventory file for the newly created cluster in `demo/kubespray/inventory/vagrant`.
Please make sure that *ansible_port* and *ansible_ssh_private_key_file* match the output of `vagrant ssh-config`.
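One quick way to cross-check those two values against what Vagrant actually configured (a sketch that simply filters the `vagrant ssh-config` output shown above):

```bash
# Compare the port and private key Vagrant assigned to each box
# with the values in the kubespray inventory file.
cd examples/vagrant
vagrant ssh-config | grep -E 'Host |Port |IdentityFile'
```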

Once the four machines are running and you can connect to them, e.g.:
```bash
$ cd examples/vagrant
$ vagrant ssh qb1
```
you are ready to run kubespray.

```bash
$ cd examples/kubespray
$ ./clone_kubespray
$ ./ansible_cluster.sh
```

Make sure that `kubectl` [is installed](https://kubernetes.io/docs/tasks/tools/install-kubectl/ "Install and Set Up kubectl") on your machine.
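If `kubectl` is not installed yet, the linked documentation covers several installation methods; one common approach on Linux (the download URL follows the pattern from those docs and should be treated as an assumption here) looks like this:

```bash
# Download the latest stable kubectl release and make it executable
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# Verify the client is on the PATH
kubectl version --client
```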

To configure and use your newly created cluster, you can run:

```bash
$ mkdir -p $HOME/.kube/certs/qb
$ cd examples/vagrant/
$ vagrant ssh qb1 -- -t sudo cat /etc/kubernetes/ssl/admin-qb1.pem > $HOME/.kube/certs/qb/qb-admin.pem
$ vagrant ssh qb1 -- -t sudo cat /etc/kubernetes/ssl/admin-qb1-key.pem > $HOME/.kube/certs/qb/qb-admin-key.pem
$ vagrant ssh qb1 -- -t sudo cat /etc/kubernetes/ssl/ca.pem > $HOME/.kube/certs/qb/qb-ca.pem

$ kubectl config set-credentials qb-admin \
    --certificate-authority=$HOME/.kube/certs/qb/qb-ca.pem \
    --client-key=$HOME/.kube/certs/qb/qb-admin-key.pem \
    --client-certificate=$HOME/.kube/certs/qb/qb-admin.pem
$ kubectl config set-cluster qb --server=https://127.0.0.1:6443 \
    --certificate-authority=$HOME/.kube/certs/qb/qb-ca.pem

$ kubectl config set-context qb --cluster=qb --user=qb-admin
$ kubectl config use-context qb
```
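Before querying nodes, it is worth confirming that the new context is active and the API server responds:

```bash
# The qb context should now be marked as the current context
kubectl config get-contexts

# Basic reachability check against the API server configured above
kubectl cluster-info
```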

Your cluster should now be available:

```bash
$ kubectl get nodes
NAME      STATUS    AGE       VERSION
qb1       Ready     5m        v1.7.3+coreos.0
qb2       Ready     5m        v1.7.3+coreos.0
qb3       Ready     5m        v1.7.3+coreos.0
qb4       Ready     5m        v1.7.3+coreos.0
```
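As an additional sanity check, the system components deployed by kubespray should all reach the Running state (exact pod names vary with the kubespray version):

```bash
# DNS, the network plugin and the kube-* control-plane pods live in kube-system
kubectl get pods --namespace=kube-system
```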
@@ -0,0 +1 @@
kubespray
@@ -0,0 +1,3 @@
#!/bin/bash

ansible-playbook kubespray/cluster.yml -i inventory/vagrant -b -v
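The script above is a thin wrapper around `ansible-playbook`, so standard flags still apply; for example, while debugging a single node the run can be restricted (the host name is taken from the Vagrant inventory):

```bash
# Re-run the cluster playbook against one Vagrant box only
ansible-playbook kubespray/cluster.yml -i inventory/vagrant -b -v --limit qb1
```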
@@ -0,0 +1,3 @@
#!/bin/bash

ansible-playbook kubespray/reset.yml -i inventory/vagrant -b -v
@@ -0,0 +1,3 @@
#!/bin/bash

ansible-playbook kubespray/upgrade-cluster.yml -i inventory/vagrant -b -v
@@ -0,0 +1,3 @@
#!/bin/bash
git clone https://github.com/kubernetes-incubator/kubespray.git
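The clone script above tracks kubespray's default branch. For reproducible demo setups you might prefer to pin a release; the tag below is only a placeholder, not a tested version:

```bash
# Clone a specific kubespray release (replace v2.x.x with a real tag)
git clone --branch v2.x.x --depth 1 https://github.com/kubernetes-incubator/kubespray.git
```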
@@ -0,0 +1,90 @@
## The access_ip variable is used to define how other nodes should access
## the node. This is used in flannel to allow other flannel nodes to see
## this node for example. The access_ip is really useful in AWS and Google
## environments where the nodes are accessed remotely by the "public" ip,
## but don't know about that address themselves.
#access_ip: 1.1.1.1

### LOADBALANCING AND ACCESS MODES
## Enable multiaccess to configure etcd clients to access all of the etcd members directly
## as the "http://hostX:port, http://hostY:port, ..." and ignore the proxy loadbalancers.
## This may be the case if clients support and loadbalance multiple etcd servers natively.
#etcd_multiaccess: true

## External LB example config
## apiserver_loadbalancer_domain_name: "elb.some.domain"
#loadbalancer_apiserver:
# address: 1.2.3.4
# port: 1234

## Internal loadbalancers for apiservers
loadbalancer_apiserver_localhost: true

## Local loadbalancer should use this port instead, if defined.
## Defaults to kube_apiserver_port (6443)
#nginx_kube_apiserver_port: 8443

### OTHER OPTIONAL VARIABLES
## For some things, kubelet needs to load kernel modules. For example, dynamic kernel services are needed
## for mounting persistent volumes into containers. These may not be loaded by preinstall kubernetes
## processes. For example, ceph and rbd backed volumes. Set to true to allow kubelet to load kernel
## modules.
# kubelet_load_modules: false

## Internal network total size. This is the prefix of the
## entire network. Must be unused in your environment.
#kube_network_prefix: 18

## With calico it is possible to distribute routes with border routers of the datacenter.
## Warning: enabling router peering will disable calico's default behavior ('node mesh').
## The subnets of each node will be distributed by the datacenter router
#peer_with_router: false

## Upstream dns servers used by dnsmasq
upstream_dns_servers:
# - 10.10.1.241
# - 10.10.1.242
- 8.8.4.4

## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', or 'vsphere'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using nova-client before starting the playbook.
#cloud_provider:

## When azure is used, you need to also set the following variables.
## see docs/azure.md for details on how to get these values
#azure_tenant_id:
#azure_subscription_id:
#azure_aad_client_id:
#azure_aad_client_secret:
#azure_resource_group:
#azure_location:
#azure_subnet_name:
#azure_security_group_name:
#azure_vnet_name:
#azure_route_table_name:

## Set these proxy values in order to update docker daemon to use proxies
#http_proxy: ""
#https_proxy: ""
#no_proxy: ""

## Uncomment this if you want to force overlay/overlay2 as docker storage driver
## Please note that overlay2 is only supported on newer kernels
#docker_storage_options: -s overlay2

## Default packages to install within the cluster, f.e:
#kpm_packages:
# - name: kube-system/grafana

## Certificate Management
## This setting determines whether certs are generated via scripts or whether a
## cluster of Hashicorp's Vault is started to issue certificates (using etcd
## as a backend). Options are "script" or "vault"
cert_management: script

## Please specify true if you want to perform a kernel upgrade
kernel_upgrade: false
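Before running the cluster playbook against an inventory that uses these group variables, a quick ad-hoc connectivity check can save a long failed run (this assumes Ansible is installed locally and the command is run from examples/kubespray):

```bash
# Ad-hoc ping of every host in the Vagrant inventory, with privilege escalation
ansible all -i inventory/vagrant -b -m ping
```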