
Update & Demo
- Cleanup of outdated infos
- Added Vagrant/kubespray based cluster demo setup
- Updated kubernetes deployment files
- Updated Readme for Quobyte 1.4
Matthias Grawinkel committed Sep 14, 2017
1 parent e409264 commit 3e47053
Showing 35 changed files with 1,356 additions and 686 deletions.
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,4 +1,4 @@
Copyright (c) 2014, Quobyte Inc.
Copyright (c) 2017, Quobyte Inc.
All rights reserved.

Redistribution and use in source and binary forms, with or without
Expand Down
15 changes: 3 additions & 12 deletions README.md
@@ -1,19 +1,10 @@
Scripts and tools for deploying Quobyte installations
Scripts and tools for deploying Quobyte on Kubernetes
=====================================================

Currently contains:
* **Ansible** script for AWS deployments
* **demo** Vagrant file for local demo cluster and kubespray based k8s cluster bootstrap
* device initialization tools: **qbootstrap** and **qmkdev**
* **Kubernetes** specification files for Quobyte on Kubernetes

For **Puppet**, please refer to SysEleven's recipes:
https://github.com/syseleven/puppet-quobyte

Arnold from Inovex maintains a **Saltstack** formula here:
https://github.com/bechtoldt/saltstack-quobyte-formula

For **Mesos**, please check out our Mesos framework:
https://github.com/quobyte/mesos-framework
* **deploy** Specification files for Quobyte on Kubernetes

For automated **Kubernetes** deployments, check out the Quobyte Deployer (community tool):
https://github.com/johscheuer/quobyte-kubernetes-operator
8 changes: 0 additions & 8 deletions ansible/aws/ansible.cfg

This file was deleted.

2 changes: 0 additions & 2 deletions ansible/aws/host.cfg

This file was deleted.

35 changes: 0 additions & 35 deletions ansible/aws/hosts

This file was deleted.

111 changes: 0 additions & 111 deletions ansible/aws/playbook.yml

This file was deleted.

60 changes: 60 additions & 0 deletions demo/README.md
@@ -0,0 +1,60 @@
## Setting up Test environment with Vagrant

For a fast demo setup, we use a Vagrant-based 4-machine cluster, where each server has 3 additional disk drives attached.

```bash
$ cd examples/vagrant
$ vagrant up
$ vagrant ssh-config
```

We use kubespray to bootstrap and set up the Kubernetes cluster.
We provide an inventory file for the newly created cluster in `demo/kubespray/inventory/vagrant`.
Please make sure that the *ansible_port* and *ansible_ssh_private_key_file* values in the inventory match the output of `vagrant ssh-config`.
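To cross-check the two, you can compare the connection details Vagrant assigned with the values in the inventory file (a minimal sketch; it assumes you run the commands from the repository root):

```bash
# Show the host, port, and key file Vagrant assigned to each machine
$ (cd examples/vagrant && vagrant ssh-config) | grep -E 'HostName|Port|IdentityFile'
# Show the corresponding values expected by the kubespray inventory
$ grep -E 'ansible_port|ansible_ssh_private_key_file' demo/kubespray/inventory/vagrant
```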


If the 4 machines are running and you can connect to them like this:
```bash
$ cd examples/vagrant
$ vagrant ssh qb1
```
you are ready to run kubespray:

```bash
$ cd examples/kubespray
$ ./clone_kubespray.sh
$ ./ansible_cluster.sh
```

Make sure that `kubectl` [is installed](https://kubernetes.io/docs/tasks/tools/install-kubectl/ "Install and Set Up kubectl") on your machine.
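
A quick way to verify the client binary is available (this only checks the local `kubectl`, not the cluster connection):

```bash
$ kubectl version --client
```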

To configure and use your newly created cluster, you can run:

```bash
$ mkdir -p $HOME/.kube/certs/qb
$ cd examples/vagrant/
$ vagrant ssh qb1 -- -t sudo cat /etc/kubernetes/ssl/admin-qb1.pem > $HOME/.kube/certs/qb/qb-admin.pem
$ vagrant ssh qb1 -- -t sudo cat /etc/kubernetes/ssl/admin-qb1-key.pem > $HOME/.kube/certs/qb/qb-admin-key.pem
$ vagrant ssh qb1 -- -t sudo cat /etc/kubernetes/ssl/ca.pem > $HOME/.kube/certs/qb/qb-ca.pem

$ kubectl config set-credentials qb-admin \
--certificate-authority=$HOME/.kube/certs/qb/qb-ca.pem \
--client-key=$HOME/.kube/certs/qb/qb-admin-key.pem \
--client-certificate=$HOME/.kube/certs/qb/qb-admin.pem
$ kubectl config set-cluster qb --server=https://127.0.0.1:6443 \
--certificate-authority=$HOME/.kube/certs/qb/qb-ca.pem

$ kubectl config set-context qb --cluster=qb --user=qb-admin
$ kubectl config use-context qb
```

Your cluster should be available now:

```bash
$ kubectl get nodes
NAME STATUS AGE VERSION
qb1 Ready 5m v1.7.3+coreos.0
qb2 Ready 5m v1.7.3+coreos.0
qb3 Ready 5m v1.7.3+coreos.0
qb4 Ready 5m v1.7.3+coreos.0
```
1 change: 1 addition & 0 deletions demo/kubespray/.gitignore
@@ -0,0 +1 @@
kubespray
3 changes: 3 additions & 0 deletions demo/kubespray/ansible_cluster.sh
@@ -0,0 +1,3 @@
#!/bin/bash

ansible-playbook kubespray/cluster.yml -i inventory/vagrant -b -v
3 changes: 3 additions & 0 deletions demo/kubespray/ansible_reset.sh
@@ -0,0 +1,3 @@
#!/bin/bash

ansible-playbook kubespray/reset.yml -i inventory/vagrant -b -v
3 changes: 3 additions & 0 deletions demo/kubespray/ansible_upgrade.sh
@@ -0,0 +1,3 @@
#!/bin/bash

ansible-playbook kubespray/upgrade-cluster.yml -i inventory/vagrant -b -v
3 changes: 3 additions & 0 deletions demo/kubespray/clone_kubespray.sh
@@ -0,0 +1,3 @@
#!/bin/bash
git clone https://github.com/kubernetes-incubator/kubespray.git

90 changes: 90 additions & 0 deletions demo/kubespray/inventory/group_vars/all.yml
@@ -0,0 +1,90 @@
## The access_ip variable is used to define how other nodes should access
## the node. This is used in flannel to allow other flannel nodes to see
## this node for example. The access_ip is really useful in AWS and Google
## environments where the nodes are accessed remotely by the "public" ip,
## but don't know about that address themselves.
#access_ip: 1.1.1.1

### LOADBALANCING AND ACCESS MODES
## Enable multiaccess to configure etcd clients to access all of the etcd members directly
## as the "http://hostX:port, http://hostY:port, ..." and ignore the proxy loadbalancers.
## This may be the case if clients support and loadbalance multiple etcd servers natively.
#etcd_multiaccess: true

## External LB example config
## apiserver_loadbalancer_domain_name: "elb.some.domain"
#loadbalancer_apiserver:
# address: 1.2.3.4
# port: 1234

## Internal loadbalancers for apiservers
loadbalancer_apiserver_localhost: true

## Local loadbalancer should use this port instead, if defined.
## Defaults to kube_apiserver_port (6443)
#nginx_kube_apiserver_port: 8443

### OTHER OPTIONAL VARIABLES
## For some things, kubelet needs to load kernel modules. For example, dynamic kernel services are needed
## for mounting persistent volumes into containers. These may not be loaded by preinstall kubernetes
## processes. For example, ceph and rbd backed volumes. Set to true to allow kubelet to load kernel
## modules.
# kubelet_load_modules: false

## Internal network total size. This is the prefix of the
## entire network. Must be unused in your environment.
#kube_network_prefix: 18

## With calico it is possible to distribute routes with border routers of the datacenter.
## Warning : enabling router peering will disable calico's default behavior ('node mesh').
## The subnets of each node will be distributed by the datacenter router
#peer_with_router: false

## Upstream dns servers used by dnsmasq
upstream_dns_servers:
# - 10.10.1.241
# - 10.10.1.242
- 8.8.4.4

## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', or 'vsphere'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using nova-client before starting the playbook.
#cloud_provider:

## When azure is used, you need to also set the following variables.
## see docs/azure.md for details on how to get these values
#azure_tenant_id:
#azure_subscription_id:
#azure_aad_client_id:
#azure_aad_client_secret:
#azure_resource_group:
#azure_location:
#azure_subnet_name:
#azure_security_group_name:
#azure_vnet_name:
#azure_route_table_name:

## Set these proxy values in order to update docker daemon to use proxies
#http_proxy: ""
#https_proxy: ""
#no_proxy: ""

## Uncomment this if you want to force overlay/overlay2 as docker storage driver
## Please note that overlay2 is only supported on newer kernels
#docker_storage_options: -s overlay2

## Default packages to install within the cluster, e.g.:
#kpm_packages:
# - name: kube-system/grafana

## Certificate Management
## This setting determines whether certs are generated via scripts or whether a
## cluster of Hashicorp's Vault is started to issue certificates (using etcd
## as a backend). Options are "script" or "vault"
cert_management: script

## Please specify true if you want to perform a kernel upgrade
kernel_upgrade: false
