OpenShift Provision Demo

OpenShift is inherently complex. Managing OpenShift effectively means managing that complexity.

In this repository you will find a proven approach to managing OpenShift 3 clusters, from cloud infrastructure provisioning through installation to ongoing management, using ansible, terraform, and a common, scalable configuration structure.

We suggest you begin by running through the Google Cloud Platform (GCP) demo setup and then explore pushing out a new autoscaling node image, using the openshift-provision ansible module, and the other exercises described below.

While you may choose to use this repository directly to provision OpenShift clusters, we encourage you to approach it as a collection of examples and design patterns to borrow from when building out your own tooling. All code in this repository is licensed under the GPLv3.

This demo includes cluster configuration, infrastructure provisioning with terraform, cluster installation with ansible, and post-installation cluster configuration, all in a single repository. You may wish to break these functions into multiple repositories for your own implementation.

Prerequisites

For this demo we deploy a virtual server for controlling the cluster, but there are still some requirements needed to get the control host up and running:

ansible 2.6+
jq 1.5+
terraform 0.11+
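
To confirm the tool versions on the machine you will run the setup scripts from, you can check each one directly:

ansible --version
jq --version
terraform version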

To access the controller you will need an SSH key pair with the public key at ~/.ssh/id_rsa.pub, or you can configure your public key using demo_controller_ansible_ssh_pubkey in config/default/vars/controller.yml.
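
If you do not have a key pair at that location yet, one simple way to create it (the default path below matches what the demo looks for) is:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa

This writes the public key to ~/.ssh/id_rsa.pub.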

You will also need a cloud provider account with sufficient access to create the control host and configure the cluster. At this time GCP is the only supported cloud provider. Because this demo manages all aspects of cluster deployment, it requires that you have owner access to a GCP project. We recommend creating an isolated GCP project dedicated to this purpose, then creating a service account with owner access to the project and setting the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of your GCP credentials file.
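
As a rough sketch using the gcloud CLI, with my-openshift-demo and openshift-provision as placeholder project and service account names, the service account and credentials file could be created like this:

gcloud iam service-accounts create openshift-provision --project my-openshift-demo
gcloud projects add-iam-policy-binding my-openshift-demo \
  --member serviceAccount:openshift-provision@my-openshift-demo.iam.gserviceaccount.com \
  --role roles/owner
gcloud iam service-accounts keys create ~/openshift-provision-credentials.json \
  --iam-account openshift-provision@my-openshift-demo.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=~/openshift-provision-credentials.json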

Documentation for creating a GCP service account and access key can be found in the Google Cloud IAM documentation.

This demo is designed so that it can be run on a GCP free trial account. You will need to activate your free trial to complete the demo setup.

Both OKD and OCP are supported. If you are using the OCP demo, you will need an OpenShift subscription and must set the following environment variables before configuring your control host:

# Credentials used to register systems with subscriptions for software
# installation and updates
export REDHAT_SUBSCRIPTION_USERNAME=...
export REDHAT_SUBSCRIPTION_PASSWORD=...
export REDHAT_SUBSCRIPTION_POOLS=...

# Credentials for retrieving images from registry.redhat.io
export OREG_AUTH_USER=...
export OREG_AUTH_PASSWORD=...

Finally, there is an environment variable used by the ansible inventory to configure whether instances are contacted by their internal or external IP address. You will probably want to set this to use the external (NAT) IP to access your controller:

export GCP_ANSIBLE_INVENTORY_USE_NAT_IP=true

There is a sample environment file at demo.env.example which can be used for reference when setting up your environment. Just copy it to demo.env and edit.
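
For example, assuming you load the file into your shell manually before running the setup scripts:

cp demo.env.example demo.env
# edit demo.env with your values, then load it into the current shell
source demo.env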

You may also wish to set DEMO_CLUSTER_NAME in your demo.env file such as:

export DEMO_CLUSTER_NAME=gcp-demo-sbx-okd3

Controller Setup

Initialize your cloud provider to ensure all required API features are enabled:

./init.sh gcp-demo-sbx-okd3

With your environment set, you can run the configure-controller.sh script to deploy and configure the controller instance for a cluster.

./configure-controller.sh gcp-demo-sbx-okd3

When this command exits it will print the SSH command to connect to the controller.

The cluster firewall will be configured to allow SSH access from your source IP address as reported by http://ipv4bot.whatismyipaddress.com/. If you know your source IP range does not change, you can configure demo_management_source_ip_range. Whenever your source range changes, just rerun the configure-controller.sh script.
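
You can check the source address the firewall rule will be based on by querying the same service yourself:

curl http://ipv4bot.whatismyipaddress.com/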

Note that the default SSH port for the controller is set to TCP 443 using demo_controller_ansible_port in config/default/vars/controller.yml. This has proven useful for accessing the demo cluster from networks with restrictive firewalls.
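
The SSH command printed by configure-controller.sh already accounts for this, but if you connect manually remember to pass the port. For example, with USER and CONTROLLER_EXTERNAL_IP as placeholders for the values from that command:

ssh -p 443 USER@CONTROLLER_EXTERNAL_IP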

To continue with the demo, SSH into the controller and start a tmux or screen session. This will allow you to disconnect and reconnect to long-running ansible jobs, which is crucial if your network connection is not completely reliable.

For example, just log in and run:

tmux

If you lose your network connection, just return later and use:

tmux attach

To disconnect from a tmux session without closing it, use "CTRL+b, d".

Bootstrapping a Cluster

To bootstrap your cluster, run the following from a tmux or screen session:

./bootstrap.sh

Reconfiguring a Cluster

To reconfigure your cluster, use:

./configure.sh

To run openshift-provision for the cluster:

./openshift-provision.sh

This is the main mechanism for routine cluster configuration management and leverages the openshift-provision ansible role.

To generate an updated node image:

./update-node-image.sh

Destroying your Cluster

To destroy your cluster run the following from the control node:

./destroy.sh

To destroy your controller, run the destroy-controller.sh script with your cluster name from the host you used to set up the controller. For example:

./destroy-controller.sh gcp-demo-sbx-okd3

Config Repository

The config directory includes an example of how to lay out a configuration repository with an inheritance structure that makes it easy to manage multiple clusters.
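
A hypothetical sketch of such a layout, based on the paths referenced elsewhere in this README, might look like:

config/
  default/
    vars/
      controller.yml      # shared controller defaults
  clusters/
    gcp-demo-sbx-okd3/
      vars/
        main.yaml         # minimum per-cluster vars file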

Dynamic Inventory

The dynamic inventory uses the following environment variables:


ANSIBLE_GROUP_FILTER

Restrict the dynamic inventory to only return instances belonging to a specified ansible group.

GCP_ANSIBLE_INVENTORY_USE_NAT_IP

Set to "true" to use external (NAT) IP addresses to access instances. This is normally required to access the controller unless you have a VPN configuration to access the internal address space for your cluster.

OPENSHIFT_PROVISION_CONFIG_PATH

Path to the cluster configuration directory.

OPENSHIFT_PROVISION_CLUSTER_NAME

Cluster name within the cluster configuration directory. At a minimum each cluster should have a main vars file under the configuration path at "clusters/${OPENSHIFT_PROVISION_CLUSTER_NAME}/vars/main.yaml".

OPENSHIFT_ROLE_FILTER

Restrict the ansible inventory to only return nodes marked with the specified comma separated list of roles.
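
As a sketch of how these fit together, assuming the repository's ansible configuration points at the dynamic inventory script, you could list the inventory for a cluster with example values like these:

export OPENSHIFT_PROVISION_CONFIG_PATH=./config
export OPENSHIFT_PROVISION_CLUSTER_NAME=gcp-demo-sbx-okd3
export GCP_ANSIBLE_INVENTORY_USE_NAT_IP=true
export OPENSHIFT_ROLE_FILTER=master
ansible-inventory --list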

Resource Hierarchy

FIXME …​

Terraform with Jinja Templates

FIXME …​

License

GPLv3
