OpenShift 3.9 #47

Merged: 6 commits, May 1, 2018

26 changes: 24 additions & 2 deletions README.md
@@ -4,7 +4,7 @@

This project shows you how to set up OpenShift Origin on AWS using Terraform. This is the companion project to my article [Get up and running with OpenShift on AWS](http://www.dwmkerr.com/get-up-and-running-with-openshift-on-aws/).

![OpenShift Sample Project](./docs/openshift-sample.png)
![OpenShift Sample Project](./docs/origin_3.9_screenshot.png)

I am also adding some 'recipes' which you can use to mix in more advanced features:

@@ -219,14 +219,24 @@ When you run `make openshift`, all that happens is the `inventory.template.cfg`

## Choosing the OpenShift Version

Currently, OpenShift 3.9 is installed.

To change the version, just update the version identifier in this line of the [`./install-from-bastion.sh`](./install-from-bastion.sh) script:

```bash
git clone -b release-3.6 https://github.com/openshift/openshift-ansible
git clone -b release-3.9 https://github.com/openshift/openshift-ansible
```

Available versions are listed [here](https://github.com/openshift/openshift-ansible#getting-the-correct-version).


| Version | Status |
|---------|--------|
| 3.9 | Tested successfully |
| 3.7 | [Work in progress](https://github.com/dwmkerr/terraform-aws-openshift/pull/43) |
| 3.6 | Tested successfully |
| 3.5 | Tested successfully |

OpenShift 3.5 is fully tested, and has a slightly different setup. You can build 3.5 by checking out the [`release/openshift-3.5`](https://github.com/dwmkerr/terraform-aws-openshift/tree/release/openshift-3.5) branch.
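
For example, a minimal sketch:

```bash
# Clone the project and switch to the OpenShift 3.5 variant.
git clone https://github.com/dwmkerr/terraform-aws-openshift.git
cd terraform-aws-openshift
git checkout release/openshift-3.5
```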

## Destroying the Cluster
@@ -329,6 +339,18 @@ https://github.com/dwmkerr/terraform-aws-openshift/issues/40

If the AWS-generated hostnames for the OpenShift nodes are specified in the inventory, this problem should not occur. If internal DNS names are used (e.g. node1.openshift.internal), the issue will appear.
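
For example, hypothetical host entries (the hostnames here are illustrative):

```
# Works: an AWS-generated private DNS name, which is what
# inventory.template.cfg produces from the instances' private_dns.
[masters]
ip-10-0-1-129.ec2.internal

# Triggers the issue: a custom internal DNS name such as
# master.openshift.local or node1.openshift.internal.
```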

**Unable to restart service origin-master-api**

```
Failure summary:


1. Hosts: ip-10-0-1-129.ec2.internal
Play: Configure masters
Task: restart master api
Message: Unable to restart service origin-master-api: Job for origin-master-api.service failed because the control process exited with error code. See "systemctl status origin-master-api.service" and "journalctl -xe" for details.
```
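
If this occurs, a reasonable first step (suggested by the message itself) is to SSH to the affected master via the bastion and inspect the failed unit:

```bash
# Show the unit's state, then the relevant journal entries.
systemctl status origin-master-api.service
journalctl -xe -u origin-master-api.service
```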

## Developer Guide

This section is intended for those who want to update or modify the code.
Binary file added docs/origin_3.9_screenshot.png
19 changes: 14 additions & 5 deletions install-from-bastion.sh
@@ -3,15 +3,24 @@ set -x
# Elevate privileges, retaining the environment.
sudo -E su

# Install dev tools and Ansible 2.2
# Install dev tools.
yum install -y "@Development Tools" python2-pip openssl-devel python-devel gcc libffi-devel
pip install -Iv ansible==2.3.0.0

# Clone the openshift-ansible repo, which contains the installer.
git clone -b release-3.6 https://github.com/openshift/openshift-ansible
# Get the OpenShift 3.9 installer.
pip install -I ansible==2.4.3.0
git clone -b release-3.9 https://github.com/openshift/openshift-ansible

# Get the OpenShift 3.7 installer.
# pip install -Iv ansible==2.4.1.0
# git clone -b release-3.7 https://github.com/openshift/openshift-ansible

# Get the OpenShift 3.6 installer.
# pip install -Iv ansible==2.3.0.0
# git clone -b release-3.6 https://github.com/openshift/openshift-ansible

# Run the playbook.
ANSIBLE_HOST_KEY_CHECKING=False /usr/local/bin/ansible-playbook -i ./inventory.cfg ./openshift-ansible/playbooks/byo/config.yml # uncomment for verbose! -vvv
ANSIBLE_HOST_KEY_CHECKING=False /usr/local/bin/ansible-playbook -i ./inventory.cfg ./openshift-ansible/playbooks/prerequisites.yml
ANSIBLE_HOST_KEY_CHECKING=False /usr/local/bin/ansible-playbook -i ./inventory.cfg ./openshift-ansible/playbooks/deploy_cluster.yml
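# Note: from release-3.9 the install is split into the prerequisites.yml and
# deploy_cluster.yml playbooks above; release-3.6 used the single
# byo/config.yml playbook.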

# If needed, uninstall with the below:
# ansible-playbook playbooks/adhoc/uninstall.yml
9 changes: 7 additions & 2 deletions inventory.template.cfg
@@ -7,6 +7,7 @@
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
etcd
nodes

# Set variables common for all OSEv3 hosts
@@ -17,8 +18,9 @@ ansible_ssh_user=ec2-user
# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true

# Deploy OpenShift origin.
deployment_type=origin
# Deploy OpenShift Origin 3.9.
openshift_deployment_type=origin
openshift_release=v3.9
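# Note: openshift_release should match the openshift-ansible branch cloned
# in install-from-bastion.sh (release-3.9 at the time of writing).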

# We need a wildcard DNS setup for our public access to services, fortunately
# we can use the superb xip.io to get one for free.
@@ -37,6 +39,9 @@ openshift_cloudprovider_kind=aws
openshift_cloudprovider_aws_access_key=${access_key}
openshift_cloudprovider_aws_secret_key=${secret_key}

# Set the cluster_id.
openshift_clusterid=${cluster_id}

# Create the masters host group. Note that due to:
# https://github.com/dwmkerr/terraform-aws-openshift/issues/40
# We cannot use the internal DNS names (such as master.openshift.local) as there
2 changes: 2 additions & 0 deletions main.tf
@@ -13,6 +13,8 @@ module "openshift" {
subnet_cidr = "10.0.1.0/24"
key_name = "openshift"
public_key_path = "${var.public_key_path}"
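// The two values below feed the common resource tags (see
// modules/openshift/01-tags.tf) and the generated inventory; use a
// distinct cluster_id for each cluster in the same AWS account.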
cluster_name = "openshift-cluster"
cluster_id = "openshift-cluster-${var.region}"
}

// Output some useful variables for quick SSH access etc.
9 changes: 9 additions & 0 deletions modules/openshift/00-variables.tf
@@ -26,3 +26,12 @@ variable "key_name" {
variable "public_key_path" {
description = "The local public key path, e.g. ~/.ssh/id_rsa.pub"
}

variable "cluster_name" {
description = "Name of the cluster, e.g: 'openshift-cluster'. Useful when running multiple clusters in the same AWS account."
}

variable "cluster_id" {
description = "ID of the cluster, e.g: 'openshift-cluster-us-east-1'. Useful when running multiple clusters in the same AWS account."
}

26 changes: 26 additions & 0 deletions modules/openshift/01-tags.tf
@@ -0,0 +1,26 @@
// Wherever possible, we will use a common set of tags for resources. This
// makes it much easier to set up resource based billing, tag based access,
// resource groups and more.
//
// We are also required to set certain tags on resources to support Kubernetes
// and AWS integration, which is needed for dynamic volume provisioning.
//
// This is quite fiddly; the following resources should be useful:
//
// - Terraform: Local Values: https://www.terraform.io/docs/configuration/locals.html
// - Terraform: Default Tags for Resources in Terraform: https://github.com/hashicorp/terraform/issues/2283
// - Terraform: Variable Interpolation for Tags: https://github.com/hashicorp/terraform/issues/14516
// - OpenShift: Cluster Labelling Requirements: https://docs.openshift.org/latest/install_config/configuring_aws.html#aws-cluster-labeling

// Define our common tags.
// - Project: Purely for my own organisation, delete or change as you like!
// - KubernetesCluster: Set to <cluster_name>, required for OpenShift < 3.7
// - kubernetes.io/cluster/<name>: Set to <cluster_id>, required for OpenShift >= 3.7
// The syntax below is ugly, but needed as we are using dynamic key names.
locals {
common_tags = "${map(
"Project", "openshift",
"KubernetesCluster", "${var.cluster_name}",
"kubernetes.io/cluster/${var.cluster_name}", "${var.cluster_id}"
)}"
}
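// As a hypothetical illustration: with the values set in main.tf
// (cluster_name = "openshift-cluster", and cluster_id =
// "openshift-cluster-us-east-1" for the default us-east-1 region),
// the common tags evaluate to:
//   Project                                  = "openshift"
//   KubernetesCluster                        = "openshift-cluster"
//   kubernetes.io/cluster/openshift-cluster = "openshift-cluster-us-east-1"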
File renamed without changes.
44 changes: 28 additions & 16 deletions modules/openshift/02-vpc.tf → modules/openshift/03-vpc.tf
@@ -3,20 +3,26 @@ resource "aws_vpc" "openshift" {
cidr_block = "${var.vpc_cidr}"
enable_dns_hostnames = true

tags {
Name = "OpenShift VPC"
Project = "openshift"
}
// Use our common tags and add a specific name.
tags = "${merge(
local.common_tags,
map(
"Name", "OpenShift VPC"
)
)}"
}

// Create an Internet Gateway for the VPC.
resource "aws_internet_gateway" "openshift" {
vpc_id = "${aws_vpc.openshift.id}"

tags {
Name = "OpenShift IGW"
Project = "openshift"
}
// Use our common tags and add a specific name.
tags = "${merge(
local.common_tags,
map(
"Name", "OpenShift IGW"
)
)}"
}

// Create a public subnet.
@@ -27,10 +33,13 @@ resource "aws_subnet" "public-subnet" {
map_public_ip_on_launch = true
depends_on = ["aws_internet_gateway.openshift"]

tags {
Name = "OpenShift Public Subnet"
Project = "openshift"
}
// Use our common tags and add a specific name.
tags = "${merge(
local.common_tags,
map(
"Name", "OpenShift Public Subnet"
)
)}"
}

// Create a route table allowing all addresses access to the IGW.
@@ -42,10 +51,13 @@ resource "aws_route_table" "public" {
gateway_id = "${aws_internet_gateway.openshift.id}"
}

tags {
Name = "OpenShift Public Route Table"
Project = "openshift"
}
// Use our common tags and add a specific name.
tags = "${merge(
local.common_tags,
map(
"Name", "OpenShift Public Route Table"
)
)}"
}

// Now associate the route table with the public subnet - giving
@@ -19,10 +19,13 @@ resource "aws_security_group" "openshift-vpc" {
self = true
}

tags {
Name = "OpenShift Internal VPC"
Project = "openshift"
}
// Use our common tags and add a specific name.
tags = "${merge(
local.common_tags,
map(
"Name", "OpenShift Internal VPC"
)
)}"
}

// This security group allows public ingress to the instances for HTTP, HTTPS
@@ -64,10 +67,13 @@ resource "aws_security_group" "openshift-public-ingress" {
cidr_blocks = ["0.0.0.0/0"]
}

tags {
Name = "OpenShift Public Access"
Project = "openshift"
}
// Use our common tags and add a specific name.
tags = "${merge(
local.common_tags,
map(
"Name", "OpenShift Public Ingress"
)
)}"
}

// This security group allows public egress from the instances for HTTP and
@@ -93,10 +99,13 @@ resource "aws_security_group" "openshift-public-egress" {
cidr_blocks = ["0.0.0.0/0"]
}

tags {
Name = "OpenShift Public Access"
Project = "openshift"
}
// Use our common tags and add a specific name.
tags = "${merge(
local.common_tags,
map(
"Name", "OpenShift Public Egress"
)
)}"
}

// Security group which allows SSH access to a host. Used for the bastion.
@@ -113,8 +122,11 @@ resource "aws_security_group" "openshift-ssh" {
cidr_blocks = ["0.0.0.0/0"]
}

tags {
Name = "OpenShift SSH Access"
Project = "openshift"
}
// Use our common tags and add a specific name.
tags = "${merge(
local.common_tags,
map(
"Name", "OpenShift SSH Access"
)
)}"
}
File renamed without changes.
40 changes: 22 additions & 18 deletions modules/openshift/05-nodes.tf → modules/openshift/06-nodes.tf
@@ -42,14 +42,14 @@ resource "aws_instance" "master" {
}

key_name = "${aws_key_pair.keypair.key_name}"

tags {
Name = "OpenShift Master"
Project = "openshift"
// this tag is required for dynamic EBS PVCs
// see https://github.com/kubernetes/kubernetes/issues/39178
KubernetesCluster = "openshift-${var.region}"
}
// Use our common tags and add a specific name.
tags = "${merge(
local.common_tags,
map(
"Name", "OpenShift Master"
)
)}"
}

// Create the node userdata script.
@@ -91,11 +91,13 @@ resource "aws_instance" "node1" {

key_name = "${aws_key_pair.keypair.key_name}"

tags {
Name = "OpenShift Node 1"
Project = "openshift"
KubernetesCluster = "openshift-${var.region}"
}
// Use our common tags and add a specific name.
tags = "${merge(
local.common_tags,
map(
"Name", "OpenShift Node 1"
)
)}"
}
resource "aws_instance" "node2" {
ami = "${data.aws_ami.rhel7_2.id}"
@@ -126,9 +128,11 @@ resource "aws_instance" "node2" {

key_name = "${aws_key_pair.keypair.key_name}"

tags {
Name = "OpenShift Node 2"
Project = "openshift"
KubernetesCluster = "openshift-${var.region}"
}
// Use our common tags and add a specific name.
tags = "${merge(
local.common_tags,
map(
"Name", "OpenShift Node 2"
)
)}"
}
File renamed without changes.
@@ -13,8 +13,8 @@ resource "aws_instance" "bastion" {

key_name = "${aws_key_pair.keypair.key_name}"

tags {
Name = "OpenShift Bastion"
Project = "openshift"
}
// Use our common tags and add a specific name.
tags = "${merge(
local.common_tags,
map(
"Name", "OpenShift Bastion"
)
)}"
}
@@ -10,6 +10,7 @@ data "template_file" "inventory" {
master_hostname = "${aws_instance.master.private_dns}"
node1_hostname = "${aws_instance.node1.private_dns}"
node2_hostname = "${aws_instance.node2.private_dns}"
cluster_id = "${var.cluster_id}"
}
}

4 changes: 1 addition & 3 deletions variables.tf
@@ -1,9 +1,7 @@
// The region we will deploy our cluster into.
variable "region" {
description = "Region to deploy the cluster into"
// The default below will be fine for many, but to make it clear for first
// time users, there's no default, so you will be prompted for a region.
// default = "us-east-1"
default = "us-east-1"
}

// The public key to use for SSH access.