docs(*): migrate from Helm Classic to Helm
This changes all documentation to use Helm as the default tool for installing Deis
Workflow. Users of Helm Classic are urged to use https://github.com/deis/workflow-migration
to migrate from Helm Classic to Helm.
Matthew Fisher committed Nov 23, 2016
1 parent f561953 commit f9811c6
Showing 18 changed files with 175 additions and 668 deletions.
2 changes: 1 addition & 1 deletion mkdocs.yml
@@ -32,7 +32,7 @@ pages:
- Configuring Object Storage: installing-workflow/configuring-object-storage.md
- Configuring Postgres: installing-workflow/configuring-postgres.md
- Configuring the Registry: installing-workflow/configuring-registry.md
- Workflow Helm Charts: installing-workflow/workflow-helm-charts.md
- Chart Provenance: installing-workflow/chart-provenance.md
- Users:
- Command Line Interface: users/cli.md
- Users and Registration: users/registration.md
14 changes: 1 addition & 13 deletions src/contributing/overview.md
@@ -6,19 +6,7 @@ Interested in contributing to a Deis project? There are lots of ways to help.

Find a bug? Want to see a new feature? Have a request for the maintainers? Open a GitHub issue in the applicable repository and we'll get the conversation started.

Our official support channels are:

- GitHub issue queues:
- [builder](https://github.com/deis/builder/issues)
- [chart](https://github.com/deis/charts/issues)
- [database](https://github.com/deis/postgres/issues)
- [helm classic](https://github.com/helm/helm-classic/issues)
- [monitor](https://github.com/deis/monitor/issues)
- [registry](https://github.com/deis/registry/issues)
- [router](https://github.com/deis/router/issues)
- [workflow](https://github.com/deis/workflow/issues)
- [workflow-cli](https://github.com/deis/workflow-cli/issues)
- [Deis #community Slack channel][slack]
Our official support channel is the [Deis #community Slack channel][slack].

Don't know what the applicable repository for an issue is? Open an issue in [workflow][] or chat with a maintainer in the [Deis #community Slack channel][slack] and we'll make sure it gets to the right place.

src/installing-workflow/{workflow-helm-charts.md → chart-provenance.md}
@@ -1,24 +1,13 @@
# Workflow Helm charts
# Chart Provenance

As of Workflow [v2.8.0](../changelogs/v2.8.0.md), Deis has released [Kubernetes Helm][helm] charts for Workflow
and for each of its [components](../understanding-workflow/components.md).

## Installation

Once [Helm][helm] is installed and its server component is running on a Kubernetes cluster, one may install Workflow with the following steps:
```
$ helm repo add deis https://charts.deis.com/workflow # add the workflow charts repo
$ helm install deis/workflow --version=v2.8.0 --namespace=deis -f <optional values file> # injects resources into your cluster
```
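
Once the install command returns, the release can be watched while its components start. A minimal check, assuming `kubectl` is pointed at the same cluster:

```
$ kubectl --namespace=deis get pods   # all Workflow pods should eventually reach Running
```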

## Chart Provenance

Helm provides tools for establishing and verifying chart integrity. (For an overview, see the [Provenance](https://github.com/kubernetes/helm/blob/master/docs/provenance.md) doc.) All release charts from the Deis Workflow team are now signed using this mechanism.

The full `Deis, Inc. (Helm chart signing key) <[email protected]>` public key can be found [here](../security/1d6a97d0.txt), as well as the [pgp.mit.edu](http://pgp.mit.edu/pks/lookup?op=vindex&fingerprint=on&search=0x17E526B51D6A97D0) keyserver and the official Deis Keybase [account][deis-keybase]. The key's fingerprint can be cross-checked against all of these sources.

### Verifying a signed chart
## Verifying a signed chart

The public key mentioned above must exist in a local keyring before a signed chart can be verified.
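
One way to make the key available (a sketch; it assumes the key ID matches the `1d6a97d0` filename above and that `gpg` writes to the classic keyring format Helm reads by default):

```
# Import the Deis chart signing key from the keyserver mentioned above
$ gpg --keyserver pgp.mit.edu --recv-keys 1D6A97D0

# If your gpg stores keys in the newer .kbx format, export them to the
# classic pubring.gpg that helm reads by default
$ gpg --export > ~/.gnupg/pubring.gpg

# Fetch the chart and verify its provenance file against the keyring
$ helm fetch deis/workflow --version v2.8.0 --verify
```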

168 changes: 11 additions & 157 deletions src/installing-workflow/configuring-object-storage.md
@@ -11,7 +11,7 @@ Every component that relies on object storage uses two inputs for configuration:
1. Component-specific environment variables (e.g. `BUILDER_STORAGE` and `REGISTRY_STORAGE`)
2. Access credentials stored as a Kubernetes secret named `objectstorage-keyfile`

The Helm Classic chart for Deis Workflow can be easily configured to connect Workflow components to off-cluster object storage. Deis Workflow currently supports Google Cloud Storage, Amazon S3, Azure Blob Storage and OpenStack Swift Storage.
The Helm chart for Deis Workflow can be easily configured to connect Workflow components to off-cluster object storage. Deis Workflow currently supports Google Cloud Storage, Amazon S3, Azure Blob Storage and OpenStack Swift Storage.

### Step 1: Create storage buckets

@@ -25,172 +25,26 @@ If you provide credentials with sufficient access to the underlying storage, Wor

If applicable, generate credentials that have create and write access to the storage buckets created in Step 1.

If you are using AWS S3 and your Kubernetes nodes are configured with appropriate IAM API keys via InstanceRoles, you do not need to create API credentials. Do, however, validate that the InstanceRole has appropriate permissions to the configured buckets!
If you are using AWS S3 and your Kubernetes nodes are configured with appropriate [IAM][aws-iam] API keys via InstanceRoles, you do not need to create API credentials. Do, however, validate that the InstanceRole has appropriate permissions to the configured buckets!
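
For illustration, a minimal IAM policy granting an InstanceRole the bucket access described above might look like the following (bucket names are placeholders; your environment may require additional actions):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": [
        "arn:aws:s3:::my-registry-bucket",
        "arn:aws:s3:::my-database-bucket",
        "arn:aws:s3:::my-builder-bucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::my-registry-bucket/*",
        "arn:aws:s3:::my-database-bucket/*",
        "arn:aws:s3:::my-builder-bucket/*"
      ]
    }
  ]
}
```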

### Step 3: Fetch Workflow charts
### Step 3: Add Deis Repo

If you haven't already fetched the Helm Classic chart, do so with `helmc fetch deis/workflow-v2.8.0`
If you haven't already added the Helm repo, do so with `helm repo add deis https://charts.deis.com/workflow`

### Step 4: Configure Workflow charts
### Step 4: Configure Workflow Chart

Operators should configure object storage by either populating a set of environment variables or editing the Helm Classic parameters file before running `helmc generate`. Both options are documented below:
Operators should configure object storage by editing the Helm values file before running `helm install`. To do so:

**Option 1:** Using environment variables

After setting a `STORAGE_TYPE` environment variable to the desired object storage type ("s3", "gcs", "azure", or "swift"), set the additional variables as required by the selected object storage (an example follows the note below):

| Storage Type | Required Variables | Notes |
| --- | --- | --- |
| s3 | `AWS_ACCESS_KEY`, `AWS_SECRET_KEY`, `AWS_REGISTRY_BUCKET`, `AWS_DATABASE_BUCKET`, `AWS_BUILDER_BUCKET`, `S3_REGION` | To use [IAM credentials][aws-iam], it is not necessary to set `AWS_ACCESS_KEY` or `AWS_SECRET_KEY`. |
| gcs | `GCS_KEY_JSON`, `GCS_REGISTRY_BUCKET`, `GCS_DATABASE_BUCKET`, `GCS_BUILDER_BUCKET` | |
| azure | `AZURE_ACCOUNT_NAME`, `AZURE_ACCOUNT_KEY`, `AZURE_REGISTRY_CONTAINER`, `AZURE_DATABASE_CONTAINER`, `AZURE_BUILDER_CONTAINER` | |
| swift | `SWIFT_USERNAME`, `SWIFT_PASSWORD`, `SWIFT_AUTHURL`, `SWIFT_AUTHVERSION`, `SWIFT_REGISTRY_CONTAINER`, `SWIFT_DATABASE_CONTAINER`, `SWIFT_BUILDER_CONTAINER` | To specify tenant set `SWIFT_TENANT` if the auth version is 2 or later. |

!!! note
These environment variables should be set **before** running `helmc generate` in Step 5.
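
As an example of this (now-deprecated) Helm Classic flow, an S3 configuration might be exported like so before running `helmc generate`; bucket names and region are placeholders:

```
$ export STORAGE_TYPE=s3
$ export AWS_ACCESS_KEY=<aws access key>
$ export AWS_SECRET_KEY=<aws secret key>
$ export AWS_REGISTRY_BUCKET=my-registry-bucket
$ export AWS_DATABASE_BUCKET=my-database-bucket
$ export AWS_BUILDER_BUCKET=my-builder-bucket
$ export S3_REGION=us-west-2
```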

**Option 2:** Using the template file `tpl/generate_params.toml`, available at `$(helmc home)/workspace/charts/workflow-v2.8.0`

* Edit Helm Classic chart by running `helmc edit workflow-v2.8.0` and look for the template file `tpl/generate_params.toml` (make sure you have the `$EDITOR` environment variable set with your favorite text editor)
* Update the `storage` parameter to reference the platform you are using, e.g. `s3`, `azure`, `gcs`, or `swift`
* Fetch the Helm values by running `helm inspect values deis/workflow | sed -n '1!p' > values.yaml`
* Update the `global/storage` parameter to reference the platform you are using, e.g. `s3`, `azure`, `gcs`, or `swift`
* Find the corresponding section for your storage type and provide appropriate values, including region, bucket names, and access credentials (a sketch follows this list).
* Save your changes to `tpl/generate_params.toml`.
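
As a sketch of the Helm flow, the edited `values.yaml` for S3 might resemble the following; the exact key names here are assumptions that vary by chart version, so prefer the comments in the fetched file:

```
global:
  # Valid values are s3, azure, gcs, swift, or minio
  storage: s3
s3:
  accesskey: <aws access key>   # may be omitted when using IAM instance roles
  secretkey: <aws secret key>
  region: us-west-2
  registry_bucket: my-registry-bucket
  database_bucket: my-database-bucket
  builder_bucket: my-builder-bucket
```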

!!! note
You do not need to base64 encode any of these values as Helm Classic will handle encoding automatically.

### Step 5: Generate manifests

Generate the Workflow chart by running `helmc generate -x manifests workflow-v2.8.0` (if you have previously run this step, make sure you add `-f` to force its regeneration).

### Step 6: Verify credentials

Helm Classic stores the object storage configuration as a Kubernetes secret.

You may check the contents of the generated file named `deis-objectstorage-secret.yaml` in the `helmc` workspace directory:
```
$ cat $(helmc home)/workspace/charts/workflow-v2.8.0/manifests/deis-objectstorage-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: objectstorage-keyfile
...
data:
accesskey: bm9wZSBub3BlCg==
secretkey: c3VwZXIgbm9wZSBub3BlIG5vcGUgbm9wZSBub3BlCg==
region: ZWFyZgo=
registry-bucket: bXlmYW5jeS1yZWdpc3RyeS1idWNrZXQK
database-bucket: bXlmYW5jeS1kYXRhYmFzZS1idWNrZXQK
builder-bucket: bXlmYW5jeS1idWlsZGVyLWJ1c2tldAo=
```
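
The values are plain base64, so any entry can be spot-checked by decoding it; it should round-trip to the value you configured:

```
$ echo "bXlmYW5jeS1yZWdpc3RyeS1idWNrZXQK" | base64 --decode
myfancy-registry-bucket
```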

You are now ready to `helmc install workflow-v2.8.0` using your desired object storage.

## Object Storage Configuration and Credentials

During the `helmc generate` step, Helm Classic creates a Kubernetes secret in the Deis namespace named `objectstorage-keyfile`. The exact structure of the file depends on the storage backend specified in `tpl/generate_params.toml`.

```
# Set the storage backend
#
# Valid values are:
# - s3: Store persistent data in AWS S3 (configure in S3 section)
# - azure: Store persistent data in Azure's object storage
# - gcs: Store persistent data in Google Cloud Storage
# - minio: Store persistent data on in-cluster Minio server
# - swift: Store persistent data in OpenStack Swift object storage cluster
storage = "minio"
```

Individual components map the master credential secret to either secret-backed environment variables or volumes. See below for the component-by-component locations.
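
As an illustration of the volume-based pattern (a sketch, not a manifest from the chart), a component Pod might mount the secret like this, mirroring the registry paths listed below:

```
apiVersion: v1
kind: Pod
metadata:
  name: registry-example
  namespace: deis
spec:
  containers:
  - name: registry
    image: example/registry        # placeholder image
    env:
    - name: REGISTRY_STORAGE       # selects the storage backend
      value: s3
    volumeMounts:
    - name: objectstorage-keyfile
      mountPath: /var/run/secrets/deis/registry/creds
      readOnly: true
  volumes:
  - name: objectstorage-keyfile
    secret:
      secretName: objectstorage-keyfile
```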

## Component Details

### [deis/builder](https://github.com/deis/builder)

The builder looks for a `BUILDER_STORAGE` environment variable, which it then uses as a key to look up the object storage location and authentication information from the `objectstore-creds` volume.

### [deis/slugbuilder](https://github.com/deis/slugbuilder)

Slugbuilder is configured and launched by the builder component. Slugbuilder reads credential information from the standard `objectstorage-keyfile` secret.

If you are using slugbuilder as a standalone component, the following configuration is important (an example follows the note below):

- `TAR_PATH` - The location of the application `.tar` archive, relative to the configured bucket for builder, e.g. `home/burley-yeomanry:git-3865c987/tar`
- `PUT_PATH` - The location to upload the finished slug, relative to the configured bucket of builder, e.g. `home/burley-yeomanry:git-3865c987/push`
- `CACHE_PATH` - The location to upload the cache, relative to the configured bucket of builder, e.g. `home/burley-yeomanry/cache`
* Save your changes.

!!! note
These environment variables are case-sensitive.
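
For example, a standalone slugbuilder invocation might export the paths like so (values are placeholders mirroring the examples above):

```
$ export TAR_PATH="home/burley-yeomanry:git-3865c987/tar"
$ export PUT_PATH="home/burley-yeomanry:git-3865c987/push"
$ export CACHE_PATH="home/burley-yeomanry/cache"
```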

### [deis/slugrunner](https://github.com/deis/slugrunner)

Slugrunner is configured and launched by the controller inside a Workflow cluster. If you are using slugrunner as a standalone component, the following configuration is important:

- `SLUG_URL` - environment variable containing the path of the slug, relative to the builder storage location, e.g. `home/burley-yeomanry:git-3865c987/push/slug.tgz`

Slugrunner reads credential information from an `objectstorage-keyfile` secret in the current Kubernetes namespace.

### [deis/dockerbuilder](https://github.com/deis/dockerbuilder)

Dockerbuilder is configured and launched by the builder component. Dockerbuilder reads credential information from the standard `objectstorage-keyfile` secret.

If you are using dockerbuilder as a standalone component, the following configuration is important:

- `TAR_PATH` - The location of the application `.tar` archive, relative to the configured bucket for builder, e.g. `home/burley-yeomanry:git-3865c987/tar`

### [deis/controller](https://github.com/deis/controller)

The controller is responsible for configuring the execution environment for buildpack-based applications. Controller copies `objectstorage-keyfile` into the application namespace so slugrunner can fetch the application slug.

The controller interacts through Kubernetes APIs and does not use any environment variables for object storage configuration.

### [deis/registry](https://github.com/deis/registry)

The registry looks for a `REGISTRY_STORAGE` environment variable, which it then uses as a key to look up the object storage location and authentication information.

The registry reads credential information by reading `/var/run/secrets/deis/registry/creds/objectstorage-keyfile`.

This is the file location for the `objectstorage-keyfile` secret on the Pod filesystem.

### [deis/database](https://github.com/deis/postgres)

The database looks for a `DATABASE_STORAGE` environment variable, which it then uses as a key to look up the object storage location and authentication information (an inspection example follows the lists below).

Minio (`DATABASE_STORAGE=minio`):

* `AWS_ACCESS_KEY_ID` via /var/run/secrets/deis/objectstore/creds/accesskey
* `AWS_SECRET_ACCESS_KEY` via /var/run/secrets/deis/objectstore/creds/secretkey
* `AWS_DEFAULT_REGION` is the Minio default of "us-east-1"
* `BUCKET_NAME` is the on-cluster default of "dbwal"

AWS (`DATABASE_STORAGE=s3`):

* `AWS_ACCESS_KEY_ID` via /var/run/secrets/deis/objectstore/creds/accesskey
* `AWS_SECRET_ACCESS_KEY` via /var/run/secrets/deis/objectstore/creds/secretkey
* `AWS_DEFAULT_REGION` via /var/run/secrets/deis/objectstore/creds/region
* `BUCKET_NAME` via /var/run/secrets/deis/objectstore/creds/database-bucket

GCS (`DATABASE_STORAGE=gcs`):

* `GS_APPLICATION_CREDS` via /var/run/secrets/deis/objectstore/creds/key.json
* `BUCKET_NAME` via /var/run/secrets/deis/objectstore/creds/database-bucket

Azure (`DATABASE_STORAGE=azure`):

* `WABS_ACCOUNT_NAME` via /var/run/secrets/deis/objectstore/creds/accountname
* `WABS_ACCESS_KEY` via /var/run/secrets/deis/objectstore/creds/accountkey
* `BUCKET_NAME` via /var/run/secrets/deis/objectstore/creds/database-container
You do not need to base64 encode any of these values as Helm will handle encoding automatically.

Swift (`DATABASE_STORAGE=swift`):
You are now ready to run `helm install deis/workflow --namespace deis -f values.yaml` using your desired object storage.

* `SWIFT_USERNAME` via /var/run/secrets/deis/objectstore/creds/username
* `SWIFT_PASSWORD` via /var/run/secrets/deis/objectstore/creds/password
* `SWIFT_AUTHURL` via /var/run/secrets/deis/objectstore/creds/authurl
* `SWIFT_AUTHVERSION` via /var/run/secrets/deis/objectstore/creds/authversion
* `SWIFT_TENANT` via /var/run/secrets/deis/objectstore/creds/tenant
* `BUCKET_NAME` via /var/run/secrets/deis/objectstore/creds/database-container
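
To see which of the file-backed values above actually exist on a running cluster, the secret can be inspected directly:

```
$ kubectl --namespace deis get secret objectstorage-keyfile -o yaml
```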

[minio]: ../understanding-workflow/components.md#object-storage
[generate-params-toml]: https://github.com/deis/charts/blob/master/workflow-dev/tpl/generate_params.toml
[aws-iam]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
34 changes: 10 additions & 24 deletions src/installing-workflow/configuring-postgres.md
@@ -29,30 +29,16 @@ $ psql -h <host> -p <port> -d postgres -U <"postgres" or your own username>

## Configuring Workflow

The Helm Classic chart for Deis Workflow can be easily configured to connect the Workflow controller component to an off-cluster PostgreSQL database.

* **Step 1:** If you haven't already fetched the Helm Classic chart, do so with `helmc fetch deis/workflow-v2.8.0`
* **Step 2:** Update database connection details either by setting the appropriate environment variables _or_ by modifying the template file `tpl/generate_params.toml`. Note that environment variables take precedence over settings in `tpl/generate_params.toml`.
* **1.** Using environment variables:
* Set `DATABASE_LOCATION` to `off-cluster`.
* Set `DATABASE_HOST` to the hostname or public IP of your off-cluster PostgreSQL RDBMS.
* Set `DATABASE_PORT` to the port listened to by your off-cluster PostgreSQL RDBMS, typically `5432`.
* Set `DATABASE_NAME` to the name of the database provisioned for use by Workflow's controller component, typically `deis`.
* Set `DATABASE_USERNAME` to the username of the database user that owns the database, typically `deis`.
* Set `DATABASE_PASSWORD` to the password for the database user that owns the database.
* **2.** Using template file `tpl/generate_params.toml`:
* Open the Helm Classic chart with `helmc edit workflow-v2.8.0` and look for the template file `tpl/generate_params.toml`
* Update the `database_location` parameter to `off-cluster`.
* Update the values in the `[database]` configuration section to properly reflect all connection details.
* Save your changes.
* Note: Whether using environment variables or `tpl/generate_params.toml`, you do not need to (and must not) base64 encode any values, as the Helm Classic chart will automatically handle encoding as necessary.
* **Step 3:** Re-generate the Helm Classic chart by running `helmc generate -x manifests workflow-v2.8.0`
* **Step 4:** Check the generated files in your `manifests` directory. You should see:
* `deis-controller-deployment.yaml` contains relevant connection details.
* `deis-database-secret-creds.yaml` exists and contains base64 encoded database username and password.
* No other database-related Kubernetes resources are defined, i.e. none of `database-database-service-account.yaml`, `database-database-service.yaml`, or `database-database-deployment.yaml` exist.

You are now ready to `helmc install workflow-v2.8.0` [as usual][installing].
The Helm chart for Deis Workflow can be easily configured to connect the Workflow controller component to an off-cluster PostgreSQL database.

* **Step 1:** If you haven't already fetched the values, do so with `helm inspect values deis/workflow | sed -n '1!p' > values.yaml`
* **Step 2:** Update database connection details by modifying `values.yaml`:
* Update the `database_location` parameter to `off-cluster`.
* Update the values in the `[database]` configuration section to properly reflect all connection details (a sketch follows this list).
* Save your changes.
* Note: you do not need to (and must not) base64 encode any values, as the Helm chart will automatically handle encoding as necessary.
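
A hypothetical `values.yaml` excerpt for this configuration (key names vary by chart version, so follow the comments in the fetched file rather than this sketch):

```
global:
  database_location: off-cluster
database:
  postgres:
    name: deis              # assumed default database name
    username: deis          # assumed default owner
    password: <password>
    host: db.example.com    # placeholder host
    port: "5432"
```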

You are now ready to `helm install deis/workflow --namespace deis -f values.yaml` [as usual][installing].

[database]: ../understanding-workflow/components.md#database
[object storage]: configuring-object-storage.md