This repository has been archived by the owner on Nov 30, 2021. It is now read-only.

fix(managing-workflow): update logging and monitoring docs to reflect helm v2
Matthew Fisher committed Dec 15, 2016
1 parent 94ff422 commit 233b630
Showing 2 changed files with 25 additions and 32 deletions.
39 changes: 14 additions & 25 deletions src/managing-workflow/platform-logging.md
@@ -11,38 +11,20 @@
If the message is from the [Workflow Router](https://github.com/deis/router) we
Logger then acts as a consumer, reading messages off of the NSQ logs topic and storing those messages in a local Redis instance. When a user wants to retrieve log entries using the `deis logs` command, we make an HTTP request from Controller to Logger, which then fetches the appropriate data from Redis.

## Configuring Off Cluster Redis

Even though we provide a Redis instance with the default Workflow install, it is recommended that operators use a third-party source such as ElastiCache or a similar offering. This way your data is durable across upgrades or outages. If you have a third-party Redis installation you would like to use, all you need to do is set the following values in your helm chart:

* db = "0"
* host = "my.host.redis"
* port = "6379"
* password = ""

These can be changed by running `helm inspect values deis/workflow > values.yaml` before using
`helm install` to complete the installation. To customize the redis credentials, edit `values.yaml`
and modify the `redis` section of the file to tune these settings.
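
As a concrete sketch, the relevant portion of `values.yaml` might look something like the following. The key names here are an assumption for illustration; always confirm them against the output of `helm inspect values deis/workflow`:

```
redis:
  # "off-cluster" tells Workflow to use the external instance described below
  location: "off-cluster"
  db: "0"
  host: "my.host.redis"
  port: "6379"
  password: ""
```

With values like these in place, a subsequent `helm install` picks up the external Redis settings instead of deploying the in-cluster instance.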

## Debugging Logger

If the `deis logs` command encounters an error it will return the following message:

@@ -53,6 +35,7 @@
```
Error: There are currently no log messages. Please check the following things:
```

## Architecture Diagram

```
┌────────┐
│ Router │ ┌────────┐ ┌─────┐
```
@@ -84,9 +67,12 @@

## Default Configuration

By default, the Fluentd pod can be configured to talk to numerous syslog endpoints. For example, it is possible to have Fluentd send log messages to both the Logger component and [Papertrail](https://papertrailapp.com/). This allows production deployments of Deis to satisfy stringent logging requirements such as offsite backups of log data.

Configuring Fluentd to talk to multiple syslog endpoints means modifying the Fluentd daemonset
manifest. This means you will need to fetch the chart with `helm fetch deis/workflow --untar`, then
modify `workflow/charts/fluentd/templates/logger-fluentd-daemon.yaml` with the following:

@@ -111,5 +97,8 @@
```
env:
  ...
  value: "5144"
```

Then run `helm install ./workflow --namespace deis` to install the modified chart.
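
For reference, a fuller version of that `env:` stanza might look like the sketch below. The variable names (`SYSLOG_HOST_1`, `SYSLOG_PORT_1`, and so on) and the Papertrail endpoint are assumptions for illustration; check which environment variables the fluentd image actually reads before relying on them:

```
env:
  # first syslog endpoint (names assumed for illustration)
- name: "SYSLOG_HOST_1"
  value: "my.syslog.host"
- name: "SYSLOG_PORT_1"
  value: "514"
  # second syslog endpoint, e.g. a Papertrail drain
- name: "SYSLOG_HOST_2"
  value: "logs.papertrailapp.com"
- name: "SYSLOG_PORT_2"
  value: "5144"
```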

### Customizing

We currently support logging information to Syslog, Elasticsearch, and Sumo Logic. However, we will gladly accept pull requests that add support for other locations. For more information, please visit the [fluentd repository](https://github.com/deis/fluentd).
18 changes: 11 additions & 7 deletions src/managing-workflow/platform-monitoring.md
@@ -1,13 +1,15 @@
# Platform Monitoring

## Description

We now include a monitoring stack for introspection on a running Kubernetes cluster. The stack includes three components:

* [Telegraf](https://docs.influxdata.com/telegraf/v0.12/) - Metrics collection daemon written by the team behind InfluxDB
* [InfluxDB](https://docs.influxdata.com/influxdb/v0.12/) - Time series database
* [Grafana](http://grafana.org/) - Graphing tool for time series data

## Architecture Diagram

```
┌────────┐
│ Router │ ┌────────┐ ┌─────┐
```
@@ -55,7 +57,8 @@
instance fails, modified dashboards are lost. Until there is a solution to persist these
changes, store them separately in version control.

### Off Cluster Grafana

It is recommended that users provide their own Grafana installation if possible. The current deployment of Grafana within Workflow is not durable across pod restarts, which means custom dashboards created after startup will not be restored when the pod comes back up. If you wish to provide your own Grafana instance, you can set `grafana_location` in the `values.yaml` file before running `helm install`.
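
A minimal sketch of what that might look like in `values.yaml`, assuming the chart nests the setting under a `grafana` section (verify the actual layout with `helm inspect values deis/workflow`):

```
grafana:
  # point Workflow at an externally managed Grafana instance
  grafana_location: "off-cluster"
```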

## InfluxDB

@@ -73,13 +76,14 @@
You can choose to not expose the Influx UI and API to the world by updating the
following line - `router.deis.io/routable: "true"`.
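
For illustration, that annotation lives on the service object that fronts the Influx API; a sketch of the relevant manifest fragment is below (the service name, labels, and port are assumptions for this example):

```
apiVersion: v1
kind: Service
metadata:
  name: deis-influxapi
  namespace: deis
  annotations:
    # delete this annotation (or set it to "false") to stop the
    # Workflow router from exposing the Influx UI and API
    router.deis.io/routable: "true"
spec:
  ports:
  - port: 8086
  selector:
    app: deis-influxdb
```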

### Off Cluster Influxdb

To use off-cluster Influx, please provide the following values in the `values.yaml` file before running `helm install`:

* `influxdb_location=off-cluster`
* `url = "http://my-influxhost.com:8086"`
* `database = "metrics"`
* `user = "InfluxUser"`
* `password = "MysuperSecurePassword"`
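
Put together, the corresponding `values.yaml` fragment might look like this; the section name and key spellings are assumptions to verify against `helm inspect values deis/workflow`:

```
influxdb:
  influxdb_location: "off-cluster"
  url: "http://my-influxhost.com:8086"
  database: "metrics"
  user: "InfluxUser"
  password: "MysuperSecurePassword"
```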

## Telegraf

