[stable/k8s-spot-termination-handler] Merge [incubator/kube-spot-termination-notice-handler] into this chart (helm#10286)

* [stable/k8s-spot-termination-handler] Merge [incubator/kube-spot-termination-notice-handler] to this stable chart

Signed-off-by: Mikhail Zholobov <[email protected]>

* [incubator/kube-spot-termination-notice-handler] Delete the chart

It's merged to [stable/k8s-spot-termination-handler]

Signed-off-by: Mikhail Zholobov <[email protected]>

* [incubator/kube-spot-termination-notice-handler] add support for option to detach from autoscaling group

Signed-off-by: Frode Egeland <[email protected]>
Signed-off-by: Mikhail Zholobov <[email protected]>
legal90 authored and k8s-ci-robot committed Feb 1, 2019
1 parent 0150265 commit e3f1cd2
Showing 18 changed files with 130 additions and 312 deletions.
21 changes: 0 additions & 21 deletions incubator/kube-spot-termination-notice-handler/.helmignore

This file was deleted.

11 changes: 0 additions & 11 deletions incubator/kube-spot-termination-notice-handler/Chart.yaml

This file was deleted.

37 changes: 0 additions & 37 deletions incubator/kube-spot-termination-notice-handler/README.md

This file was deleted.

66 changes: 0 additions & 66 deletions incubator/kube-spot-termination-notice-handler/templates/rbac.yaml

This file was deleted.

48 changes: 0 additions & 48 deletions incubator/kube-spot-termination-notice-handler/values.yaml

This file was deleted.

8 changes: 4 additions & 4 deletions stable/k8s-spot-termination-handler/Chart.yaml
@@ -1,14 +1,14 @@
 apiVersion: v1
-appVersion: "0.1.0"
+appVersion: "1.10.8-1"
 description: The K8s Spot Termination handler handles draining AWS Spot Instances in response to termination requests.
 name: k8s-spot-termination-handler
-version: 0.1.0
+version: 1.0.0
 keywords:
 - spot
 - termination
-home: https://github.com/pusher/k8s-spot-termination-handler
+home: https://github.com/kube-aws/kube-spot-termination-notice-handler
 sources:
-- https://github.com/pusher/k8s-spot-termination-handler
+- https://github.com/kube-aws/kube-spot-termination-notice-handler
 maintainers:
 - name: kierranm
   email: [email protected]
43 changes: 43 additions & 0 deletions stable/k8s-spot-termination-handler/README.md
@@ -0,0 +1,43 @@
# Kubernetes AWS EC2 Spot Termination Notice Handler

This chart installs the [k8s-spot-termination-handler](https://github.com/kube-aws/kube-spot-termination-notice-handler)
as a daemonset across the cluster nodes.

## Purpose

Spot instances on EC2 come with significant cost savings, but also with the risk that an instance is terminated
when the market price rises above the maximum price you have configured.

The termination handler watches the EC2 metadata API for termination notices and starts draining the node
so that it can be terminated safely. Optionally, it can also send a message to a Slack channel announcing that
a termination notice has been received.
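
Conceptually, the handler's behaviour boils down to a loop like the one below. This is a simplified sketch for illustration only, not the chart's actual entrypoint; `NODE_NAME` and `POLL_INTERVAL` stand in for values the daemonset is assumed to inject.

```
# Sketch only: poll the EC2 spot termination-time endpoint and drain the node
# when a notice appears. NODE_NAME is assumed to come from the downward API.
while true; do
  if curl -sf http://169.254.169.254/latest/meta-data/spot/termination-time > /dev/null; then
    kubectl drain "${NODE_NAME}" --ignore-daemonsets --delete-local-data --force
    break
  fi
  sleep "${POLL_INTERVAL:-5}"
done
```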

## Installation

Installing into the `kube-system` namespace is recommended, but not required. The following example assumes that namespace is used.

```
helm install stable/k8s-spot-termination-handler --namespace kube-system
```

## Configuration

The following table lists the configurable parameters of the k8s-spot-termination-handler chart and their default values.

Parameter | Description | Default
--- | --- | ---
`image.repository` | container image repository | `kubeaws/kube-spot-termination-notice-handler`
`image.tag` | container image tag | `1.10.8-1`
`image.pullPolicy` | container image pull policy | `IfNotPresent`
`pollInterval` | the interval in seconds between polls of the EC2 metadata API for termination events | `"5"`
`slackUrl` | Slack webhook URL to send messages when a termination notice is received | _not defined_
`clusterName` | cluster name to include in Slack messages when `slackUrl` is set | _not defined_
`enableLogspout` | if `true`, enable Logspout log capturing (Logspout must be deployed separately) | `false`
`rbac.create` | if `true`, create & use RBAC resources | `true`
`serviceAccount.create` | if `true`, create a service account | `true`
`serviceAccount.name` | the name of the service account to use. If not set and `create` is `true`, a name is generated using the fullname template. | ``
`detachAsg` | if `true`, the handler detects the node's (standard) Auto Scaling Group and initiates a detach when a termination notice is received | `false`
`resources` | pod resource requests & limits | `{}`
`nodeSelector` | node labels for pod assignment | `{}`
`tolerations` | node taints to tolerate (requires Kubernetes >=1.6) | `[]`
`affinity` | node/pod affinities (requires Kubernetes >=1.6) | `{}`
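
As an illustration, Slack notifications and Auto Scaling Group detachment could be enabled at install time with `--set` flags (parameter names are taken from the table above; the webhook URL and cluster name are placeholders to replace with your own):

```
helm install stable/k8s-spot-termination-handler \
  --namespace kube-system \
  --set clusterName=my-cluster \
  --set slackUrl=https://hooks.slack.com/services/REPLACE/ME \
  --set detachAsg=true
```
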
2 changes: 1 addition & 1 deletion stable/k8s-spot-termination-handler/templates/NOTES.txt
@@ -1,3 +1,3 @@
 To verify that k8s-spot-termination-handler has started, run:
 
-kubectl --namespace={{ .Release.Namespace }} get pods -l "app={{ template "k8s-spot-termination-handler.name" . }},release={{ .Release.Name }}"
+kubectl --namespace={{ .Release.Namespace }} get pods -l "app={{ template "k8s-spot-termination-handler.name" . }},release={{ .Release.Name }}"
2 changes: 1 addition & 1 deletion stable/k8s-spot-termination-handler/templates/_helpers.tpl
@@ -40,4 +40,4 @@ Create the name of the service account to use
 {{- else -}}
 {{ default "default" .Values.serviceAccount.name }}
 {{- end -}}
-{{- end -}}
+{{- end -}}