Small changes per PR review
Rahul Gidwani committed Oct 13, 2022
1 parent 738f72d commit 6c13bf0
Showing 3 changed files with 39 additions and 29 deletions.
42 changes: 22 additions & 20 deletions docs/development/extensions-contrib/k8s-jobs.md
@@ -1,5 +1,5 @@
---
id: k8s-jobs
title: "MM-less Druid in K8s"
---

@@ -22,24 +22,24 @@ title: "MM-less Druid in K8s"
~ under the License.
-->

Consider this an [EXPERIMENTAL](../experimental.md) feature mostly because it has not been tested yet on a wide variety of long-running Druid clusters.
Apache Druid Extension to enable using Kubernetes for launching and managing tasks instead of the Middle Managers. This extension allows you to launch tasks as kubernetes jobs removing the need for your middle manager.

Apache Druid Extension to enable using Kubernetes for launching and managing tasks instead of the Middle Managers. This extension allows you to launch tasks as K8s jobs removing the need for your middle manager.
Consider this an [EXPERIMENTAL](../experimental.md) feature mostly because it has not been tested yet on a wide variety of long-running Druid clusters.

## How it works

It takes the podSpec of your `Overlord` pod and creates a kubernetes job from this podSpec. Thus if you have sidecars such as splunk, hubble, istio it can optionally launch a task as a k8s job. All jobs are natively restorable, they are decopled from the druid deployment, thus restarting pods or doing upgrades has no affect on tasks in flight. They will continue to run and when the overlord comes back up it will start tracking them again.
The K8s extension takes the podSpec of your `Overlord` pod and creates a kubernetes job from this podSpec. Thus if you have sidecars such as Splunk or Istio, it can optionally launch a task as a K8s job. All jobs are natively restorable, they are decoupled from the druid deployment, thus restarting pods or doing upgrades has no effect on tasks in flight. They will continue to run and when the overlord comes back up it will start tracking them again.
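Because each task runs as an ordinary Kubernetes Job in the namespace you configure below, you can watch tasks with standard tooling; a quick sketch (the `druid` namespace name is illustrative):

```
# List the task jobs the overlord has launched (namespace is illustrative)
kubectl get jobs -n druid
# Inspect the job spec that was derived from the Overlord podSpec
kubectl get job <task-job-name> -n druid -o yaml
```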

## Configuration

To use this extension please make sure to [include](../extensions.md#loading-extensions)`druid-kubernetes-overlord-extensions` in the extensions load list for your overlord process.

The extension uses the task queue to limit how many concurrent tasks (k8s jobs) are in flight so it is required you have a reasonable value for `druid.indexer.queue.maxSize`. Additionally set the variable `druid.indexer.runner.namespace` to the namespace in which you are running druid.
The extension uses the task queue to limit how many concurrent tasks (K8s jobs) are in flight so it is required you have a reasonable value for `druid.indexer.queue.maxSize`. Additionally set the variable `druid.indexer.runner.namespace` to the namespace in which you are running druid.

Other configurations required are:
`druid.indexer.runner.type: k8s` and `druid.indexer.task.enableTaskLevelLogPush: true`

You can add optional labels to your k8s jobs / pods if you need them by using the following configuration:
You can add optional labels to your K8s jobs / pods if you need them by using the following configuration:
`druid.indexer.runner.labels: '{"key":"value"}'`
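
Putting the pieces above together, a minimal overlord `runtime.properties` fragment might look like this sketch (the namespace, queue size, and labels are illustrative values, not defaults):

```
# Illustrative overlord properties for the K8s task runner
druid.extensions.loadList=["druid-kubernetes-overlord-extensions"]
druid.indexer.runner.type=k8s
druid.indexer.runner.namespace=druid
druid.indexer.queue.maxSize=10
druid.indexer.task.enableTaskLevelLogPush=true
druid.indexer.runner.labels={"key":"value"}
```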

All other configurations you had for the middle manager tasks must be moved under the overlord with one caveat, you must specify javaOpts as an array:
@@ -52,22 +52,24 @@ Additional Configuration
### Properties
|Property|Possible Values|Description|Default|required|
|--------|---------------|-----------|-------|--------|
|`druid.indexer.runner.debugJobs`|`boolean`|Clean up k8s jobs after tasks complete.|False|No|
|`druid.indexer.runner.debugJobs`|`boolean`|Clean up K8s jobs after tasks complete.|False|No|
|`druid.indexer.runner.sidecarSupport`|`boolean`|If your overlord pod has sidecars, this will attempt to start the task with the same sidecars as the overlord pod.|False|No|
|`druid.indexer.runner.kubexitImage`|`String`|Uses the kubexit project to help shut down sidecars when the main pod completes. Otherwise jobs with sidecars never terminate.|karlkfi/kubexit:latest|No|
|`druid.indexer.runner.disableClientProxy`|`boolean`|Use this if you have a global http(s) proxy and you wish to bypass it.|false|No|
|`druid.indexer.runner.maxTaskDuration`|`Duration`|Max time a task is allowed to run for before getting killed|4H|No|
|`druid.indexer.runner.taskCleanupDelay`|`Duration`|How long do jobs stay around before getting reaped from k8s|2D|No|
|`druid.indexer.runner.taskCleanupInterval`|`Duration`|How often to check for jobs to be reaped|10m|No|
|`druid.indexer.runner.k8sjobLaunchTimeout`|`Duration`|How long to wait to launch a k8s task before marking it as failed, on a resource constrained cluster it may take some time.|1H|No|
|`druid.indexer.runner.javaOptsArray`|`Duration`|java opts for the task.|-Xmx1g|No|
|`druid.indexer.runner.graceTerminationPeriodSeconds`|`Long`|Number of seconds you want to wait after a sigterm for container lifecycle hooks to complete. Keep at a smaller value if you want tasks to hold locks for shorter periods.|30s (k8s default)|No|
|`druid.indexer.runner.maxTaskDuration`|`Duration`|Max time a task is allowed to run for before getting killed|`PT4H`|No|
|`druid.indexer.runner.taskCleanupDelay`|`Duration`|How long do jobs stay around before getting reaped from K8s|`P2D`|No|
|`druid.indexer.runner.taskCleanupInterval`|`Duration`|How often to check for jobs to be reaped|`PT10M`|No|
|`druid.indexer.runner.k8sjobLaunchTimeout`|`Duration`|How long to wait to launch a K8s task before marking it as failed; on a resource-constrained cluster it may take some time.|`PT1H`|No|
|`druid.indexer.runner.javaOptsArray`|`JsonArray`|java opts for the task.|`-Xmx1g`|No|
|`druid.indexer.runner.labels`|`JsonObject`|Additional labels you wish to apply to the peon pod|`{}`|No|
|`druid.indexer.runner.annotations`|`JsonObject`|Additional annotations you wish to apply to the peon pod|`{}`|No|
|`druid.indexer.runner.graceTerminationPeriodSeconds`|`Long`|Number of seconds you want to wait after a sigterm for container lifecycle hooks to complete. Keep at a smaller value if you want tasks to hold locks for shorter periods.|`PT30S` (K8s default)|No|
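
The `Duration` properties above take ISO-8601 periods, as the defaults show; a sketch of overriding a few of them (the values here are illustrative):

```
# Illustrative duration overrides (ISO-8601 periods)
druid.indexer.runner.maxTaskDuration=PT6H
druid.indexer.runner.taskCleanupDelay=P1D
druid.indexer.runner.k8sjobLaunchTimeout=PT30M
```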

### Gotchas

- You must have in your role the abiliity to launch jobs.
- You must have in your role the ability to launch jobs.
- All Druid Pods belonging to one Druid cluster must be inside the same kubernetes namespace.
- For the sidecar support to work, your entrypoint / command in docker must be explicitly defined your spec.
- For the sidecar support to work, your entry point / command in docker must be explicitly defined in your spec.

You can't have something like this:
Dockerfile:
@@ -82,7 +84,7 @@ and in your sidecar specs:
```

That will not work, because we cannot decipher what your command is; the extension needs to know it explicitly.
**Even for sidecars like isito which are dynamically created by the service mesh, this needs to happen.*
**Even for sidecars like Istio which are dynamically created by the service mesh, this needs to happen.**

Instead do the following:
You can keep your Dockerfile the same but you must have a sidecar spec like so:
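As a rough illustration of the shape only (placeholder image and script names, not the doc's own example), a sidecar entry with its command spelled out explicitly might look like:

```
# Illustrative sidecar container spec; image and script names are placeholders.
# The command is given explicitly rather than inherited from the image ENTRYPOINT,
# so the extension can read it when building the task job.
containers:
  - name: sidecar
    image: your-sidecar-image:tag
    command: ["/bin/sh", "-c", "/start-sidecar.sh"]
```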
@@ -97,7 +99,7 @@ You can keep your Dockerfile the same but you must have a sidecar spec like so:
The following roles must also be accessible. An example spec could be:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: druid-cluster
@@ -112,7 +114,7 @@ rules:
- '*'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: druid-cluster
subjects:
@@ -121,5 +123,5 @@ subjects:
roleRef:
kind: Role
name: druid-cluster
apiGroup: rbac.authorization.k8s.io
```
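
Once saved to a file, the role and binding above can be applied like any other manifest (file and namespace names are illustrative):

```
kubectl apply -f druid-cluster-role.yaml -n druid
```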
@@ -316,7 +316,7 @@ public void start()
KubernetesTaskRunnerConfig.toMilliseconds(k8sConfig.taskCleanupInterval),
TimeUnit.MILLISECONDS
);
log.info("Started cleanup executor for jobs older than 1 day....");
log.debug("Started cleanup executor for jobs older than 1 day....");
}


@@ -451,7 +451,7 @@ public void unregisterListener(String listenerId)
for (Pair<TaskRunnerListener, Executor> pair : listeners) {
if (pair.lhs != null && pair.lhs.getListenerId().equals(listenerId)) {
listeners.remove(pair);
log.info("Unregistered listener [%s]", listenerId);
log.debug("Unregistered listener [%s]", listenerId);
return;
}
}
@@ -467,7 +467,7 @@ public void registerListener(TaskRunnerListener listener, Executor executor)
}

final Pair<TaskRunnerListener, Executor> listenerPair = Pair.of(listener, executor);
log.info("Registered listener [%s]", listener.getListenerId());
log.debug("Registered listener [%s]", listener.getListenerId());
listeners.add(listenerPair);
}

@@ -506,12 +506,13 @@ public RunnerTaskState getRunnerTaskState(String taskId)
return null;
} else {
PeonPhase phase = PeonPhase.getPhaseFor(item);
if (PeonPhase.PENDING.equals(phase)) {
return RunnerTaskState.PENDING;
} else if (PeonPhase.RUNNING.equals(phase)) {
return RunnerTaskState.RUNNING;
} else {
return RunnerTaskState.NONE;
switch (phase) {
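// Map the peon job's phase onto the corresponding Druid runner task state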
case PENDING:
return RunnerTaskState.PENDING;
case RUNNING:
return RunnerTaskState.RUNNING;
default:
return RunnerTaskState.NONE;
}
}
}
7 changes: 7 additions & 0 deletions website/.spelling
@@ -118,6 +118,7 @@ InputSource
InputSources
Integer.MAX_VALUE
ioConfig
Istio
JBOD
JDBC
JDK
@@ -182,6 +183,7 @@ S3
SDK
SIGAR
SPNEGO
Splunk
SqlInputSource
SQLServer
SSD
@@ -321,11 +323,14 @@ json_object
json_paths
json_query
json_value
karlkfi
kerberos
keystore
keytool
keytab
kubernetes
kubexit
k8s
laning
lifecycle
localhost
@@ -373,6 +378,7 @@ pathParts
performant
plaintext
pluggable
podSpec
postgres
postgresql
pre-aggregated
@@ -434,6 +440,7 @@ secondaryPartitionPruning
seekable-stream
servlet
setProcessingThreadNames
sigterm
simple-client-sslcontext
sharded
sharding
