[YUNIKORN-2418] Improve clarity of Features documentation (#398)
Closes: #398

Signed-off-by: Craig Condit <[email protected]>
alex-stiff authored and craigcondit committed Feb 22, 2024
1 parent 4adbea4 commit 725f4a7
Showing 1 changed file with 25 additions and 24 deletions: docs/get_started/core_features.md
The main features of YuniKorn include:

## App-aware scheduling
One of the key differences of YuniKorn is that it does app-aware scheduling. The default K8s scheduler simply schedules
pod by pod without any context about user, app, or queue. YuniKorn, however, recognizes users, apps, and queues, and considers
many more factors, e.g. resource and ordering, while making scheduling decisions. This makes it possible to apply
fine-grained controls on resource quotas, resource fairness, and priorities, which are the most important requirements
for a multi-tenancy computing system.
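
For illustration, this is roughly what that context looks like on a pod: an application ID, a target queue, and YuniKorn
as the scheduler. A minimal sketch; the ID and queue values below are placeholders, see the user guide for the labels
your version supports:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sleep-task-0
  labels:
    applicationId: "application-sleep-0001"  # groups pods into one app
    queue: "root.sandbox"                    # target resource queue
spec:
  schedulerName: yunikorn                    # hand this pod to YuniKorn
  containers:
    - name: sleep
      image: "alpine:latest"
      command: ["sleep", "30"]
```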

## Hierarchy Resource Queues
Hierarchy queues provide an efficient mechanism to manage cluster resources. The hierarchy of the queues can logically
map to the structure of an organization. This gives fine-grained control over resources for different tenants. The YuniKorn
UI provides a centralised view to monitor the usage of resource queues and helps you to gain insight into how the resources are
used across different tenants. What's more, by leveraging the min/max queue capacities, you can define how elastic
a queue is in terms of the resource consumption of each tenant.
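
For example, a queue hierarchy mirroring an organization might look like the following `queues.yaml` fragment. Queue
names, values, and units are illustrative only; see the queue configuration guide for the exact syntax of your release:

```yaml
partitions:
  - name: default
    queues:
      - name: root
        queues:
          - name: engineering          # one branch per organization unit
            queues:
              - name: team-a
                resources:
                  guaranteed:          # share the queue can always count on
                    memory: 2G
                    vcore: 2
                  max:                 # elastic upper bound for bursts
                    memory: 8G
                    vcore: 8
              - name: team-b
                resources:
                  max:
                    memory: 4G
                    vcore: 4
```

The gap between `guaranteed` and `max` is exactly the elastic range described above.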

## Gang Scheduling
It is even possible to create multiple gangs of different specifications for one application.
See the [gang design](design/gang_scheduling.md) and the Gang Scheduling [user guide](user_guide/gang_scheduling.md) for more details.
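
As a sketch of how a gang (a set of pods that must be placed together) is declared, pods carry task-group annotations
like the following; the group name, member count, and resources are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker-0
  labels:
    applicationId: "gang-app-0001"
    queue: "root.sandbox"
  annotations:
    yunikorn.apache.org/task-group-name: "workers"   # gang this pod belongs to
    yunikorn.apache.org/task-groups: |-
      [{
        "name": "workers",
        "minMember": 4,
        "minResource": {"cpu": "1", "memory": "2Gi"}
      }]
spec:
  schedulerName: yunikorn
  containers:
    - name: worker
      image: "alpine:latest"
      command: ["sleep", "300"]
```

The application only starts once all 4 gang members can be placed.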

## Job Ordering and Queuing
Applications can be properly queued in working-queues, with the ordering policy determining which application gets resources first.
There are various policies, such as simple `FIFO`, `Fair`, `StateAware`, or `Priority` based. Queues can maintain the order of applications,
and based on different policies, the scheduler allocates resources to jobs accordingly. The behavior is much more predictable.
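
The policy is configured per queue through a queue property, roughly like this (the queue name is a placeholder;
`application.sort.policy` accepts values such as `fifo` and `fair`):

```yaml
partitions:
  - name: default
    queues:
      - name: root
        queues:
          - name: batch
            properties:
              application.sort.policy: fifo   # applications get resources in arrival order
```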

What's more, when the queue max-capacity is configured, jobs and tasks can be properly queued up in the resource queue.
If the remaining capacity is not enough, they wait in line until some resources are released. This simplifies
the client side operation. Unlike the default scheduler, resources are capped by namespace resource quotas which
are enforced by the quota-admission-controller. If the underlying namespace does not have enough quota, pods cannot be
created. The client side then needs complex logic, e.g. retrying by condition, to handle such scenarios.

## Resource fairness
In a multi-tenant environment, many users share cluster resources. To prevent tenants from competing for resources
and potentially getting starved, more fine-grained fairness controls are needed to achieve fairness across users, as well as across teams/organizations.
With consideration of weights or priorities, more important applications can demand resources beyond their share.
This feature is often considered in relation to resource budgets, where a more fine-grained fairness mode can further improve spending efficiency.
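
In configuration, fairness is commonly steered through guaranteed resources, which behave like weights between sibling
queues: the scheduler generally favours whichever queue is furthest below its guarantee. A minimal sketch under that
assumption (names and values are placeholders):

```yaml
partitions:
  - name: default
    queues:
      - name: root
        queues:
          - name: team-a
            resources:
              guaranteed:   # bigger guaranteed share ~ higher weight
                memory: 8G
                vcore: 8
          - name: team-b
            resources:
              guaranteed:
                memory: 2G
                vcore: 2
```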

## Resource Reservation
YuniKorn automatically does reservations for outstanding requests. If a pod could not be allocated, YuniKorn will try to
reserve it on a qualified node and tentatively allocate the pod on this reserved node (before trying the rest of the nodes).
This mechanism can prevent the pod from being starved by future smaller, less-picky pods.
This feature is important in the batch workloads scenario because when a large number of heterogeneous pods are submitted
to the cluster, it's very likely some pods can be starved even when they are submitted much earlier.

## Preemption
YuniKorn's preemption feature allows higher-priority tasks to dynamically reallocate resources by preempting lower-priority ones, ensuring critical workloads get necessary resources in a multi-tenant Kubernetes environment.
This proactive mechanism maintains system stability and fairness, integrating with Kubernetes' priority classes and YuniKorn's hierarchical queue system.
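
Priorities themselves come from standard Kubernetes priority classes. A minimal sketch (the class name and value are
placeholders; queue-level preemption behaviour is tuned separately in YuniKorn's configuration):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-batch          # referenced by pods via priorityClassName
value: 1000000                  # higher value = higher priority
globalDefault: false
description: "Critical workloads that may preempt lower-priority pods"
```

A pod opts in by setting `priorityClassName: critical-batch` in its spec.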

## Throughput
Throughput is a key criterion for measuring scheduler performance. It is critical for a large-scale distributed system.
If throughput is bad, applications may waste time waiting for scheduling, which further impacts service SLAs.
The bigger the cluster gets, the higher the throughput requirement. The [performance evaluation based on Kubemark](performance/evaluate_perf_function_with_kubemark.md)
reveals some performance numbers.

## MaxApplication Enforcement
The MaxApplication enforcement feature allows users to limit the number of running applications for a configured queue.
This feature is critical in large scale batch workloads.
Without this feature, when a large number of concurrent jobs are launched, they would compete for resources, and a certain amount of resources would be wasted, which could lead to job failure.
The [Partition and Queue Configuration](user_guide/queue_config.md) provides configuration examples.
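
As a sketch, such a limit can be expressed in the queue `limits` section (queue name, user wildcard, and count are
placeholders; the guide linked above has the authoritative syntax):

```yaml
partitions:
  - name: default
    queues:
      - name: root
        queues:
          - name: batch
            limits:
              - limit: "cap concurrent apps per user"
                users:
                  - "*"                 # applies to every user
                maxapplications: 10     # at most 10 running apps each
```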

## CPU Architecture support
YuniKorn supports running on ARM as well as on AMD/Intel CPUs.
With the release of YuniKorn 1.1.0, prebuilt convenience images for both architectures are provided on Docker Hub.
