Creating a metric like this turns out not to be straightforward: we need to investigate how the Kubernetes Job controller actually manages the backoff counter. Consequently, the task size has been raised to XL and the priority has been lowered.
Description
We need to track streams that can be stopped because their backoff limit has been exhausted.
Possible solution
On every Job change event, read the number of pods in the Failed state and compute the metric from that value, as sketched below.
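A minimal sketch of this approach, assuming the official `kubernetes` Python client and `prometheus_client`; the metric name, label names, and port are illustrative, not decided. It watches Job change events and exports `status.failed` as a gauge. Note that `status.failed` counts failed pods and is not necessarily identical to the controller's internal backoff counter, which is the part that still needs investigation.

```python
# Sketch only: watch Job events and export the failed-pod count as a gauge.
from kubernetes import client, config, watch
from prometheus_client import Gauge, start_http_server

FAILED_PODS = Gauge(
    "job_failed_pods",  # hypothetical metric name
    "Number of failed pods observed for a Job",
    ["namespace", "job_name"],
)

def run(namespace: str = "default") -> None:
    config.load_incluster_config()  # or config.load_kube_config() when run locally
    batch_v1 = client.BatchV1Api()
    start_http_server(8000)  # expose /metrics for scraping

    # Every Job change event carries the full Job object; status.failed is
    # the count of pods the Job controller has recorded as failed so far
    # (bounded by spec.backoff_limit before the Job itself is marked Failed).
    for event in watch.Watch().stream(batch_v1.list_namespaced_job, namespace):
        job = event["object"]
        failed = job.status.failed or 0
        FAILED_PODS.labels(job.metadata.namespace, job.metadata.name).set(failed)
```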
Alternatives
Listen to pod events and keep an in-memory job_id <--> failed_pods_count mapping (see the sketch below).
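A sketch of this alternative under the same assumptions (official `kubernetes` Python client; names are illustrative): watch pod events, resolve each pod's owning Job via its owner references, and maintain the mapping in memory.

```python
# Sketch only: maintain an in-memory job_uid -> failed pod names mapping
# from pod watch events.
from collections import defaultdict
from kubernetes import client, config, watch

def watch_failed_pods(namespace: str = "default") -> None:
    config.load_incluster_config()
    core_v1 = client.CoreV1Api()
    failed_by_job: dict[str, set[str]] = defaultdict(set)

    for event in watch.Watch().stream(core_v1.list_namespaced_pod, namespace):
        pod = event["object"]
        owners = pod.metadata.owner_references or []
        job_uids = [o.uid for o in owners if o.kind == "Job"]
        if not job_uids or pod.status.phase != "Failed":
            continue
        # Track pod names in a set so repeated MODIFIED events for the same
        # failed pod are not double counted.
        failed_by_job[job_uids[0]].add(pod.metadata.name)
        print(job_uids[0], "failed pods:", len(failed_by_job[job_uids[0]]))
```

This keeps the counting logic independent of how the Job controller exposes its own counters, at the cost of holding state in memory that is lost on restart.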
Context
The backoffLimit algorithm is explained here: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy