
Changing Image tags causes Error: UPGRADE FAILED: cannot patch "<name>-create-user" with kind Job: Job.batch "<name>-create-user" #21943

Closed
repl-mike-roest opened this issue Mar 2, 2022 · 23 comments
Labels
area:helm-chart Airflow Helm Chart kind:bug This is clearly a bug

Comments

@repl-mike-roest (Contributor)

Official Helm Chart version

1.4.0 (latest released)

Apache Airflow version

2.2.4 (latest released)

Kubernetes Version

1.21.5 (EKS)

Helm Chart configuration

We are using an external RDS DB server configured via secrets.

We also specified

airflowVersion: 2.2.3
defaultAirflowTag: 2.2.3

along with the following flags, since without them the chart never progressed when deploying via CodeBuild (the create-user/run-db-migrations jobs were not running):

createUserJob:
  useHelmHooks: false
migrateDatabaseJob:
  useHelmHooks: false

Docker Image customisations

This happens both when transitioning from the default Airflow image 2.2.3 -> 2.2.4 and when changing our custom image between versions, or from a default Airflow image to our custom image.

What happened

The following error was returned from the helm upgrade command:

Error: UPGRADE FAILED: cannot patch "pre-production-create-user" with kind Job: Job.batch "pre-production-create-user" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"component":"create-user-job", "controller-uid":"52e67857-b3f0-414c-b176-3027c93e4a05", "job-name":"pre-production-create-user", "release":"pre-production", "tier":"airflow"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume{core.Volume{Name:"config", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(nil), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(0xc0107dfdc0), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}}, InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"create-user", Image:"434423891815.dkr.ecr.us-west-2.amazonaws.com/airflow-playground/airflow:b-23-IP2-51", Command:[]string(nil), Args:[]string{"bash", "-c", "airflow users create \"$@\"", "--", "-r", "Admin", "-u", "admin", "-e", "[email protected]", "-f", "admin", "-l", "user", "-p", "DFPGxku#V#&h{C:)qiOmta3s"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource{}, Env:[]core.EnvVar{core.EnvVar{Name:"AIRFLOW__CORE__FERNET_KEY", Value:"", ValueFrom:(*core.EnvVarSource)(0xc016c97560)}, core.EnvVar{Name:"AIRFLOW__CORE__SQL_ALCHEMY_CONN", Value:"", ValueFrom:(*core.EnvVarSource)(0xc016c97580)}, core.EnvVar{Name:"AIRFLOW_CONN_AIRFLOW_DB", Value:"", ValueFrom:(*core.EnvVarSource)(0xc016c975a0)}, core.EnvVar{Name:"AIRFLOW__WEBSERVER__SECRET_KEY", Value:"", ValueFrom:(*core.EnvVarSource)(0xc016c975c0)}, core.EnvVar{Name:"AIRFLOW__CELERY__CELERY_RESULT_BACKEND", Value:"", ValueFrom:(*core.EnvVarSource)(0xc016c97600)}, core.EnvVar{Name:"AIRFLOW__CELERY__RESULT_BACKEND", Value:"", ValueFrom:(*core.EnvVarSource)(0xc016c97620)}, core.EnvVar{Name:"AIRFLOW__CELERY__BROKER_URL", Value:"", ValueFrom:(*core.EnvVarSource)(0xc016c97640)}}, 
Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount{core.VolumeMount{Name:"config", ReadOnly:true, MountPath:"/opt/airflow/airflow.cfg", SubPath:"airflow.cfg", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc01a548358), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{}, ServiceAccountName:"pre-production-airflow-create-user-job", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc01771ca80), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(0xc01b38ff20), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration{}, HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable && cannot patch "pre-production-run-airflow-migrations" with kind Job: Job.batch "pre-production-run-airflow-migrations" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"component":"run-airflow-migrations", "controller-uid":"19d78edd-2df2-4f61-ba6b-01592d103327", "job-name":"pre-production-run-airflow-migrations", "release":"pre-production", "tier":"airflow"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume{core.Volume{Name:"config", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(nil), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), 
ConfigMap:(*core.ConfigMapVolumeSource)(0xc01a11e200), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}}, InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"run-airflow-migrations", Image:"434423891815.dkr.ecr.us-west-2.amazonaws.com/airflow-playground/airflow:b-23-IP2-51", Command:[]string(nil), Args:[]string{"bash", "-c", "airflow db upgrade"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource{}, Env:[]core.EnvVar{core.EnvVar{Name:"AIRFLOW__CORE__FERNET_KEY", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00cd021c0)}, core.EnvVar{Name:"AIRFLOW__CORE__SQL_ALCHEMY_CONN", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00cd02200)}, core.EnvVar{Name:"AIRFLOW_CONN_AIRFLOW_DB", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00cd02220)}, core.EnvVar{Name:"AIRFLOW__WEBSERVER__SECRET_KEY", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00cd02260)}, core.EnvVar{Name:"AIRFLOW__CELERY__CELERY_RESULT_BACKEND", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00cd02280)}, core.EnvVar{Name:"AIRFLOW__CELERY__RESULT_BACKEND", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00cd022c0)}, core.EnvVar{Name:"AIRFLOW__CELERY__BROKER_URL", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00cd022e0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount{core.VolumeMount{Name:"config", ReadOnly:true, MountPath:"/opt/airflow/airflow.cfg", SubPath:"airflow.cfg", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc01bc01a08), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{}, ServiceAccountName:"pre-production-airflow-migrate-database-job", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc008e4f300), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(0xc0127b9ad0), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration{}, HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable

What you expected to happen

The helm chart should upgrade successfully and change my running images to the new version.

How to reproduce

Deploy the helm chart with

airflowVersion: 2.2.3
defaultAirflowTag: 2.2.3

in your values.yaml, using the following command:

helm upgrade --install --wait --timeout 900s pre-production apache-airflow/airflow --namespace airflow --version 1.4.0 -f values.yaml

Then run the same command after changing the image tags to 2.2.4.

Anything else

This seems to happen whenever we change the image tag (even within the same Airflow release): if we're using a custom image that contains our DAGs, changing from one tag to another produces the same error.

Are you willing to submit a PR?

  • Yes I am willing to submit a PR!

Code of Conduct

  • I agree to follow this project's Code of Conduct
@repl-mike-roest repl-mike-roest added area:helm-chart Airflow Helm Chart kind:bug This is a clearly a bug labels Mar 2, 2022

@repl-mike-roest (Contributor, Author)

This was a side effect of having

createUserJob:
  useHelmHooks: false
migrateDatabaseJob:
  useHelmHooks: false

which was required because of the --wait flag, which doesn't work with the hooks, as per #11979.

@grjones (Contributor) commented Dec 8, 2022

@repl-mike-roest Curious if you ever found a permanent resolution for this. We have this in Terraform, so we can't easily remove the useHelmHooks: false flags.

@mconigliaro commented Dec 8, 2022

I think #27148 might be the fix for this. It adds createUserJob.applyCustomEnv and migrateDatabaseJob.applyCustomEnv options which we should set to false. We're just waiting on a new release of the helm chart.
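
For reference, a minimal sketch of setting those options from the helm CLI (flag names assume a chart release that includes #27148):

helm upgrade --install pre-production apache-airflow/airflow \
  --namespace airflow -f values.yaml \
  --set createUserJob.applyCustomEnv=false \
  --set migrateDatabaseJob.applyCustomEnv=false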

@grjones (Contributor) commented Dec 8, 2022

@mconigliaro Thank you kindly

@jay-olulana

> I think #27148 might be the fix for this. It adds createUserJob.applyCustomEnv and migrateDatabaseJob.applyCustomEnv options which we should set to false. We're just waiting on a new release of the helm chart.

The helm chart is released now, and this doesn't solve the problem. Updating the image tag still fails the terraform apply with the error in the subject.


@potiuk potiuk reopened this Feb 9, 2023
potiuk added a commit to potiuk/airflow that referenced this issue Feb 9, 2023
When you upgrade/change a job in K8s that has finished and has not been manually removed, this leads to a "field is immutable" error.

This is a known Kubernetes issue:

kubernetes/kubernetes#89657

There are some workarounds (manually removing the job, for example), but the only good solution is possible only in K8s 1.23+ with ttlSecondsAfterFinished set for the job, so that K8s can auto-clean it.

This PR adds it conditionally for K8s >= 1.23.

Fixes: apache#21943
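
A sketch of what such a conditional can look like in a Helm job template (the exact template and values key in the actual PR may differ; .Values.migrateDatabaseJob.ttlSecondsAfterFinished is assumed here):

spec:
  {{- if semverCompare ">=1.23.0-0" .Capabilities.KubeVersion.Version }}
  # K8s 1.23+ garbage-collects the Job this many seconds after it finishes
  ttlSecondsAfterFinished: {{ .Values.migrateDatabaseJob.ttlSecondsAfterFinished }}
  {{- end }}
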
@potiuk (Member) commented Feb 9, 2023

I think #29439 should handle it long term. It seems to be a known issue with K8s, @jay-olulana (kubernetes/kubernetes#89657), and it has been fixed in 1.23 by adding ttlSecondsAfterFinished.

UPDATE: #29439 has been closed in favour of the more complete #29314.

There is no automated way for you to recover, but you can do it manually, if I am right:

  • have k8s 1.23+
  • apply my PR to your chart
  • nuke the chart - remove it. Since you have Terraform, that should be an easy way, and redeploying it should restore it.
  • alternatively, remove the affected jobs manually using kubectl or the like (see the command after this list)
  • redeploy the chart with the fix
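
A sketch of the manual removal mentioned above (job names taken from the error in this issue; adjust the release name and namespace to yours):

# delete the completed jobs so the next helm upgrade can recreate them
kubectl delete job pre-production-create-user pre-production-run-airflow-migrations -n airflow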

Once you redeploy the chart with the PR including ttlSecondsAfterFinished, the finished jobs should get deleted automatically after ~5 minutes (you can also decrease the TTL before deploying it).
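
For illustration, a minimal standalone Job manifest showing where the field sits (the name, image, and command are placeholders, not the chart's own):

apiVersion: batch/v1
kind: Job
metadata:
  name: ttl-demo
spec:
  ttlSecondsAfterFinished: 300   # K8s 1.23+ deletes the finished Job after ~5 minutes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: demo
          image: busybox
          command: ["sh", "-c", "echo done"]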

I would appreciate it, @jay-olulana, if you could test the scenarios involved and confirm that my proposed fix works for you.

@jay-olulana

Hi @potiuk, sorry for the late reply. I did apply your fix after upgrading to chart 1.8.0, but the error persists - although I think this is a false positive.
Below are the steps I took:

  • As you know, I have Airflow deployed via helm using Terraform to an EKS cluster.
    • The EKS cluster runs k8s 1.23.
    • The Helm chart version for Airflow is 1.8.0.
  • I added the following to my overrides.yaml:

    createUserJob:
      useHelmHooks: false
      applyCustomEnv: false
    migrateDatabaseJob:
      useHelmHooks: false
      applyCustomEnv: false

    ttlSecondsAfterFinished: 300

  • Then I nuked the chart (destroyed the entire airflow namespace on EKS).
  • Recreated Airflow with an image tag, say v1.1.0.
  • Updated the image tag to a new version, v1.1.1.
  • Ran terraform apply, and this error popped up again:

    [screenshot: terraform apply error]

The good news is that my Airflow pods are recreated in EKS with the new tags and are healthy, but terraform fails the apply.
Hope this provides some insight.

@potiuk (Member) commented Mar 16, 2023

> ttlSecondsAfterFinished: 300

Did you wait 5 minutes (after the job completed) before updating the tag?

@jay-olulana

Yes, I did (more than that, even). But I will run more tests this weekend and let you know if it persists.

@potiuk (Member) commented Mar 16, 2023

> Yes, I did (more than that, even). But I will run more tests this weekend and let you know if it persists.

Please do - also, you can check whether the TTL is observed: the job should disappear after 5 minutes, so if it is still there, maybe the version of K8s you run has the feature disabled for some reason.
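
One way to check (a sketch; adjust the namespace to yours):

# watch the jobs; the create-user / run-airflow-migrations jobs should
# disappear roughly ttlSecondsAfterFinished seconds after they complete
kubectl get jobs -n airflow -w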

@jay-olulana

Okay, so I ran some tests. To give you more context, this is a snapshot of our airflow namespace in our test env (using k9s):

[screenshot: k9s namespace overview]

The above was taken after killing the namespace and then reapplying. A screenshot of the jobs at recreation of the namespace:

[screenshot: jobs at recreation of the namespace]

Then a screenshot of the jobs after the TTL:

[screenshot: jobs after the TTL]

Updating the tags still fails terraform, so maybe the k8s version doesn't allow this deletion of the create-user job.

@potiuk (Member) commented Mar 20, 2023

Note that this fix has not yet been released; it requires manually patching the chart and recreating your deployment from scratch.

Is ttlSecondsAfterFinished part of your job definition, @jay-olulana (as implemented in #29314)? Can you double-check it? In order to use it, you will have to either use the main version of the chart or apply the change from #29314 manually to your chart, and only after that deploy your chart (with the changes) from scratch. Your k8s version would also have to support ttlSecondsAfterFinished. Only then should any NEXT upgrade work (assuming it is done later than ttlSecondsAfterFinished seconds after the job finishes). Simply put, K8s will automatically delete the completed job after this time if everything aligns.

So you need to make sure that your chart contains the changes, that your job gets the spec parameter, and that k8s handles it.

If those conditions are not fulfilled, you can always redeploy Airflow from scratch.

@elongl (Contributor) commented Mar 21, 2023

@potiuk Is there any particular reason the chart is not released yet? I think it would ease deployment.
Here are the solutions I'm considering at the moment:

  • Use postrender on helm_release and edit the ttlSecondsAfterFinished.
  • Use wait_for_jobs on helm_release, and then kubectl delete job afterwards.
  • Run kubectl wait followed by kubectl delete job (sketched after this list).
  • Clone and use the pre-release chart that has support for this.
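
For the third option, a sketch of what that could look like (job names and namespace are illustrative, taken from earlier in this issue):

# wait for the migration job to finish, then delete both jobs so the
# next helm upgrade can recreate them
kubectl wait --for=condition=complete --timeout=300s job/pre-production-run-airflow-migrations -n airflow
kubectl delete job pre-production-run-airflow-migrations pre-production-create-user -n airflow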

@potiuk (Member) commented Mar 21, 2023

> @potiuk Is there any particular reason the chart is not released yet? I think it would ease deployment. Here are the solutions I'm considering at the moment:

The chart is released semi-regularly, when the release managers decide to release it (I am not one of them for the Helm chart, BTW). I think asking for "the reason for not releasing" is the wrong question. It takes time and effort to publish a release, and it is done by volunteers when they see the time is right for it; one issue affecting a small group of users might not be enough to warrant it.

I think the right question to ask is "what can I do to help speed up the release?". Let me answer that question instead. If you confirm that the change fixes the problem, by applying the changes locally and confirming it here, it might definitely increase the chances that the release managers will decide to release the helm chart.

Also, as a follow-up (after you confirm it), it would immensely help if you helped test the release candidate. Subscribe to the devlist to get the announcement; whenever we release an RC for the chart, we ask people to test it and confirm that it works. I looked it up and have not seen your help in
https://github.com/apache/airflow/issues?q=is%3Aissue+%22status+of+testing+Apache+Airflow+Helm+Chart%22 so I think this is a great opportunity to get involved and help when we do release it.

Can we count on your help there, @elongl, to verify and confirm it, and then later take part in testing when an RC is out? That would certainly help speed up the release.

@elongl (Contributor) commented Mar 21, 2023

@potiuk Thanks a lot for sharing the context on the Helm releases.
Yes, I'd definitely love to help, though I'm not yet sure how I can test the RC locally.
Is there a guide for doing so?

@potiuk (Member) commented Mar 21, 2023

Of course - see #21943 (comment) and the usual Helm chart things. A Helm chart is just a folder you can install. You have install instructions in https://github.com/apache/airflow/tree/main/chart (INSTALL), or you can host it yourself somewhere. This is where manual patching (or using the latest sources) comes into play: just check it out, patch the changes (or use latest main), and install with helm from there.
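
For example, a minimal sketch of installing straight from a checkout (assuming the chart lives in the chart/ folder of the repo, as linked above):

git clone https://github.com/apache/airflow.git
cd airflow
# apply the #29314 change here if it is not yet in your checkout
helm upgrade --install pre-production ./chart --namespace airflow -f values.yaml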

@potiuk (Member) commented Mar 21, 2023

And when the RC is out, you will also be able to install it following the RC instructions: https://github.com/apache/airflow/blob/main/dev/README_RELEASE_HELM_CHART.md#verify-release-candidates-by-contributors

Those instructions are posted every time an RC is out.

@elongl (Contributor) commented Mar 21, 2023

Thanks again, that really helped.
Just wanted to confirm that using the chart from the main branch did in fact fix the issue for me; it is working well now.

Also, I noticed that ttlSecondsAfterFinished: 300 is added by default (I didn't add it to my values.yaml). I think that's great 👌🏽

@elongl (Contributor) commented Mar 21, 2023

Is there an alternative to including it in my version control?
Preferably, I'd like a URL that returns the chart, which seems to be something helm_release supports but GitHub does not, as far as I can tell.

@potiuk (Member) commented Mar 21, 2023

No - not until we release it.

@potiuk (Member) commented Mar 21, 2023

OK, closing since it is confirmed.

@potiuk potiuk closed this as completed Mar 21, 2023
@potiuk potiuk added this to the Airflow 2.5.3 milestone Mar 21, 2023
@potiuk (Member) commented Mar 21, 2023

cc: @jedcunningham @ephraimbuddy @pierrejeambrun -> FYI, this might be useful for deciding when to release the new chart.
