Please allow creating scheduled jobs via Kops #618
I think we'll have to expose the runtime-config flag. I actually thought I had, but I exposed it as a map, and this doesn't look like a map option. I'll take a look. But would it be better to also let you specify the specific functionality you want enabled? So you could say "scheduledJobs: true" and you would get batch/v2alpha1 enabled, but then, when scheduled jobs went GA, we would stop adding the flag? |
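For reference, a sketch of what the map-style cluster spec fragment discussed here could look like. This assumes the `kubeAPIServer.runtimeConfig` map in the kops ClusterSpec (field names are an assumption based on later comments in this thread, not a quote from any particular kops release):

```yaml
# Hypothetical fragment of `kops edit cluster` output.
# batch/v2alpha1 is the API group that gated ScheduledJob/CronJob at the time.
kind: Cluster
spec:
  kubeAPIServer:
    runtimeConfig:
      batch/v2alpha1: "true"   # serve the alpha batch API group
```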
So at least on the first part, this should work if you build from master.
If you
Then you'll have to force a re-read of the configuration. Easiest way is probably to do a rolling-update of the whole cluster, but the detection is a bit wonky here. (You actually only need to terminate the master instance if you'd prefer to do that)
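The forced re-read described above can be sketched with the kops CLI. This is a sketch, not an exact transcript from the thread; `--force` replaces instances even when kops detects no changes (the detection being "a bit wonky" here), and the instance-group name below is a placeholder:

```shell
# Roll the whole cluster, replacing instances even if no change is detected:
kops rolling-update cluster --yes --force

# Or touch only the control plane (instance-group name is illustrative):
kops rolling-update cluster --yes --force --instance-group master-us-east-1a
```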
When it comes back, if you want to, you can do
And you should see Then I was able to do: Note that your |
Thanks! |
Moving this to documentation |
Should check the issue #746 too |
This is no longer an issue on master, I believe. In summary, after adding the additional properties (note the quotes around
you must do a forced rolling update ( |
Closing |
In my case (with AWS) after modifying the cluster I was getting this error:
Solution: Add
|
Reopening since we need to document this |
this seems awfully invasive (recreating the entire cluster, including ec2 instances), just to pass an extra parameter to the apiserver... :/ |
@astanciu you do not have to recreate the entire cluster. You only need to roll the masters. |
Just as an update, running on kops 1.7.0 and adding the cronjob support as mentioned above, I did not need to add the
|
#618 (comment) is the right answer |
If you are using a kubectl 1.8 CLI against a 1.7.x cluster, the following command will create the CronJob:
Otherwise you'll get the error:
|
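For anyone hitting this version skew, the commands elided above likely involved pinning the generator. A sketch, assuming the kubectl behavior of that era, where `--schedule` defaulted to a `batch/v1beta1` generator that a 1.7 apiserver does not serve (the job name, image, and schedule are placeholders):

```shell
# Hypothetical example: force kubectl 1.8 to use the v2alpha1 CronJob
# generator so the object is created in the API group a 1.7 cluster serves.
kubectl run hello \
  --schedule="*/1 * * * *" \
  --generator=cronjob/v2alpha1 \
  --image=busybox \
  --restart=OnFailure \
  -- /bin/sh -c "date; echo hello"
```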
Yup. File an issue with #sig-cli |
I don't think it's a bug, since 1.8 expects |
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an If this issue is safe to close now, please do so with Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
According to http://kubernetes.io/docs/user-guide/scheduled-jobs :
Please provide a way to do that through kops on AWS.
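Once batch/v2alpha1 is enabled on the apiserver, a minimal manifest of the kind the linked docs describe might look like this (names, image, and schedule are illustrative; on Kubernetes 1.4 the kind was `ScheduledJob`, renamed `CronJob` in 1.5):

```yaml
# Illustrative manifest for the alpha batch API group.
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo hello"]
```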