Please allow creating scheduled jobs via Kops #618

Closed
aliakhtar opened this issue Oct 8, 2016 · 19 comments
Labels: area/documentation, lifecycle/rotten
Comments

aliakhtar commented Oct 8, 2016

According to http://kubernetes.io/docs/user-guide/scheduled-jobs :

You need a working Kubernetes cluster at version >= 1.4, with batch/v2alpha1 API turned on by passing --runtime-config=batch/v2alpha1 while bringing up the API server (see Turn on or off an API version for your cluster for more).

Please provide a way to do this through kops on AWS.
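For reference, the kind of object this would enable is a batch/v2alpha1 ScheduledJob. A minimal sketch, following the Kubernetes 1.4 docs (the name, schedule, and image are placeholder values):

apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"    # run once a minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["/bin/sh", "-c", "date; echo Hello"]
          restartPolicy: OnFailure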

justinsb (Member) commented Oct 9, 2016

I think we'll have to expose the runtime-config flag. I actually thought I had, but I exposed it as a map, and it looks like this isn't a map option. I'll take a look.

But would it be better to also let you specify the specific functionality you want enabled? So you could say "scheduledJobs: true" and you would get batch/v2alpha1 enabled, and later, when scheduled jobs go GA, we would stop adding the flag.

@justinsb (Member)

So at least on the first part, this should work if you build from master.

kops edit cluster, and add this to the spec:

  kubeAPIServer:
    runtimeConfig:
      batch/v2alpha1: true

If you run kops edit cluster again just to check, it should look like this:

...
  etcdClusters:
  - etcdMembers:
    - name: us-east-1b
      zone: us-east-1b
    name: main
  - etcdMembers:
    - name: us-east-1b
      zone: us-east-1b
    name: events
  kubeAPIServer:
    runtimeConfig:
      batch/v2alpha1: "true"
  kubernetesVersion: v1.4.0
...

Then you'll have to force a re-read of the configuration. The easiest way is probably to do a rolling update of the whole cluster, though the change detection is a bit wonky here. (You actually only need to terminate the master instance, if you'd prefer to do that.)

kops rolling-update cluster --force --yes

When it comes back, if you want to verify, you can do:

kubectl get pods --all-namespaces | grep kube-apiserver
kubectl describe --namespace=kube-system pod kube-apiserver-ip-172-20-85-46.ec2.internal | grep runtime

And you should see --runtime-config=batch/v2alpha1=true

Then I was able to do:
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/user-guide/sj.yaml

Note that your client kubectl version must be 1.4 (I think).
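To check that the scheduled job is actually firing, something like the following should work (assuming the example in sj.yaml is named hello):

kubectl get scheduledjobs
kubectl get jobs --watch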

@offlinehacker

Thanks!

@chrislovecnm (Contributor)

Moving this to documentation.

@brunoalano

You should check issue #746 too.

shrugs (Contributor) commented Feb 20, 2017

This is no longer an issue on master, I believe. In summary, after adding the additional properties (note the quotes around "true")

  kubeAPIServer:
    runtimeConfig:
      "batch/v2alpha1": "true"

you must do a forced rolling update (kops rolling-update cluster {cluster} --force --yes), since kops doesn't yet have a way to determine whether or not the process on the master is running with the correct arguments.

@chrislovecnm (Contributor)

Closing

@djuretic

In my case (on AWS), after modifying the cluster I was getting this error:

Cluster.Spec.KubeAPIServer.CloudProvider: Invalid value: "": Did not match cluster CloudProvider

Solution: Add cloudProvider: aws, as shown below:

kubeAPIServer:
  cloudProvider: aws
  runtimeConfig:
    batch/v2alpha1: "true"

@chrislovecnm (Contributor)

Reopening since we need to document this

chrislovecnm reopened this May 22, 2017
astanciu commented Sep 8, 2017

This seems awfully invasive (recreating the entire cluster, including the EC2 instances) just to pass an extra parameter to the API server... :/

chrislovecnm (Contributor) commented Sep 10, 2017

@astanciu you do not have to recreate the entire cluster; you only need to roll the masters. kops is designed to treat the cluster's EC2 instances as immutable. As we get more support for bare metal that will change somewhat, but not much. See the rolling-update options for targeting a specific instance group.
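A minimal sketch of rolling only the masters, assuming a single-master cluster whose master instance group is named master-us-east-1b (kops get instancegroups lists the actual names):

kops get instancegroups
kops rolling-update cluster --instance-group master-us-east-1b --force --yes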

Globegitter (Contributor) commented Sep 11, 2017

Just as an update: running kops 1.7.0 and adding the CronJob support as mentioned above, I did not need to add the cloudProvider: aws line and it still worked for me (also running on AWS, of course). Good to know about only needing to recreate the masters. For quick reference, this is what one has to run, depending on their master instance groups:

kops rolling-update cluster --force --yes --instance-group master-eu-central-1a,master-eu-central-1b,master-eu-central-1c

@cordoval (Contributor)

#618 (comment) is the right answer

@arun-gupta (Contributor)

If you are using a kubectl 1.8 CLI with a 1.7.x cluster, then the following command will create the CronJob:

kubectl create -f templates/cronjob.yaml --validate=false

Otherwise you'll get the error:

error: error validating "templates/cronjob.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"batch", Version:"v2alpha1", Kind:"CronJob"}; if you choose to ignore these errors, turn validation off with --validate=false
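One way to check which batch API versions the server actually serves (and therefore whether --validate=false is needed at all) is:

kubectl api-versions | grep batch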

@chrislovecnm (Contributor)

Yup. File an issue with #sig-cli

@arun-gupta (Contributor)

I don't think it's a bug, since 1.8 expects batch/v1beta1. --validate=false allows forcing the API version against a 1.7 server.
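For anyone reading this later: once CronJob reached batch/v1beta1 (enabled by default from Kubernetes 1.8), the runtime-config flag should no longer be needed, and a minimal manifest looks roughly like this sketch (name, schedule, and image are placeholders):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["/bin/sh", "-c", "date; echo Hello"]
          restartPolicy: OnFailure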

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jan 23, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 22, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
