
CronJob Documentation Fails to Create Cron Job #2325

Closed
1 of 2 tasks
pluttrell opened this issue Jan 24, 2017 · 23 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@pluttrell

pluttrell commented Jan 24, 2017

This is a...

  • Feature Request
  • Bug Report

Problem:

The https://kubernetes.io/docs/user-guide/cron-jobs/ guide fails at the first step, which is creating the actual CronJob.

Here is the error that I'm seeing:

$ kubectl create -f cronjob.yaml
error: error validating "cronjob.yaml": error validating data: couldn't find type: v2alpha1.CronJob; if you choose to ignore these errors, turn validation off with --validate=false

For reference here is my exact cronjob.yaml, which should match exactly what is listed in the example on said page.

apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Proposed Solution:

I have tried to disable validation as referenced in the above error output, but that doesn't fix the problem. Here's that message:

$ kubectl create --validate=false -f cronjob.yaml
error: unable to recognize "cronjob.yaml": no matches for batch/, Kind=CronJob

There is a prerequisite in the doc, which states that batch/v2alpha1 must be explicitly enabled. I believe that it is as here's my output of kubectl api-versions:

$ kubectl api-versions
apps/v1alpha1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1beta1
autoscaling/v1
batch/v1
batch/v2alpha1
certificates.k8s.io/v1alpha1
extensions/v1beta1
policy/v1alpha1
rbac.authorization.k8s.io/v1alpha1
storage.k8s.io/v1beta1
v1

This might be caused by this issue, but I am not sure so I posted this bug report.

Page to Update:
https://kubernetes.io/docs/user-guide/cron-jobs/

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T07:30:54Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.7", GitCommit:"92b4f971662de9d8770f8dcd2ee01ec226a6f6c0", GitTreeState:"clean", BuildDate:"2016-12-10T04:43:42Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
@anshumanbh

I have been having the same issue for quite some time now...

I am trying to run a cronjob on a Kubernetes cluster on Google Container Engine.

If I turn the alpha features on while creating a cluster, the Node version that spins up is 1.4.8 and if I try to upgrade that to 1.5.2 (since running a cronjob apparently requires >=1.5.0), I get the error:

(gcloud.container.clusters.upgrade) ResponseError: code=400, message=node upgrade is not allowed for cluster with enable_kubernetes_alpha = true.

So it looks like CronJob needs the alpha features turned on AND version >=1.5.0. It is not possible to have both on Google Container Engine at the same time, so it looks like a Catch-22 situation.

When I create the cluster without the alpha features turned on, the node version shows 1.5.2 but then I can't create the cronjob because I believe the batch/v2alpha1 api does not even exist.

Also, the documentation for scheduled jobs seems to have been removed.

So, I have a very simple question:
How can I run a scheduled/cron job on a Kubernetes cluster?

@weaseal

weaseal commented Feb 22, 2017

@pluttrell This is working for me on Kubernetes 1.5.3. Double-check your --runtime-config=batch/v2alpha1 for typos, and verify the apiserver is really running with that flag via ps.
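A quick way to double-check both halves of @weaseal's suggestion. This is a hedged sketch: the ps path assumes the kube-apiserver command line is visible on the master node, and the captured command line below is a made-up stand-in for what you would see there.

```shell
# Check that the API group is actually registered with the server:
#   kubectl api-versions | grep batch/v2alpha1
#
# Check that the apiserver process really carries the flag (run on the master):
#   ps aux | grep kube-apiserver | grep -o -- '--runtime-config=[^ ]*'
#
# The same string test, run here against a captured command line:
cmdline='--runtime-config=batch/v2alpha1=true --allow-privileged=true'
if printf '%s\n' "$cmdline" | grep -q 'batch/v2alpha1=true'; then
  echo "runtime-config flag present"
else
  echo "runtime-config flag missing or mistyped"
fi
```

The point of the exact-string grep is that a typo like `batch/v2alpha` or `batch/v2alpha1=false` would silently leave the API group unregistered, producing exactly the "no matches for batch/, Kind=CronJob" error above.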

@snoby

snoby commented Apr 3, 2017

Indeed, I'm facing the same problem as @pluttrell.
I'm running on AWS, installed with kops, Kubernetes version 1.5.4.

@r4j4h
Contributor

r4j4h commented Apr 21, 2017

I think this may come from a kubectl/kubernetes version mismatch.

I had a similar breakage on 1.4.7 when using kubectl v1.5.2:

With the expected type "ScheduledJob" it gave

Error from server (BadRequest): error when creating "cronjob.yml": CronJob in version "v2alpha1" cannot be handled as a ScheduledJob: no kind "CronJob" is registered for version "batch/v2alpha1"

And with type "CronJob" it gave:

error: error validating "cronjob.yml": error validating data: couldn't find type: v2alpha1.CronJob; if you choose to ignore these errors, turn validation off with --validate=false

It ended up being the first problem I've encountered from using mismatching kubectl versions.

By using the same kubectl version as the cluster it worked fine. Try using kubectl 1.4.7, since your cluster is 1.4.7?
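The skew @r4j4h describes can be checked mechanically. A minimal sketch, with the version strings hardcoded from this thread (in practice you would take them from `kubectl version` output). Note that even a one-minor-version skew, which is nominally supported for kubectl, was enough to break validation of the alpha batch/v2alpha1 resources here.

```shell
# Extract the minor version from a GitVersion string like "v1.5.2".
minor() { printf '%s\n' "$1" | sed -E 's/^v?[0-9]+\.([0-9]+).*/\1/'; }

client="v1.5.2"   # kubectl client GitVersion (from this thread)
server="v1.4.7"   # apiserver GitVersion (from this thread)

cm=$(minor "$client")
sm=$(minor "$server")
if [ "$cm" -gt "$sm" ]; then skew=$((cm - sm)); else skew=$((sm - cm)); fi

echo "client minor: $cm, server minor: $sm, skew: $skew"
```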

@riturajtiwari

I have the same problem with 1.6.2:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

I get the following error:

$ kubectl create -f worker-job.yml
error: error validating "worker-job.yml": error validating data: couldn't find type: v2alpha1.CronJob; if you choose to ignore these errors, turn validation off with --validate=false

@riturajtiwari

Well, it turns out this would have required turning on alpha features on GCP, which means my cluster will get auto-deleted in 30 days. Sucks.

@innovia

innovia commented May 18, 2017

If you're running kops, see kubernetes/kops#618.

@keithlee96

keithlee96 commented Jul 21, 2017

I still have the same issue. The only difference is that I did not have to set any flags to get the second error message.

 $ kubectl create -f ./cronjob.yaml 
error: unable to recognize "./cronjob.yaml": no matches for /, Kind=ScheduledJob 
 $ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.1", GitCommit:"1dc5c66f5dd61da08412a74221ecc79208c2165b", GitTreeState:"clean", BuildDate:"2017-07-14T02:00:46Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.1", GitCommit:"1dc5c66f5dd61da08412a74221ecc79208c2165b", GitTreeState:"clean", BuildDate:"2017-07-14T01:48:01Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

@taestone

I have the same issue with an Elasticsearch curator CronJob.

$ kubectl create -f elastic/es-curator.yaml
error: error validating "elastic/es-curator.yaml": error validating data: couldn't find type: v2alpha1.CronJob; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl version

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.7", GitCommit:"095136c3078ccf887b9034b7ce598a0a1faff769", GitTreeState:"clean", BuildDate:"2017-07-05T16:40:42Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}

@anton-opsguru

I have the same issue on K8s 1.6.7.

@EwanValentine

Are cron tasks usable yet? I can't get this feature working :S

@austinlyons

You still have to be in alpha mode in GCP to use cron jobs, even though they seem to be out of K8s alpha as of 1.8.

@austinlyons

I was wrong, I just needed to update my api version in cronjob.yml:

old:
apiVersion: batch/v2alpha1

new:
apiVersion: batch/v1beta1
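Putting that fix together, the full manifest from the top of this thread with only the apiVersion line changed would look like this (for clusters on 1.8+, where CronJob is served from batch/v1beta1):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```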

@mattdodge

mattdodge commented Feb 28, 2018

The resolution to this seems to depend on which version of kubernetes you are running on your GKE cluster (run kubectl version).

We have a cluster running 1.7.12-gke.1 and there is no batch/v1beta1 or batch/v2alpha1 APIs registered:

$ kubectl api-versions | grep batch
batch/v1

However, in a cluster running 1.8.7-gke.1 I do see the beta batch API

$ kubectl api-versions | grep batch
batch/v1
batch/v1beta1

So the "fix" for us was to upgrade the cluster to 1.8 and then use the beta API version. This is also confirmed by the GKE documentation on CronJob that says:

Note: To use CronJobs, your cluster must be running Kubernetes version 1.8.x or later.
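Building on @mattdodge's observation, here is a small sketch that picks a workable apiVersion from whatever the cluster actually registers. The version list is hardcoded from his 1.8.7-gke.1 output; in practice you would pipe `kubectl api-versions` in instead.

```shell
# Registered API groups, as captured from `kubectl api-versions` on 1.8.7-gke.1:
versions='batch/v1
batch/v1beta1'

# Prefer the beta group (1.8+); fall back to the alpha group (1.5-1.7,
# only if explicitly enabled via --runtime-config).
if printf '%s\n' "$versions" | grep -qx 'batch/v1beta1'; then
  echo "use apiVersion: batch/v1beta1"
elif printf '%s\n' "$versions" | grep -qx 'batch/v2alpha1'; then
  echo "use apiVersion: batch/v2alpha1"
else
  echo "no CronJob-capable batch API group registered"
fi
```

On the 1.7.12-gke.1 cluster above, where only `batch/v1` is registered, this would fall through to the final branch, which matches the errors reported in this thread.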

@ChimeraCoder
Contributor

For anyone else who stumbles upon this issue: I ran into issues even with batch/v1beta1 on 1.9.6-gke.1. The fix was to upgrade my local kubectl client from 1.8.3 to 1.9.2.

@Gisleburt

Gisleburt commented Jun 1, 2018

Not sure if GKE removed this since the comments above (they do warn in their documentation that they might):

Client Version: v1.10.2
Server Version: 1.9.7-gke.1

but

CronJob in version "v1beta1" cannot be handled as a CronJob: v1beta1.CronJob

I also tried switching my client to 1.9.2 as suggested above but to no avail.

EDIT

This actually does work on GKE; the error is just extremely unhelpful. In our case we were attaching a volume but had forgotten to specify where the volume was coming from.

@huangjiasingle

In v1.10.3 it also has the same problem.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 11, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 11, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@damyl4sure
Contributor

Making use of apiVersion: "batch/v1beta1" worked!

@Bashorun97

Making use of apiVersion: "batch/v1beta1" worked!

Is this suitable to use in production?
