
[ISSUE] Unable to specify single-node job compute - kind = CLASSIC_PREVIEW not supported in job compute #881

Open
kylebeni opened this issue Jan 30, 2025 · 0 comments
Labels
Triaged The issue has been reviewed. Issues without a “Triaged” label require triage/review.

Description
Hi, I am trying to use the Databricks SDK (version 0.42.0) to create single-node job compute.

According to the docs, you do this with a combination of is_single_node and kind in your ClusterSpec. However, when I pass is_single_node=True and kind=Kind.CLASSIC_PREVIEW (the Kind options are not well documented, by the way; there appears to be only one), the API returns:

databricks.sdk.errors.platform.InvalidParameterValue: Invalid compute kind CLASSIC_PREVIEW not supported in job compute.

So if CLASSIC_PREVIEW is not supported in job compute, and that value must be passed in order to use is_single_node, then single-node compute appears not to be supported in the Jobs API at this point.

Reproduction

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs
from databricks.sdk.service.compute import (
    ClusterSpec,
    DataSecurityMode,
    RuntimeEngine,
    Kind,
)

import logging

logging.basicConfig(level=logging.DEBUG)

w = WorkspaceClient()

TASKS = [
    jobs.Task(
        task_key="dummytask",
        condition_task=jobs.ConditionTask(
            left=1,
            right=1,
            op=jobs.ConditionTaskOp.EQUAL_TO,
        ),
    )
]

# Job cluster specification
JOB_CLUSTERS = [
    jobs.JobCluster(
        job_cluster_key="mycluster",
        new_cluster=ClusterSpec(
            instance_pool_id="redacted",
            policy_id="redacted",
            spark_version="14.3.x-scala2.12",
            is_single_node=True,
            kind=Kind.CLASSIC_PREVIEW,
            use_ml_runtime=True,
            runtime_engine=RuntimeEngine.STANDARD,
            data_security_mode=DataSecurityMode.SINGLE_USER,
        ),
    ),
]

response = w.jobs.create(
    job_clusters=JOB_CLUSTERS, tasks=TASKS,
)

Expected behavior
The job gets created with single-node compute.

Is it a regression?
Not that I know of... this appears to be a newer feature.

Debug Logs

DEBUG:databricks.sdk:Loaded from environment
DEBUG:databricks.sdk:Attempting to configure auth: pat
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): adb-redacted.azuredatabricks.net:443
DEBUG:urllib3.connectionpool:https://adb-redacted.azuredatabricks.net:443 "POST /api/2.1/jobs/create HTTP/1.1" 400 None
DEBUG:databricks.sdk:POST /api/2.1/jobs/create
> {
>   "job_clusters": [
>     {
>       "job_cluster_key": "mycluster",
>       "new_cluster": {
>         "data_security_mode": "SINGLE_USER",
>         "instance_pool_id": "redacted",
>         "is_single_node": true,
>         "kind": "CLASSIC_PREVIEW",
>         "policy_id": "redacted",
>         "runtime_engine": "STANDARD",
>         "spark_version": "14.3.x-scala2.12",
>         "use_ml_runtime": true
>       }
>     }
>   ],
>   "tasks": [
>     {
>       "condition_task": {
>         "left": 1,
>         "op": "EQUAL_TO",
>         "right": 1
>       },
>       "task_key": "dummytask"
>     }
>   ]
> }
< 400 Bad Request
< {
<   "error_code": "INVALID_PARAMETER_VALUE",
<   "message": "Invalid compute kind CLASSIC_PREVIEW not supported in job compute."
< }
Traceback (most recent call last):
  File "debug_single_node.py", line 44, in <module>
    response = w.jobs.create(
  File ".venv/lib/python3.10/site-packages/databricks/sdk/service/jobs.py", line 7352, in create
    res = self._api.do('POST', '/api/2.1/jobs/create', body=body, headers=headers)
  File ".venv/lib/python3.10/site-packages/databricks/sdk/core.py", line 77, in do
    return self._api_client.do(method=method,
  File ".venv/lib/python3.10/site-packages/databricks/sdk/_base_client.py", line 186, in do
    response = call(method,
  File ".venv/lib/python3.10/site-packages/databricks/sdk/retries.py", line 55, in wrapper
    raise err
  File ".venv/lib/python3.10/site-packages/databricks/sdk/retries.py", line 34, in wrapper
    return func(*args, **kwargs)
  File ".venv/lib/python3.10/site-packages/databricks/sdk/_base_client.py", line 278, in _perform
    raise error from None
databricks.sdk.errors.platform.InvalidParameterValue: Invalid compute kind CLASSIC_PREVIEW not supported in job compute.

Other Information

  • OS: macOS
  • Version: databricks-sdk==0.42.0

Additional context
If I remove the kind argument, the error becomes:
databricks.sdk.errors.platform.InvalidParameterValue: Cluster validation error: is_single_node is not allowed with unspecified kind.
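As a possible workaround (untested against this workspace), the pre-`kind` way of requesting a single-node cluster was `num_workers=0` plus a Spark conf and a resource tag, per Databricks' legacy single-node cluster documentation. A minimal sketch of that payload, with the field values being assumptions carried over from the legacy docs:

```python
# Hypothetical workaround: the legacy single-node configuration,
# which does not use `kind` or `is_single_node` at all.
# Field values follow Databricks' legacy single-node cluster docs.
legacy_single_node = {
    "spark_version": "14.3.x-scala2.12",
    "num_workers": 0,  # zero workers => driver-only (single-node) cluster
    "spark_conf": {
        "spark.databricks.cluster.profile": "singleNode",
        "spark.master": "local[*]",
    },
    "custom_tags": {"ResourceClass": "SingleNode"},
}

print(legacy_single_node["num_workers"])
```

This dict could then be passed as the `new_cluster` of a `jobs.JobCluster` (e.g. via `ClusterSpec.from_dict(legacy_single_node)`) or sent directly in the `/api/2.1/jobs/create` body; whether a cluster policy or instance pool interacts badly with these settings is a separate question.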

@parthban-db parthban-db added the Triaged The issue has been reviewed. Issues without a “Triaged” label require triage/review. label Jan 31, 2025