
ExtendedKustoClient: Some extents were not processed and we got an empty move result'1' Please open issue if you see this trace. At: https://github.com/Azure/azure-kusto-spark/issues #375

Open
liangchenmicrosoft opened this issue May 1, 2024 · 1 comment


@liangchenmicrosoft
Member

Describe the bug
We are using Synapse Spark to write data into a Kusto table from Python. When we enable the 'drop-tag' option via SparkIngestionProperties, we see the error message below in the Synapse Spark logs.

2024-05-01 22:16:56,492 INFO TokenLibrary [Timer-14474]: Obtained Access token from cache
2024-05-01 22:16:56,532 INFO Utilities$ [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Trying to determine if cluster type
2024-05-01 22:16:56,532 INFO TokenLibrary [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Attempting to get params from node config
2024-05-01 22:16:56,533 INFO TokenLibrary [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Call to get Access token
2024-05-01 22:16:56,533 INFO TokenLibrary [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Number of callers waiting for lock_token to access token service= 0
2024-05-01 22:16:56,533 INFO InMemoryCacheClient [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Token successfully fetched from in-memory cache
2024-05-01 22:16:56,533 INFO TokenLibrary [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Obtained Access token from cache
2024-05-01 22:16:56,580 FATAL KustoConnector [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: ExtendedKustoClient: Some extents were not processed and we got an empty move result'1' Please open issue if you see this trace. At: https://github.com/Azure/azure-kusto-spark/issues
2024-05-01 22:16:56,580 INFO Utilities$ [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Trying to determine if cluster type
2024-05-01 22:16:56,580 INFO TokenLibrary [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Attempting to get params from node config
2024-05-01 22:16:56,580 INFO TokenLibrary [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Call to get Access token
2024-05-01 22:16:56,580 INFO TokenLibrary [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Number of callers waiting for lock_token to access token service= 0
2024-05-01 22:16:56,581 INFO InMemoryCacheClient [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Token successfully fetched from in-memory cache
2024-05-01 22:16:56,581 INFO TokenLibrary [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Obtained Access token from cache
2024-05-01 22:16:56,626 INFO Utilities$ [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Trying to determine if cluster type
2024-05-01 22:16:56,627 INFO TokenLibrary [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Attempting to get params from node config
2024-05-01 22:16:56,627 INFO TokenLibrary [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Call to get Access token
2024-05-01 22:16:56,627 INFO TokenLibrary [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Number of callers waiting for lock_token to access token service= 0
2024-05-01 22:16:56,627 INFO InMemoryCacheClient [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Token successfully fetched from in-memory cache
2024-05-01 22:16:56,627 INFO TokenLibrary [Executor task launch worker for task 0.0 in stage 3.0 (TID 6)]: Obtained Access token from cache
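For context, a minimal sketch of the write path that triggers this error. The connector format name, the option keys (`kustoCluster`, `kustoDatabase`, `kustoTable`, `sparkIngestionPropertiesJson`), and the `dropByTags` field are assumptions based on the Kusto Spark connector's documented usage; the cluster, database, table, and tag values are placeholders.

```python
import json

# Hypothetical ingestion properties: tag the extents so they can later be
# dropped with `.drop extents <| ... where tags has "my-drop-tag"`.
ingestion_props = {
    "dropByTags": ["my-drop-tag"],  # placeholder tag value
}

(df.write
   .format("com.microsoft.kusto.spark.synapse.datasource")
   .option("kustoCluster", "<cluster-url>")
   .option("kustoDatabase", "<database>")
   .option("kustoTable", "<table>")
   # SparkIngestionProperties are passed to the connector as a JSON string.
   .option("sparkIngestionPropertiesJson", json.dumps(ingestion_props))
   .mode("Append")
   .save())
```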

To Reproduce
Steps to reproduce the behavior:

Expected behavior

This error should not occur when the drop-tag property is enabled in SparkIngestionProperties.


@ag-ramachandran
Contributor

Hello @liangchenmicrosoft
I will have a look at this. To troubleshoot, I would need the cluster URL, the Spark runtime version, and the options you are using. You can message those to my IM handle ramacg at ms.

While you do that, please also try

.option("writeMode","Queued")

and test with that as well. This is a relatively new option added to the connector to overcome some limitations. Please use it and let us know if it works.
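The suggestion above can be sketched as follows. This is a hedged example, not a confirmed fix: the format name and option keys other than `writeMode` are assumptions carried over from typical Kusto Spark connector usage, and all values are placeholders.

```python
import json

# Same hypothetical drop-tag ingestion properties as in the bug report.
ingestion_props = {"dropByTags": ["my-drop-tag"]}

(df.write
   .format("com.microsoft.kusto.spark.synapse.datasource")
   .option("kustoCluster", "<cluster-url>")
   .option("kustoDatabase", "<database>")
   .option("kustoTable", "<table>")
   .option("sparkIngestionPropertiesJson", json.dumps(ingestion_props))
   # The maintainer's suggested workaround: route the write through the
   # queued ingestion path instead of the default.
   .option("writeMode", "Queued")
   .mode("Append")
   .save())
```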
