
Configuration setting so all connection errors are retried #193

Closed
gregingenii opened this issue Jul 22, 2021 · 0 comments · Fixed by #194
Labels
type:enhancement New feature or request

Comments

@gregingenii
Contributor

Describe the feature

We're using dbt-spark to run tests on Databricks, and one of our tables has over 150 columns with tests attached. On each test run at least a few tests fail at random because of a connection issue that appears to be transient.
The connector only retries a connection if it considers the error 'retryable', which usually means the cluster is still starting up. I would like to add a configuration entry that overrides this behaviour and retries every connection error: a boolean in the profiles.yml definition called 'retry_all'.
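
For illustration, a minimal sketch of what that could look like in profiles.yml, assuming the existing Spark/Databricks profile layout (the host and other values are placeholders, and `connect_retries` / `connect_timeout` refer to the adapter's existing retry settings):

```yaml
my_profile:
  target: dev
  outputs:
    dev:
      type: spark
      method: http
      host: example.cloud.databricks.com
      # ...other existing connection settings...
      connect_retries: 3   # existing: how many times to retry the connection
      connect_timeout: 5   # existing: seconds to wait between attempts
      retry_all: true      # proposed: retry on every connection error,
                           # not only the ones classified as 'retryable'
```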

Who will this benefit?

Anyone who is hitting this random failure issue. dbt is unusable for us if we can't rely on the tests.

Are you interested in contributing this feature?

I already have a fix that I'm using, so I can open a PR to add it. I'd like to hear whether everyone thinks this is a good way to resolve the issue, and I can adjust accordingly.
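
To make the intent concrete, here is a minimal Python sketch of the kind of retry loop such a flag would control. This is not the dbt-spark implementation or the fix in my PR, just an illustration: `open_connection`, `is_retryable`, and `ConnectionFailedError` are hypothetical names, with `connect_retries` and `connect_timeout` standing in for the existing profile settings.

```python
import time


class ConnectionFailedError(Exception):
    """Placeholder for a transient connection failure."""


def open_connection(connect, is_retryable, connect_retries=3,
                    connect_timeout=5, retry_all=False):
    """Call `connect()` and retry on failure.

    By default only exceptions that `is_retryable` recognises are retried;
    with retry_all=True every exception is retried until the retry budget
    (`connect_retries`) is exhausted, sleeping `connect_timeout` seconds
    between attempts.
    """
    attempt = 0
    while True:
        try:
            return connect()
        except Exception as exc:
            if not (retry_all or is_retryable(exc)) or attempt >= connect_retries:
                raise
            attempt += 1
            time.sleep(connect_timeout)


if __name__ == "__main__":
    # Demo: the error is not classified as retryable, but retry_all=True
    # keeps retrying, so the third attempt succeeds.
    attempts = {"n": 0}

    def flaky_connect():
        attempts["n"] += 1
        if attempts["n"] < 3:
            raise ConnectionFailedError("transient failure")
        return "connected"

    print(open_connection(flaky_connect, is_retryable=lambda exc: False,
                          connect_retries=5, connect_timeout=0, retry_all=True))
```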
