# Support account-level provider with workspace-level resources #3188
base: main
Conversation
common/resource.go
Outdated
```diff
@@ -87,7 +87,10 @@ func (r Resource) ToResource() *schema.Resource {
 	if r.Update != nil {
 		update = func(ctx context.Context, d *schema.ResourceData,
 			m any) diag.Diagnostics {
-			c := m.(*DatabricksClient)
+			c, err := m.(*DatabricksClient).InConfiguredWorkspace(ctx, d)
```
it's not enough, you also need to prepend `workspace_id` to `d.Id()` on the read side and on the write side, e.g. `fmt.Sprintf("%d:%s", d.Get("workspace_id"), d.Id())`. Otherwise there's too much risk involved.
This field is marked as `ForceNew`, so changes to the workspace ID would not trigger an update, only delete/recreate. However, we do need to store the old ID somewhere in the state to be able to appropriately move from one workspace to another (as there is no way to get the "old" workspace ID during the delete call).
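The ID-prefixing scheme suggested above could be sketched roughly as below. This is a minimal, self-contained illustration; the helper names `makeWorkspaceID` and `parseWorkspaceID` are hypothetical, not helpers from the provider.

```go
// Sketch of composing "<workspace_id>:<resource_id>" IDs so that state
// entries are unambiguous across workspaces, and splitting them back out
// on read. Helper names are illustrative only.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// makeWorkspaceID composes a workspace-qualified resource ID.
func makeWorkspaceID(workspaceID int64, id string) string {
	return fmt.Sprintf("%d:%s", workspaceID, id)
}

// parseWorkspaceID splits a composed ID back into workspace ID and
// resource ID. SplitN keeps any further colons inside the resource ID.
func parseWorkspaceID(composed string) (int64, string, error) {
	parts := strings.SplitN(composed, ":", 2)
	if len(parts) != 2 {
		return 0, "", fmt.Errorf("malformed id: %q", composed)
	}
	wsID, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, "", fmt.Errorf("bad workspace id in %q: %w", composed, err)
	}
	return wsID, parts[1], nil
}

func main() {
	composed := makeWorkspaceID(123456789, "abc-123")
	fmt.Println(composed) // 123456789:abc-123
	wsID, id, _ := parseWorkspaceID(composed)
	fmt.Println(wsID, id) // 123456789 abc-123
}
```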
clusters/data_spark_version.go
Outdated
```diff
@@ -131,15 +131,20 @@ func DataSourceSparkVersion() *schema.Resource {
 	s["photon"].Deprecated = "Specify runtime_engine=\"PHOTON\" in the cluster configuration"
 	s["graviton"].Deprecated = "Not required anymore - it's automatically enabled on the Graviton-based node types"
+	common.AddWorkspaceIdField(s)
```
this won't scale, add it on the level of `common.Resource` and `common.DataSource`
I was thinking about how to do this. We don't know which resources are workspace-only or also available at the account level. We need some explicit mapping to make sure this isn't forgotten about. I will probably add this at the provider level to make sure there is an explicit opt-in/out for each resource type.
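The explicit opt-in/out discussed above could be sketched as follows. The types here (`Schema`, `Resource`, `AccountOnly`, `AddWorkspaceIDField`) are simplified stand-ins invented for illustration, not the provider's actual `common.Resource` or Terraform SDK types.

```go
// Illustrative sketch: add workspace_id centrally, with an explicit
// per-resource flag so account-level resources are skipped. All types
// are local stand-ins, not the real provider or Terraform SDK types.
package main

import "fmt"

// Schema is a minimal stand-in for *schema.Schema.
type Schema struct {
	Type     string
	Optional bool
	ForceNew bool
}

// Resource is a minimal stand-in for common.Resource.
type Resource struct {
	Name        string
	Schema      map[string]*Schema
	AccountOnly bool // explicit opt-out: account-level resources get no workspace_id
}

// AddWorkspaceIDField mutates the schema of workspace-level resources only.
func AddWorkspaceIDField(r *Resource) {
	if r.AccountOnly {
		return
	}
	if _, exists := r.Schema["workspace_id"]; exists {
		return // some resources already define their own workspace_id
	}
	r.Schema["workspace_id"] = &Schema{Type: "int", Optional: true, ForceNew: true}
}

func main() {
	cluster := &Resource{Name: "databricks_cluster", Schema: map[string]*Schema{}}
	AddWorkspaceIDField(cluster)
	fmt.Println(cluster.Schema["workspace_id"] != nil) // true
}
```

Registering this once where all resources are declared would address the reviewer's scaling concern while keeping the mapping explicit.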
## Changes

This PR introduces `func (a *AccountClient) GetWorkspaceClient(w provisioning.Workspace) (*WorkspaceClient, error)` to make it easy to get a workspace client for a given workspace. The workspace client uses a copy of the config from the account client, reusing the underlying authenticator as well. This allows all workspace clients to use the same underlying token when authenticating with OAuth, decreasing the number of unnecessary token refreshes. Additionally, it allows workspace clients to work with U2M OAuth using the account-level token for customers with Unified Login.

Use case: databricks/terraform-provider-databricks#3188

## Tests

New example passes.

- [ ] `make test` passing
- [ ] `make fmt` applied
- [ ] relevant integration tests applied
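The config-copying pattern described above can be modeled in a few lines. This is a hypothetical, simplified sketch: `tokenSource`, `Config`, and the client types below are local stand-ins, not the real `databricks-sdk-go` API, and the host format is an assumption.

```go
// Simplified model of creating a workspace client from an account client:
// the config is shallow-copied, so the host can change while the shared
// token source (authenticator) pointer stays the same.
package main

import "fmt"

// tokenSource stands in for the SDK's cached OAuth authenticator.
type tokenSource struct{ refreshes int }

type Config struct {
	Host   string
	Source *tokenSource
}

type AccountClient struct{ Config Config }
type WorkspaceClient struct{ Config Config }

// GetWorkspaceClient copies the account config, pointing it at the
// workspace's deployment URL while reusing the shared token source.
func (a *AccountClient) GetWorkspaceClient(deploymentName string) (*WorkspaceClient, error) {
	cfg := a.Config // shallow copy: Source pointer is shared
	cfg.Host = fmt.Sprintf("https://%s.cloud.databricks.com", deploymentName)
	return &WorkspaceClient{Config: cfg}, nil
}

func main() {
	acct := &AccountClient{Config: Config{
		Host:   "https://accounts.cloud.databricks.com",
		Source: &tokenSource{},
	}}
	w, _ := acct.GetWorkspaceClient("my-workspace")
	fmt.Println(w.Config.Host)
	fmt.Println(w.Config.Source == acct.Config.Source) // shared authenticator
}
```

Because every workspace client shares one token source, an OAuth token refreshed for one workspace is reused by all of them, which is the stated goal of the SDK change.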
Codecov Report

Attention:

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #3188      +/-   ##
==========================================
- Coverage   83.57%   83.37%   -0.21%
==========================================
  Files         168      169       +1
  Lines       15021    15198     +177
==========================================
+ Hits        12554    12671     +117
- Misses       1729     1781      +52
- Partials      738      746       +8
```
```diff
@@ -252,44 +388,6 @@ func (c *DatabricksClient) FormatURL(strs ...string) string {
 	return strings.Join(data, "")
 }
 
-// ClientForHost creates a new DatabricksClient instance with the same auth parameters,
```
This is replaced by `InWorkspace()`.
This change would make (almost) all of the problems I'm facing using the Databricks provider go away. Is there any sort of timeline on which we might expect these changes?

Is this still being worked on? It would solve several issues we're currently experiencing with the provider, so it would be great to know if / when this is coming.
## Changes

Partially addresses #2610, #3018.

This change will add an optional `workspace_id` field to all workspace-level resources. When using an account-level provider, if the `workspace_id` is specified, a workspace client will be constructed reusing that account-level provider's configuration. This means that users of the TF provider will only need to define a single `provider` block per account and will no longer need to create separate provider blocks per resource. The main downside is that the workspace ID needs to be specified for every single resource. This is slightly worse than today, where users need to specify `depends_on` for all workspace resources that don't have any dependencies.

For example:
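The original example configuration did not survive extraction; a hypothetical sketch of what such a configuration could look like is below. The resource, attribute values, and provider arguments are illustrative assumptions; only the account-level provider plus per-resource `workspace_id` pattern is taken from the description.

```hcl
# Hypothetical sketch: one account-level provider block, with workspace_id
# selecting the target workspace on each workspace-level resource.
provider "databricks" {
  host       = "https://accounts.cloud.databricks.com"
  account_id = "<ACCOUNT_ID>"
}

resource "databricks_directory" "this" {
  workspace_id = "<WSID>"
  path         = "/Shared/example"
}
```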
will work with an account-level provider, given that the workspace with ID `<WSID>` belongs to the account.

## Supported resources

All workspace-level resources that do not have a `workspace_id` field will be supported for this customization. The only resources with a `workspace_id` field are `databricks_catalog_workspace_binding` and `databricks_metastore_assignment`. The latter is already supported at the account level, so you would be able to switch to using an account-level provider for this resource by importing it into your provider configuration.
## Migration

To migrate from the current provider to this mechanism:

- Add the `workspace_id` field to all resources managed at the workspace level.

## Tests

- [ ] `make test` run locally
- [ ] relevant change in `docs/` folder
- [ ] covered with integration tests in `internal/acceptance`