Write a RemoteConfig instance #772

Closed · Tracked by #591
jeromy-cannon opened this issue Oct 30, 2024 · 1 comment

Labels: Internal Requirement, P0 (an issue impacting production environments or impacting multiple releases or multiple individuals)

jeromy-cannon (Contributor) commented Oct 30, 2024

Feature: write a RemoteConfig instance

  The RemoteConfig instance will be persisted as a Kubernetes ConfigMap. The instance is first converted to JSON that matches the associated JSON schema, the JSON is then serialized to YAML, and the YAML is stored in the `solo-remote-config` ConfigMap within the deployment's namespace for each cluster contained in the RemoteConfig cluster-to-namespace mapping.
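For illustration only, here is a minimal sketch of that write path. It assumes the `yaml` package and the pre-1.0 positional API of `@kubernetes/client-node`; the `RemoteConfig` interface, `toObject()` method, and ConfigMap data key are hypothetical and not taken from the Solo codebase.

```ts
import * as k8s from '@kubernetes/client-node';
import * as yaml from 'yaml';

// Hypothetical shape of a RemoteConfig; the real class lives in the Solo codebase.
interface RemoteConfig {
  clusters: Record<string, string>; // cluster name -> namespace mapping (assumption)
  toObject(): object;               // plain-object form matching the JSON schema (assumption)
}

const CONFIG_MAP_NAME = 'solo-remote-config';

// Serialize the RemoteConfig to YAML and write it as a ConfigMap into the
// namespace of every cluster listed in its cluster-to-namespace mapping.
async function writeRemoteConfig(rc: RemoteConfig, kubeConfig: k8s.KubeConfig): Promise<void> {
  // JSON round-trip keeps the payload aligned with the JSON schema before YAML serialization.
  const yamlBody = yaml.stringify(JSON.parse(JSON.stringify(rc.toObject())));

  for (const [cluster, namespace] of Object.entries(rc.clusters)) {
    // Assumes each cluster name maps to a kube context of the same name.
    kubeConfig.setCurrentContext(cluster);
    const api = kubeConfig.makeApiClient(k8s.CoreV1Api);

    await api.createNamespacedConfigMap(namespace, {
      metadata: {name: CONFIG_MAP_NAME, namespace},
      data: {'remote-config-data': yamlBody}, // data key name is an assumption
    });
  }
}
```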

  Scenario: the RemoteConfig writes successfully to a single cluster deployment
    Given a valid RemoteConfig
    And connections exist for each namespace, cluster, and context
    When write is called for a single cluster deployment
    And validation succeeds (stubbed if before #774)
    And the RemoteConfig is converted to YAML
    Then it is written as a K8s ConfigMap named `solo-remote-config`

  Scenario: the RemoteConfig fails to validate
    Given a valid RemoteConfig
    And connections exist for each namespace, cluster, and context
    When write is called for a single cluster deployment
    And validation fails (mocked)
    Then a SoloError is thrown explaining to the user which deployment/cluster/context failed validation, along with the underlying error message
    And the user is informed that they will need to manually bring the clusters of the deployment into sync if there is more than one cluster

  Scenario: the RemoteConfig fails to write to the cluster
    Given a valid RemoteConfig
    And connections exist for each namespace, cluster, and context
    When write is called for a single cluster deployment
    And validation succeeds (stubbed if before #774)
    And the RemoteConfig is converted to YAML
    And the write to the cluster fails
    Then a SoloError is thrown explaining to the user which deployment/cluster/context failed to be written, along with the underlying error message
    And the user is informed that they will need to manually bring the clusters of the deployment into sync if there is more than one cluster

  Scenario: the RemoteConfig writes successfully to one cluster but fails to write to a second cluster
    Given a valid RemoteConfig
    And connections exist for each namespace, cluster, and context
    When write is called for a two-cluster deployment
    And validation succeeds (stubbed if before #774)
    And the RemoteConfig is converted to YAML
    And the RemoteConfig is written to the first cluster successfully
    And the write to the second cluster fails
    Then a SoloError is thrown explaining to the user which deployment/cluster/context failed to be written, along with the underlying error message
    And the user is informed that they will need to manually bring the clusters of the deployment into sync if there is more than one cluster
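The failure scenarios above all end the same way: a SoloError that identifies the deployment/cluster/context that failed and, for multi-cluster deployments, a warning that the clusters must be brought back into sync manually. A minimal sketch of that error reporting, assuming a hypothetical `SoloError` constructor shape and `WriteTarget` type (Solo's real error class may differ):

```ts
// Hypothetical error type; Solo's actual SoloError signature may differ.
class SoloError extends Error {
  constructor(message: string, public readonly cause?: Error) {
    super(message);
  }
}

interface WriteTarget {
  deployment: string;
  cluster: string;
  context: string;
  namespace: string;
}

// Wraps a per-cluster write so a failure is reported with full deployment/cluster/context
// detail, plus a manual-sync warning when the deployment spans more than one cluster.
async function writeToTarget(
  target: WriteTarget,
  totalClusters: number,
  doWrite: (target: WriteTarget) => Promise<void>,
): Promise<void> {
  try {
    await doWrite(target);
  } catch (e) {
    const syncHint =
      totalClusters > 1
        ? ' You will need to manually bring the clusters of the deployment back into sync.'
        : '';
    throw new SoloError(
      `Failed to write RemoteConfig for deployment '${target.deployment}' ` +
        `(cluster '${target.cluster}', context '${target.context}', namespace '${target.namespace}'): ` +
        `${(e as Error).message}.${syncHint}`,
      e as Error,
    );
  }
}
```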

  Scenario: the RemoteConfig already exists and fails on recreate (delete/create, or patch)
    Given a valid RemoteConfig
    And connections exist for each namespace, cluster, and context
    And an older copy of the RemoteConfig exists on the namespace, cluster, and context
    When write is called to update the RemoteConfig
    And validation succeeds (stubbed if before #774)
    And the RemoteConfig is converted to YAML
    And the write to the cluster fails
    Then a SoloError is thrown explaining to the user which deployment/cluster/context failed to be written, along with the underlying error message
    And the user is informed that they will need to manually bring the clusters of the deployment into sync if there is more than one cluster


  Scenario: the RemoteConfig already exists and is recreated (delete/create, or patch) successfully
    Given a valid RemoteConfig
    And connections exist for each namespace, cluster, and context
    And an older copy of the RemoteConfig exists on the namespace, cluster, and context
    When write is called to update the RemoteConfig
    And validation succeeds (stubbed if before #774)
    And the RemoteConfig is converted to YAML
    Then the updated RemoteConfig is successfully written to the cluster
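For the update scenarios, one common pattern is to attempt a create and fall back to a replace when the ConfigMap already exists (HTTP 409). This is only a sketch under the same assumptions as above (pre-1.0 positional API of `@kubernetes/client-node`, hypothetical data key); Solo's actual implementation may use delete/create or a patch instead:

```ts
import * as k8s from '@kubernetes/client-node';

const CONFIG_MAP_NAME = 'solo-remote-config';

// Create the ConfigMap, or replace it when an older copy already exists in the namespace.
async function createOrReplaceConfigMap(
  api: k8s.CoreV1Api,
  namespace: string,
  yamlBody: string,
): Promise<void> {
  const body: k8s.V1ConfigMap = {
    metadata: {name: CONFIG_MAP_NAME, namespace},
    data: {'remote-config-data': yamlBody}, // data key name is an assumption
  };

  try {
    await api.createNamespacedConfigMap(namespace, body);
  } catch (e: any) {
    // 409 Conflict means the ConfigMap already exists; replace it with the updated copy.
    if (e?.response?.statusCode === 409 || e?.statusCode === 409) {
      await api.replaceNamespacedConfigMap(CONFIG_MAP_NAME, namespace, body);
    } else {
      throw e;
    }
  }
}
```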
jeromy-cannon self-assigned this Nov 1, 2024
jeromy-cannon added the P0 (an issue impacting production environments or impacting multiple releases or multiple individuals) and Needs Refinement (the issue needs more refinement and/or design before it can be worked) labels Nov 1, 2024
jeromy-cannon removed their assignment Nov 4, 2024
jeromy-cannon removed the Needs Refinement label Nov 4, 2024
jeromy-cannon (Contributor, Author) commented:

closed in #862
