Reuse configuration + clean up resources upon provision failure #463
Comments
cc: @puicchan

We do have
I think figuring out what to do here is going to require a bunch of deep thinking; there are a lot of things we need to take into account. In general, every resource could end up in a different region (for example, today we take a location when the environment is created). I don't think it's possible to move already provisioned resources non-destructively (or if you can, it will be service specific), which will make moving from one location to another behind the scenes difficult. A related problem is that we need to decide what the behavior of the next provision should be when you change the value of the location parameter.
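To make that concern concrete, here is a minimal Go sketch of the kind of guard the next provision might need when the location parameter has changed. It is purely illustrative, not azd's actual code: `Environment`, `LastProvisionedLocation`, and `checkLocationChange` are hypothetical names invented for this example.

```go
package main

import (
	"errors"
	"fmt"
)

// Environment is a hypothetical stand-in for azd's per-environment state:
// the values captured at environment creation time plus provisioning history.
type Environment struct {
	Name                    string
	Location                string // location requested for the next provision
	LastProvisionedLocation string // empty if nothing has been provisioned yet
}

// ErrLocationChanged signals that provisioned resources already exist in a
// different region and cannot be moved non-destructively.
var ErrLocationChanged = errors.New("location changed after resources were provisioned")

// checkLocationChange decides what the next provision should do when the
// location differs from the one used previously. Moving existing resources is
// generally not possible (or is service specific), so the safe options are to
// keep the old location or to tear everything down and re-create it.
func checkLocationChange(env Environment) error {
	if env.LastProvisionedLocation == "" || env.LastProvisionedLocation == env.Location {
		return nil // first provision, or nothing changed
	}
	return fmt.Errorf("%w: %q -> %q; keep the original location or tear down first (e.g. `azd down`)",
		ErrLocationChanged, env.LastProvisionedLocation, env.Location)
}

func main() {
	env := Environment{Name: "dev", Location: "westus2", LastProvisionedLocation: "eastus"}
	if err := checkLocationChange(env); err != nil {
		fmt.Println("provision blocked:", err)
	}
}
```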
Good points @ellismg, but what you are saying makes me inclined not to touch this ball of complexity with a 10-foot pole. Instead, we could just say that changing the location isn't supported, and then we just need to figure out whether we can establish some conventions for our templates that make @savannahostrowski happy.
When a developer creates an environment and runs `azd provision`/`azd up`, they can run into blocking problems that are out of their control (e.g. the region they selected during environment creation is busy/at capacity). In that case, we require the user to effectively start over: delete/create a new environment, run `azd down`, and try again. This is slow and pretty cumbersome.

Instead, it'd be ideal if we could reuse the pieces of information that the developer is likely to want to keep the same and make it easy for them to retry their desired action.
For example, this could look like the developer simply running `azd up` again.

Under the hood, we'd:

- clean up any resources left behind by the failed provision
- reuse the configuration the developer already supplied (e.g. `azure.yaml` and the environment's existing values)
There also might be other cases where this type of retry experience would be helpful.
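To sketch what that retry experience could mean in code, here is a rough Go illustration of the idea in this issue: keep the configuration the developer already supplied, clean up partial resources after a failure, and only ask for the one value that has to change. None of these types or functions (`Config`, `provisionOnce`, `cleanUpPartialResources`, `provisionWithRetry`) are azd APIs; they are hypothetical stand-ins for this example.

```go
package main

import (
	"errors"
	"fmt"
)

// Config is a hypothetical bundle of the values the developer already
// supplied (environment name, subscription, settings from azure.yaml, ...).
type Config struct {
	EnvironmentName string
	Subscription    string
	Location        string
}

// errRegionAtCapacity stands in for the "region is busy/at capacity" failure
// that currently forces the developer to start over.
var errRegionAtCapacity = errors.New("region at capacity")

// provisionOnce is a placeholder for the real provisioning step; its behavior
// here is illustrative only.
func provisionOnce(cfg Config) error {
	if cfg.Location == "eastus" {
		return errRegionAtCapacity
	}
	return nil
}

// cleanUpPartialResources is a placeholder for removing whatever the failed
// provision left behind.
func cleanUpPartialResources(cfg Config) {
	fmt.Println("cleaning up partial resources in", cfg.Location)
}

// provisionWithRetry keeps everything in cfg that is still valid and only
// asks for a new value (here, the location) when the failure is one the
// developer cannot fix any other way.
func provisionWithRetry(cfg Config, pickNewLocation func() string) error {
	for attempt := 1; attempt <= 3; attempt++ {
		err := provisionOnce(cfg)
		if err == nil {
			return nil
		}
		if !errors.Is(err, errRegionAtCapacity) {
			return err // not a retryable, out-of-the-developer's-control failure
		}
		cleanUpPartialResources(cfg)     // "clean up resources upon provision failure"
		cfg.Location = pickNewLocation() // reuse everything else as-is
	}
	return errors.New("giving up after 3 attempts")
}

func main() {
	cfg := Config{EnvironmentName: "dev", Subscription: "my-sub", Location: "eastus"}
	err := provisionWithRetry(cfg, func() string { return "westus2" })
	fmt.Println("result:", err)
}
```

The point of the sketch is the shape of the loop: classifying the failure decides whether a retry is even sensible, and everything that was valid before the failure is carried into the next attempt unchanged.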