Merge new-deployer branch into master #763
Merged
The existing deployer module is starting to get out of hand. Initially it only had to handle API Gateway and a single Lambda function, but over time we've added support for more AWS resources, and we continue to add more (#516). I don't think the existing deployer design is going to hold up as all these new resources are added. The code is hard to understand, hard to use, and hard to modify. There are also a number of customer-reported bugs in the existing deployer. Digging further into the deployer code, there are a number of inconsistencies as well as unexpected and unintuitive behaviors.

I've been experimenting with a redesign of the deployer to not only fix the existing issues, but also make future changes easier. This initial POC only supports `@app.lambda_function()` and only sketches out the initial creation of resources, but I think this design should be able to handle the remaining gaps. To help with initial testing, I've added a temporary `deploy-new` command so both deployers can live side by side during development. Before this gets merged to master, `deploy-new` will just become `deploy` and the old code will be removed. The `newdeployer` module docstring has a description of how the new system works, and is a good starting point for a technical overview.
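To make the overall shape concrete, here's a minimal sketch of the model → planner → executor pipeline the new design is built around. All names here (`LambdaFunction`, `APICall`, `Planner`, `Executor`) are hypothetical simplifications; the actual classes in the `newdeployer` module differ:

```python
from dataclasses import dataclass

# Hypothetical, simplified resource model and instruction types.
@dataclass
class LambdaFunction:
    resource_name: str
    handler: str

@dataclass
class APICall:
    method_name: str
    params: dict

class Planner:
    """Turn resource models into an ordered list of API call instructions."""
    def plan(self, resources):
        plan = []
        for resource in resources:
            if isinstance(resource, LambdaFunction):
                plan.append(APICall('create_function', {
                    'function_name': resource.resource_name,
                    'handler': resource.handler,
                }))
        return plan

class Executor:
    """Execute each instruction against an AWS client."""
    def __init__(self, client):
        self._client = client

    def execute(self, plan):
        for instruction in plan:
            getattr(self._client, instruction.method_name)(**instruction.params)
```

The key property is that planning is separated from execution: the plan is plain data, which makes it easy to inspect, test, and extend with new instruction types.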
Per review feedback.
Per review feedback, less chance of collision.
As per review feedback
For clarity, as per review feedback
These are the bare minimum changes needed to support updating. A subsequent commit will refactor this into a more generalized pattern.
No functional changes, just minor code cleanup to make the tests easier to follow.
Verified it's not needed. You can just call `put_role_policy` to update a policy.
This commit adds the idea of a "remote state object" that's used by the planner to determine if an update is needed. In some ways you can think of this as a higher level `TypedAWSClient` that's more resource based and bridges the `models` module with an AWS client.
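A minimal sketch of what such a remote state object could look like, assuming a hypothetical `resource_exists` entry point that dispatches on the model's `resource_type` (the real class in chalice's planner code differs):

```python
class RemoteState:
    """Bridge between resource models and an AWS client.

    Hypothetical sketch: a resource-oriented layer above a
    TypedAWSClient-style wrapper, used by the planner to decide
    whether a create or an update is needed.
    """

    def __init__(self, client):
        self._client = client

    def resource_exists(self, resource):
        # Dispatch on the model's resource type, e.g.
        # 'lambda_function' -> _resource_exists_lambda_function.
        handler = getattr(self, '_resource_exists_%s' % resource.resource_type)
        return handler(resource)

    def _resource_exists_lambda_function(self, resource):
        return self._client.lambda_function_exists(resource.function_name)
```

The planner can then ask `remote_state.resource_exists(model)` and emit either create or update instructions without knowing any AWS details itself.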
We're recreating the list each time, but the number of resources is small enough for now that it doesn't matter. May need to circle back on this in the future.
Should be incorporated before merging to master.
As part of this work, general improvements to the executor and deployer have been added that fixed several issues with the update process. You can now add new Lambda functions to your app.py and the deployer will properly record these values in the deployed.json. There are a few high-level changes included:

* The deployed.json was updated to have a consistent format for all resources. All resources now go under a ``resources`` key that's specific to a stage, and each resource must have a ``resource_type``. This allows us to add support for new resource types without having to worry about updating the deployed.json file. To handle the old format I've added a ``schema_version`` key. I still need to add some backwards-compatibility code that can load the v1 format; for now you can't mix the v1 and v2 formats. This will need to be addressed before merging to master.
* The APICall has been split out into 6 new instructions, and the planner and executor now take a list of Instructions. An APICall is now a type of Instruction. This level of granularity was needed to properly track which resources we've accounted for locally so we can compare them to the set of remote resources.
* One of the new instructions, ``JPSearch``, requires a dependency on JMESPath. This was required in order to properly generate the deployed.json file. The ``create_function`` call returns a function ARN, but ``update_function`` returns the entire response of ``update_function_code``, so we need to extract the ``FunctionArn`` key from the response. We'll also need this capability in the future. In practice this isn't a big deal: botocore already requires JMESPath, we're just also calling out an explicit dependency in our setup.py.
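A sketch of the v2 deployed.json shape and a loader that enforces the ``schema_version`` check. Key names here are illustrative rather than the exact schema, and per-stage nesting is omitted for brevity:

```python
import json

# Hypothetical example of a v2 deployed.json document (per-stage
# nesting omitted; field names are illustrative).
V2_DOC = json.dumps({
    "schema_version": "2.0",
    "resources": [
        {"name": "hello", "resource_type": "lambda_function",
         "lambda_arn": "arn:aws:lambda:us-west-2:123456789012:function:hello"}
    ]
})

def load_deployed_resources(raw):
    data = json.loads(raw)
    if data.get("schema_version") != "2.0":
        # v1 files need a backwards-compatibility upgrade step first;
        # mixing v1 and v2 is not supported.
        raise ValueError("Unsupported deployed.json schema version")
    # Every resource carries a resource_type, so new resource kinds can
    # be added without changing the file format.
    return {r["name"]: r for r in data["resources"]}
```

Because each entry is self-describing via ``resource_type``, new resource kinds only add new entries; the loader itself never needs to change.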
Removes chain of if/else statements.
Based on feedback that Sweeper wasn't a clear enough name.
* RecordResource - Value comes from top of stack
* RecordResourceVariable - Value comes from variable
* RecordResourceValue - Value provided as instruction arg
The list order is the order in which they are stored in the executor. This is effectively the creation order. This also makes it possible to know what order to delete resources in the future.
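A sketch of how two of these record instructions might be executed and appended in creation order. The dataclasses and the `ResourceRecorder` helper are hypothetical; the real executor in chalice uses its own instruction and dispatch machinery:

```python
from dataclasses import dataclass

# Hypothetical sketches of two of the record instructions.
@dataclass
class RecordResourceVariable:
    resource_type: str
    resource_name: str
    name: str            # key to record on the resource
    variable_name: str   # value comes from a named variable

@dataclass
class RecordResourceValue:
    resource_type: str
    resource_name: str
    name: str
    value: object        # value provided directly as an instruction arg

class ResourceRecorder:
    def __init__(self, variables):
        self._variables = variables
        # List order is creation order; reversing it later gives a
        # safe deletion order.
        self.resources = []

    def record(self, instruction):
        if isinstance(instruction, RecordResourceVariable):
            value = self._variables[instruction.variable_name]
        else:
            value = instruction.value
        self.resources.append({
            'resource_type': instruction.resource_type,
            'name': instruction.resource_name,
            instruction.name: value,
        })
```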
The basic idea is to rely on the sweeper code to handle deletions. Conceptually this is similar to creating an empty chalice app and running `chalice deploy`. The other alternative was to create a new planner type that specifically handled deletions. The reason for leveraging the sweeper for deletions is twofold:

1. We have to write sweeper logic anyway. This was originally to handle the case where you remove resources from your app.py and rerun `chalice deploy`. This logic has to stay regardless of what's decided for deletion.
2. A deletion planner would unnecessarily have to take into account what's currently specified in your app.py along with the remote state (stuff that's already been deployed). This is unnecessary complication. For deletion we only care about what exists remotely.

Another benefit of this approach is that we handle an edge case where you remove resources from your app.py but don't run `chalice deploy`. In the old deployer code, we based deletion on your app.py, so we'd essentially leak a resource. This is now handled by the new deletion code because it's always based on the deployed.json file.
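The sweeper's core diff can be sketched in a few lines. This is a hypothetical simplification (`plan_deletions` is not chalice's actual function name): anything recorded in deployed.json that the current plan no longer references is marked for deletion, in reverse creation order:

```python
def plan_deletions(deployed_resources, planned_names):
    """Mark unreferenced deployed resources for deletion.

    Hypothetical sketch: deployed_resources is the deployed.json
    resource list in creation order; planned_names is the set of
    resource names the current plan references. Reversing creation
    order gives a safe deletion order (dependents before dependencies).
    """
    return [r for r in reversed(deployed_resources)
            if r['name'] not in planned_names]
```

Deleting an entire app is then just the degenerate case where `planned_names` is empty, which is exactly the "empty chalice app" framing above.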
The only significant change from the existing deployer is that the old deployer treated a scheduled event as a single resource whereas the new deployer treats it as two separate resources: a lambda function and a cloudwatch event. The main benefit of this approach is that you can reuse existing logic for building/planning a lambda function.
Rather than provide dummy args, just don't require additional args in the `__init__` of the `CFNSwaggerGenerator`.
This adds a new RestAPI model as well as the needed code to deploy/update/delete an API Gateway RestAPI. I had to add new opcodes to get this working:

* BuiltinFunction - needed to parse ARNs
* LoadValue - set a value to a var name
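A sketch of what an ARN-parsing builtin could look like. The function name and the returned field names are hypothetical, not chalice's actual implementation; ARNs have the general shape `arn:partition:service:region:account-id:resource`:

```python
def parse_arn(arn):
    """Split an ARN into its named components.

    Hypothetical sketch of a BuiltinFunction for the planner; field
    names are illustrative. The trailing resource part may itself
    contain colons, so we cap the number of splits.
    """
    parts = arn.split(':', 5)
    return {
        'partition': parts[1],
        'service': parts[2],
        'region': parts[3],
        'account_id': parts[4],
    }
```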
This was intended as a way to deal with values we didn't know until a deployment happened. However, to make the create and update code paths consistent, this is all done via the deployed values mechanism as well, using a load/store instruction. In addition to making creates/updates consistent, it also handles the case where you can't use an API call to know the appropriate value to use. An example of this is the rest api id, where the only way to know the correct ID to use is from the deployed.json file.
* nd-the-renaming:
  * Rename DeployedResources2 to DeployedResources
  * Remove old deployed resources
  * Fix integration tests with per-stage deploy files
  * Move newdeployer to deployer module
  * Move validation code over to separate module
  * Remove references to old deployer and tests
  * Remove create_default_deployer
  * Replace deploy()/delete() with new deployer
  * Move ApplicationPolicyHandler to newdeployer
  * Remove string_format for StoreValue
  * Remove TODO about proper error handling
  * Raise RuntimeError on unknown instruction
  * Move TemplatedSwaggerGenerator to swagger.py
  * Remove swagger TODO
Previously, we returned Optional[DeployedResources], which made usage awkward because you always had to check if deployed_resources was not None. With this change, if there has been no deployment, we just create an empty DeployedResources. Consuming code now needs to catch a `ValueError` if they try to access a resource that was not deployed.
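A sketch of the non-Optional pattern described above, with hypothetical simplified names (the real `DeployedResources` class differs):

```python
class DeployedResources:
    """Always-present deployed resources.

    Hypothetical sketch: instead of returning
    Optional[DeployedResources] and forcing None checks everywhere,
    callers always get an instance, and accessing a resource that was
    never deployed raises ValueError.
    """

    def __init__(self, deployed_values):
        self._resources = {
            r['name']: r for r in deployed_values.get('resources', [])}

    @classmethod
    def empty(cls):
        # Returned when there has been no deployment yet.
        return cls({'resources': []})

    def resource_values(self, name):
        try:
            return self._resources[name]
        except KeyError:
            raise ValueError('Resource does not exist: %s' % name)
```

This trades a type-level check (is it None?) for a value-level one (catch `ValueError`), which keeps the common path free of branching.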
When upgrading to the new deployer and deploying to a new stage, we now return an empty DeployedResources. This was previously raising a KeyError because we assumed the stage existed in the old deployed.json.

Repro steps:

    chalice deploy               # Old deployer
    chalice deploy --stage prod  # New deployer
This gives parity with the old deployer's error messages when we try to load a policy file that does not exist.

Repro:

    chalice deploy --no-autogen-policy

There's room for improvement in the error messages, but for now this puts back the old deployer error message.
* nd-no-optional:
  * Return empty deployed values when stage does not exist
  * Create empty deployed resources when no deployment exists
* nd-fix-policy-loading:
  * Raise RuntimeError when unable to load IAM policy file
* nd-fix-pre10-upgrades:
  * Use correct chalice version
  * Handle upgrading 0.10.0 to the new deployer format
This brings back the original exceptions (and their corresponding helpful error messages) from the old deployer. You'll now get Chalice-specific error messages when you exceed the max deployment size or hit Lambda timeouts.
* nd-more-cleanups:
  * Fix lint issues in tests
  * Raise ChaliceDeploymentError's on failed deploys
  * Rename create_new_default_deployer to create_default_deployer
I think we might want to generalize this in the future instead of having a hard-coded call to `validate_configuration()`, but for now this gives us parity with exactly what the old deployer does.
* nd-add-validation:
  * Add back config validation during deployment
* nd-changelog:
  * Add note about downgrades
  * Add changelog/release notes for 1.2.0
kyleknap approved these changes on Mar 23, 2018:
🚢
stealthycoin approved these changes on Mar 26, 2018.
jamesls added a commit that referenced this pull request on Mar 26, 2018:
This branch contains the rewrite of the Chalice deployer. This will allow us to easily add support for AWS resources along with more advanced deployment features that were difficult in the old deployer. Tracking issue: #604. PR: #763.

* new-deployer:
  * Add note about downgrades
  * Add changelog/release notes for 1.2.0
  * Add back config validation during deployment
  * Fix lint issues in tests
  * Raise ChaliceDeploymentError's on failed deploys
  * Rename create_new_default_deployer to create_default_deployer
  * Use correct chalice version
  * Handle upgrading 0.10.0 to the new deployer format
  * Raise RuntimeError when unable to load IAM policy file
  * Return empty deployed values when stage does not exist
  * Create empty deployed resources when no deployment exists
  * Rename DeployedResources2 to DeployedResources
  * Remove old deployed resources
  * Fix integration tests with per-stage deploy files
  * Move newdeployer to deployer module
  * Move validation code over to separate module
  * Remove references to old deployer and tests
  * Remove create_default_deployer
  * Replace deploy()/delete() with new deployer
  * Move ApplicationPolicyHandler to newdeployer
  * Remove string_format for StoreValue
  * Remove TODO about proper error handling
  * Raise RuntimeError on unknown instruction
  * Move TemplatedSwaggerGenerator to swagger.py
  * Remove swagger TODO
  * Add deployment reporter
  * Add rest_api_url to deployed values
  * Add backend key to deployed.json
  * Update deployed json files to not include stage
  * Move recording deployed values into deployer
  * Upgrade old deploy format on new deploy
  * Raise exception when deployed resource does not exist
  * Use app-stage for api handler function name
  * Extract out add results to plan to separate method
  * Fix py3 test failures
  * Add newlines to delete messages
  * Add back print messages during deployment
  * Update planner to use Plan object
  * Don't create session twice
  * Fix integ tests to use new deployer
  * Add support for built in authorizers in new deployer
  * Remove stack from executor
  * Incorporate review feedback so far
  * Incorporate review feedback
  * Add role_name to deployed values
  * Refactor remote state to use dynamic dispatch
  * Remove role_arn from LambdaFunction model
  * Remove resource from APICall, not needed
  * Add support for deploying rest APIs
  * Don't require unnecessary arguments
  * Add support for scheduled events in newdeployer
  * Incorporate review feedback
  * Add delete-new command
  * Change deployed.json from dict to list of resources
  * Split record resource instructions into 3
  * Rename sweeper to UnreferencedResourcePlanner
  * Restructure if statement
  * Use method dispatch for executing instructions
  * Add support for deleting unreferenced resources
  * Add TODOs for all pending review feedback
  * Add back check for duplicate items in ordered list
  * Refactor update code to use remote state object
  * Remove delete_role_policy call
  * Refactor planner unit tests
  * Extract planner out to separate module
  * Move APICall to models
  * Initial commit of updating resources
  * Rename resource builder to application builder
  * Add missing type definition
  * Rename DeployPhase to Placeholder
  * Validate all DeployPhase vars have been handled
  * Add TODO about deployer errors
  * Rename output to target_variable for clarity
  * Use more descriptive _lambda_arn suffix
  * Rename class to ApplicationGraphBuilder
  * Initial POC of new deployer
This PR merges the "new-deployer" feature branch back into master. At this point I believe we have parity with the existing deployer along with all the new functionality we've wanted to add. More info is available in the tracking issue #604.
Closes #604
Closes #162 #640