Support AWS-specific Environment Variables #501
Comments
Actually, I think switching to the new system may be advantageous. Basically the same package could be run in test and then deployed to prod: build once, then deploy. That said, you get the same effect by not using the environments in Zappa and setting them in AWS with an API call.
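(For illustration only, not from this thread: a rough sketch of what "setting them in AWS with an API call" could look like with boto3. The function name and variables are placeholders.)

    import boto3

    # Sketch: push environment variables onto an already-deployed Lambda,
    # so the same build artifact can be pointed at dev or prod resources.
    lambda_client = boto3.client('lambda')
    lambda_client.update_function_configuration(
        FunctionName='my-app-dev',  # placeholder: whatever your deployed function is named
        Environment={'Variables': {'DB_HOST': 'dev.db.internal'}},
    )

Note that update_function_configuration replaces the function's entire Environment block rather than merging new variables into it.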
I actually see that as a disadvantage. With Zappa-local envs, you can do something like
I don't think you can do that with AWS vars, can you? Similarly, we also want to avoid any vendor-specific lock-in, so I think this should be prefixed as
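(The inline example above didn't survive in this copy of the thread; as a hedged sketch, a Zappa-local setup of the kind being described puts different values under each stage in zappa_settings.json. Stage names and values here are placeholders.)

    {
        "dev": {
            "app_function": "my.app",
            "environment_variables": { "DB_HOST": "dev.db.example.com" }
        },
        "production": {
            "app_function": "my.app",
            "environment_variables": { "DB_HOST": "prod.db.example.com" }
        }
    }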
I might be missing something because I just started testing Zappa, but if Zappa creates a Lambda per env, then you can also have separate AWS Lambda env variables; they are not shared between Lambdas.
You can also do that in Zappa with the
So I like how the variables are defined in the settings file. However, I don't know the details of how they make it into dev and prod. I thought you have to use Zappa to deploy to both and that each package would be different. In a corporate environment, it would be preferable if the artifact built by Jenkins, using Zappa, was sent to dev, and the same artifact, unmodified, was sent to production. You could also envision a case where you had it in prod in one account and later moved it. This is how people use Docker now: you create the container and the same container is deployed to dev and prod, but it is fed different environment variables to target dev vs. production resources.
Re: adding vendor lock-in, if we are using environment variables, the same can be defined in the container/instance where we could run this as a Flask app. On Google, I think you could target Google App Engine as a Flask app. Not sure how Microsoft's Function-as-a-Service works.
Yeah, that's a fair point. You can achieve that with S3-remote environment variables now: https://github.com/Miserlou/Zappa#remote-environment-variables That being said, Zappa isn't Docker and we don't have the same design goals. But this seems like a good feature to have now that it's offered; somebody just needs to figure out the correct boto spell to make it happen.
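(For reference, a sketch of the S3-remote approach linked above, assuming the remote_env setting that takes an s3:// path; older Zappa versions split this into separate bucket and file settings. Bucket and key names are placeholders.)

    "production": {
        "app_function": "my.app",
        "remote_env": "s3://my-config-bucket/prod-env.json"
    }

where prod-env.json is a flat JSON object of names and values that Zappa loads into os.environ at runtime:

    { "DB_HOST": "prod.db.example.com", "SECRET_KEY": "..." }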
To continue this discussion, I believe having AWS-native env vars would be a good feature (it complies with the twelve-factor app methodology). If nobody is working on this at the moment, I'd like to give it a try.
Give it a go, Nam! :D
Just spent a day exploring Zappa, and was able to put in place a workaround for secure AWS-style environment variables that I like. I encrypted my secret token with the AWS CLI and put the base64 ciphertext in the zappa_settings.json file as an environment variable value. Then within the main app module I dropped in code almost verbatim from the AWS Lambda console environment variable encryption helper sample. Your Zappa Lambda execution role has to be a principal with permission to decrypt using your KMS key.

Start with a CLI invocation to encrypt your secret. This assumes your CLI profile has permission to encrypt with the KMS key you name in the command:

    aws kms encrypt --key-id alias/zappa-lambda-key --plaintext "A HUGE SECRET" --query CiphertextBlob --output text

Some of the settings file:

    "dev": {
        "app_function": "my.app",
        "s3_bucket": "xyz",
        "environment_variables": {
            "secretkey": "AQECAHjYKJqoct6HkC0Jx9GL9L5pESg5mqOJXDfJGGGb0sEq+AAAAIEwfwYJKoZIhvcNAQcGoHIwcAIBADBrBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDCLke7pLDlNSfU9VRgIBEIA+CanwZCGncSy/3DxQOniW1snzf9gRer9VuK+N/QL+/ABGfvAGz6qTULy/Y+NKCrxhJ9ruJDtNErZIkit8lrk="
        }
    }

And the my.app code:

    from base64 import b64decode
    import os

    import boto3
    from flask import Flask

    # Get environment variables and decrypt them outside any function context,
    # so the KMS call happens once per container rather than once per request.
    env_secret_key_enc = os.environ['secretkey']
    # KMS returns the plaintext as bytes; decode it for use as a string.
    env_secret_key = boto3.client('kms').decrypt(
        CiphertextBlob=b64decode(env_secret_key_enc))['Plaintext'].decode('utf-8')

    app = Flask(__name__)

    @app.route('/')
    def hello_world():
        print('the secret thing is ' + env_secret_key)
        return 'Hello World!'

    if __name__ == '__main__':
        app.run()

Hope that helps. As an off-topic aside, it took me a while and some zappa module spelunking to figure out that I had to create a file named
The Lambda-provided environment variables are directly available without any changes to Zappa (they're just environment variables in the execution environment). Note also that you can do the encryption in the AWS Console directly by using the "encryption helper" option to handle KMS encryption.
Putting those two facts together, I created a KMS key and ensured the ZappaLambdaExecution role had permission to decrypt with it. Then I created an environment variable in the deployed Lambda, clicked the "encrypt" button and saved. The same Python code provided by @beerobber then worked to decrypt the secret (without modifying the settings file or using the AWS CLI).
What does it really mean to have support for Lambda variables in Zappa, then? Perhaps it's just to ensure that the variables and values from the settings file are created as Lambda variables and hence visible in the Lambda web console (and API)?
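(To make the decrypt permission mentioned above concrete: a sketch of the kind of statement you'd attach to the Zappa execution role's policy. The key ARN is a placeholder.)

    {
        "Effect": "Allow",
        "Action": "kms:Decrypt",
        "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-key-id"
    }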
I couldn't access the environment variables even though I followed the instructions; I don't know why. So I modified the source and added some code to support AWS-specific environment variables, and now I can access them. I've created a pull request at #600
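(For later readers, a hedged sketch of how this ended up being exposed in the settings file: current Zappa documents an aws_environment_variables key, if I recall the name correctly, which becomes real Lambda environment variables on deploy, as opposed to the Zappa-local environment_variables that Zappa's handler sets in os.environ at runtime. Values below are placeholders.)

    "dev": {
        "app_function": "my.app",
        "aws_environment_variables": { "DB_HOST": "dev.db.example.com" }
    }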
Fixed by merged PR. |
We already have "local" and (S3) "remote" envvars, so this may be unnecessary, but we could also support AWS environment variables for completeness.