No Enchanced Context in AwsProxyRequest (using Spark) #89
Are you using a POJO handler or a stream-based one with Lambda? The serializer built into the Lambda runtime does not support annotations. The model classes use annotations to read arbitrary fields from the context. My recommendation would be to switch to a stream-based Lambda handler.
I'm using the lambda (the exact lambda) from the Spark sample project. Here's the signature:
This sounds like a framework bug to me. It looks like that's how I'm supposed to use the framework with Spark.
I think that's the issue then. I will add a stream sample in the next release of the framework. Unfortunately we cannot rely on Lambda's serialization to read those values, and because the keys for the context values are arbitrary I cannot define a model for them - I need to rely on annotation-driven deserialization. The only way to make this work is to change the handler class to implement the RequestStreamHandler interface. I have brought this up with the Lambda team; they may look into it, but it's low priority at the moment.
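The arbitrary-key problem described above can be illustrated without any AWS dependencies. A class with only fixed, declared fields has nowhere to put keys it does not know about, while a class with a catch-all map (the pattern Jackson enables via @JsonAnySetter/@JsonAnyGetter) can keep them. This is a minimal plain-Java sketch; AuthorizerContextSketch and its members are hypothetical names for illustration, not the library's actual classes:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one declared field plus a catch-all map for
// arbitrary keys, mimicking what Jackson's any-setter/any-getter pair
// makes possible. A serializer that ignores annotations would only
// ever populate the declared field and silently drop everything else.
public class AuthorizerContextSketch {
    private String principalId;                              // known, declared field
    private final Map<String, String> contextValues = new HashMap<>();

    public void setPrincipalId(String principalId) { this.principalId = principalId; }
    public String getPrincipalId() { return principalId; }

    // Catch-all: any key the deserializer does not recognize lands here.
    public void setContextValue(String key, String value) { contextValues.put(key, value); }
    public String getContextValue(String key) { return contextValues.get(key); }

    public static void main(String[] args) {
        AuthorizerContextSketch ctx = new AuthorizerContextSketch();
        ctx.setPrincipalId("user-123");
        ctx.setContextValue("picture", "https://example.com/avatar.png");
        System.out.println(ctx.getContextValue("picture"));
    }
}
```

The point of the sketch is only the shape: without the catch-all, custom keys returned by an authorizer have no declared setter, so a non-annotation-aware deserializer drops them.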
Thanks. I'll take a look at the Spring example. Someone should make a note of this, though, in the "Enhanced Context" documentation: and in the custom authorizer documentation: https://docs.aws.amazon.com/apigateway/latest/developerguide/use-custom-authorizer.html I did a lot of debugging trying to figure out where my values were (because they were showing up in the Python lambda). Thanks, anyway.
Also, while you're talking to the Lambda team, can you ask them to document how you get the authorizer out of the Spark Request object in the request handlers? This is how:

((ApiGatewayRequestContext) req.attribute("com.amazonaws.apigateway.request.context")).getAuthorizer().getPrincipalId();

Definitely need to write that down somewhere.
Here's one possible solution. Since the framework jar isn't sealed, in my own project I can replace ApiGatewayAuthorizerContext with an identical version in the same package but with my custom attributes added to the class. For example, if I wanted to add the JWT info to the enhanced context in the authorizer lambda (because I've already extracted it there and don't want to do it again in the Spark Java lambda), I could create the com.amazonaws.serverless.proxy.internal.model package in my own project and add:

package com.amazonaws.serverless.proxy.internal.model;

import com.fasterxml.jackson.annotation.JsonAnyGetter;

@JsonIgnoreProperties(
}

I've tested this and it works. The main thing I don't like about it is that I might need to update this file if the "real" version of ApiGatewayAuthorizerContext changes in the next framework update.
The context properties are saved as request attributes. That is a possible solution, but the cleanest (and recommended) way is to switch to the stream handler. What's preventing you from using the stream handler? Is there something else the library should do there?
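"Saved as request attributes" means the lookup is an untyped get-plus-cast against the servlet request's attribute map. Here is a self-contained sketch of that pattern, with a plain Map standing in for the servlet request's attributes and RequestContext as a hypothetical stand-in for the framework's context type; only the string key is taken from this thread:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for servlet request attributes: the framework
// stores a typed context object under a well-known string key, and
// application code retrieves it with a cast.
public class AttributeLookupSketch {
    public static final String CONTEXT_KEY = "com.amazonaws.apigateway.request.context";

    // Hypothetical stand-in for the framework's ApiGatewayRequestContext.
    public static class RequestContext {
        private final String principalId;
        public RequestContext(String principalId) { this.principalId = principalId; }
        public String getPrincipalId() { return principalId; }
    }

    public static void main(String[] args) {
        // Plays the role of the underlying HttpServletRequest's attribute map.
        Map<String, Object> attributes = new HashMap<>();
        attributes.put(CONTEXT_KEY, new RequestContext("user-123"));

        // Same shape as the cast used earlier in this thread:
        // (ApiGatewayRequestContext) req.attribute("com.amazonaws.apigateway.request.context")
        RequestContext ctx = (RequestContext) attributes.get(CONTEXT_KEY);
        System.out.println(ctx.getPrincipalId());
    }
}
```

The cast is unavoidable because attribute maps are Map<String, Object>; the framework guarantees the type, not the compiler.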
It looks like the enhanced context properties are coming in as custom JSON properties on the authorizer object. I wrote this to see exactly what the stream contained:

package com.amazonaws.serverless.sample.spark;

import java.io.IOException;

import com.amazonaws.serverless.proxy.internal.model.AwsProxyResponse;
import com.amazonaws.services.lambda.runtime.Context;

public class LambdaRawHandler implements RequestStreamHandler {
}

and buried in the JSON returned from the lambda, I see this:
Those attributes were added by me to the context by my authorizer lambda:
I know I can get this by parsing the JWT token again in the Spark lambda. I just didn't want to because I already had the information. I could have added anything else I wanted rather than simply the JWT information. My point is, the information is in the Authorizer object, not the body of my request. I'm not sure how I'd use the InputStream to capture that information other than maybe dumping it all into a raw JSON object, which is inconvenient to access. Perhaps the RequestHandler (public class LambdaHandler
So long as you use the stream handler like the example, you should be able to access all those parameters. First, in your Spark method, get the raw servlet request. Once you have that, read the API Gateway context object from its request attributes. The authorizer context object contains all of the values, including the custom ones you returned to API Gateway - this is because the object deserializes arbitrary keys into a map rather than into fixed fields. Writing this code off the top of my head here, don't expect it to compile or be 100% correct:

get("/pets", (req, res) -> {
ApiGatewayRequestContext ctx = (ApiGatewayRequestContext)req.raw().getAttribute(API_GATEWAY_CONTEXT_PROPERTY);
ApiGatewayAuthorizerContext authCtx = ctx.getAuthorizer();
String picture = authCtx.getContextValue("picture");
});
Thanks. I just tried that. It actually does compile, but it looks like those values aren't there. I get null for all the custom values. This is similar to the first thing I tried, like this:

((ApiGatewayRequestContext) req.attribute("com.amazonaws.apigateway.request.context")).getAuthorizer().getContextValue("picture")

That also returns null.
Oh, but yeah, I'm still using the POJO example, not the stream handler. I guess I need to work out the stream handler equivalent of this:

@Override
public AwsProxyResponse handleRequest(AwsProxyRequest awsProxyRequest, Context context) {
    if (!isInitialized) {
        isInitialized = true;
        try {
            handler = SparkLambdaContainerHandler.getAwsProxyHandler();
            defineResources();
            Spark.awaitInitialization();
        } catch (ContainerInitializationException e) {
            log.error("Cannot initialize Spark application", e);
            return null;
        }
    }
    return handler.proxy(awsProxyRequest, context);
}

Mainly, I want to regard that as boilerplate code and just concentrate on the Spark handlers, but I still want the enhanced context info within the Spark handler request.

You can literally take the Spring sample of the stream handler and just replace the container handler with the Spark one.
That did it. I removed my replacement version of ApiGatewayAuthorizerContext. Then I changed the code to be:

public class LambdaHandler implements RequestStreamHandler { // rather than RequestHandler<AwsProxyRequest, AwsProxyResponse>

and the lambda function to be:
Now I am indeed getting the enhanced context using:

ApiGatewayRequestContext ctx =

Thanks for your help. This is great. Amazon really does have the best support in the industry. Could you put this in the sample project or document it somewhere? Thanks again.
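For documentation purposes, the working setup described in this thread can be sketched end to end. This is assembled from the fragments above and has not been compiled against the library here: the package names follow the framework version discussed in this thread (com.amazonaws.serverless.proxy.internal.model, per the earlier comments), and defineResources/isInitialized mirror the POJO sample quoted above. Treat it as a sketch, not the canonical sample:

```java
package com.amazonaws.serverless.sample.spark;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import com.amazonaws.serverless.proxy.internal.model.AwsProxyRequest;
import com.amazonaws.serverless.proxy.internal.model.AwsProxyResponse;
import com.amazonaws.serverless.proxy.spark.SparkLambdaContainerHandler;
import com.amazonaws.serverless.exceptions.ContainerInitializationException;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

import spark.Spark;
import static spark.Spark.get;

// Stream-based handler: Lambda hands over the raw InputStream, so the
// framework (not the Lambda runtime's serializer) deserializes the event,
// and the annotation-driven authorizer context survives the round trip.
public class LambdaHandler implements RequestStreamHandler {
    private static SparkLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context)
            throws IOException {
        if (handler == null) {
            try {
                handler = SparkLambdaContainerHandler.getAwsProxyHandler();
                defineResources();
                Spark.awaitInitialization();
            } catch (ContainerInitializationException e) {
                throw new IOException("Cannot initialize Spark application", e);
            }
        }
        // proxyStream replaces the POJO sample's handler.proxy(request, context)
        handler.proxyStream(input, output, context);
    }

    private static void defineResources() {
        get("/pets", (req, res) -> {
            // The enhanced context is now available as a request attribute,
            // using the cast shown earlier in this thread.
            return "OK";
        });
    }
}
```

The Lambda function's handler setting would then point at this class's handleRequest instead of the POJO handler's.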
I'm going to keep this open to track the updated samples. Action items:
Closing this since I pushed all pending updates in the last commit.
I'm passing some enhanced context from my custom authorizer lambda. I can see the context when the request is received by a Python lambda. However, it's missing from the AwsProxyRequest in my Java lambda running Spark: awsProxyRequest.getRequestContext().getAuthorizer().getContextValue("customkey") is null. Also, awsProxyRequest.getRequestContext().getAuthorizer().getClaims() is null.