POC for S3 API experiments (Not to be checked in) #2664
Conversation
This handles:
1. PUT, GET, DELETE, HEAD, LIST
2. Multipart uploads
3. Changes in frontend.properties to start the frontend on the SSL port and talk to MySQL
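Not part of the PR, but for context, a minimal sketch of how the POC could be exercised from a standard S3 client. The endpoint URL and port, the dummy credentials, and treating the named-blob account "named-blob-sandbox" as the bucket are all assumptions; only the AWS SDK v1 calls themselves are real APIs.

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class S3PocSmokeTest {
  public static void main(String[] args) {
    // Assumption: the Ambry frontend's SSL port is 1443 and it accepts S3-style requests.
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(
            new AwsClientBuilder.EndpointConfiguration("https://localhost:1443", "us-east-1"))
        .withCredentials(new AWSStaticCredentialsProvider(
            new BasicAWSCredentials("dummy-key", "dummy-secret")))
        // Path-style access keeps the bucket name in the request path rather than the Host header.
        .withPathStyleAccessEnabled(true)
        .build();

    String bucket = "named-blob-sandbox";        // maps to the named-blob account in this POC
    String key = "checkpoints/chk-1/_metadata";  // illustrative key

    s3.putObject(bucket, key, "hello");                                               // PUT
    System.out.println(s3.getObjectAsString(bucket, key));                            // GET
    for (S3ObjectSummary summary : s3.listObjectsV2(bucket).getObjectSummaries()) {   // LIST
      System.out.println(summary.getKey());
    }
    s3.deleteObject(bucket, key);                                                     // DELETE
  }
}
```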
Updated the branch from f8a59fc to 399af24.
I'm thinking of adding an enum of AmazonS3_Action. For example, the following are related to multipart upload.
We could classify the request at an early stage, somewhere around preProcessAndRouteRequest, and also generate the related context object for each action type. For example, class S3CreateMultipleUploadContext.
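A rough sketch of what this could look like (the enum values and the context fields below are illustrative assumptions, not code from this PR):

```java
// Illustrative sketch only: an action enum plus a per-action context object
// built once during routing. The values and fields shown are assumptions.
public enum AmazonS3_Action {
  PUT_OBJECT,
  GET_OBJECT,
  DELETE_OBJECT,
  HEAD_OBJECT,
  LIST_OBJECTS,
  CREATE_MULTIPART_UPLOAD,
  UPLOAD_PART,
  COMPLETE_MULTIPART_UPLOAD
}

// Context generated for a CreateMultipartUpload request during routing.
class S3CreateMultipleUploadContext {
  private final String accountName;   // the "bucket" in S3 terms
  private final String objectKey;

  S3CreateMultipleUploadContext(String accountName, String objectKey) {
    this.accountName = accountName;
    this.objectKey = objectKey;
  }

  String getAccountName() { return accountName; }
  String getObjectKey() { return objectKey; }
}
```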
// S3 can issue "HEAD /s3/named-blob-sandbox" on the bucket-name.
// The converted named blob would be /named/named-blob-sandbox/container-a. So, don't check for number of expected
// segments for S3 as of now
/*
What's the case where we hit the exception?
After the naming translation from S3 to "named":
- For GET, we have four parts: "named", account, container, and key.
- For LIST, we have three parts: "named", account, and container. At least for the Flink case, that still looks true. If we support listing under an arbitrary key, that's another case.
One small issue I saw: for listing, the path is "/named/named-blob-sandbox/container-a/". With the trailing "/", we split it into four parts, but if we trim the trailing "/", it works fine.
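Something like this for the trim, as a sketch (the helper name and the exact split behavior are assumptions about how the path gets segmented, not code from this PR):

```java
// Hypothetical helper: normalize the translated named-blob path before splitting,
// so "/named/named-blob-sandbox/container-a/" (LIST) and
// "/named/named-blob-sandbox/container-a/some/key" (GET) both segment cleanly.
static String[] splitNamedBlobPath(String path) {
  String trimmed = path;
  // Drop a single trailing "/" so LIST paths don't produce an extra empty segment.
  if (trimmed.endsWith("/")) {
    trimmed = trimmed.substring(0, trimmed.length() - 1);
  }
  // Drop the leading "/" so the first segment isn't an empty string.
  if (trimmed.startsWith("/")) {
    trimmed = trimmed.substring(1);
  }
  // Limit the split to four parts so everything after the container stays together as the key.
  return trimmed.split("/", 4);
}
```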
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;

public class InitiateMultipartUploadResult {
Probably we can follow S3 naming:
CreateMultipartUploadResponse
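For illustration, a minimal sketch of the renamed class using the Jackson XML annotation already imported above. The field set follows the Bucket/Key/UploadId elements of S3's InitiateMultipartUploadResult XML; everything else here is an assumption, not the PR's actual class body.

```java
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlRootElement;

// Sketch: the class name follows the S3 API operation (CreateMultipartUpload),
// while the XML root element keeps the name S3 actually returns on the wire.
@JacksonXmlRootElement(localName = "InitiateMultipartUploadResult")
public class CreateMultipartUploadResponse {
  @JacksonXmlProperty(localName = "Bucket")
  private String bucket;

  @JacksonXmlProperty(localName = "Key")
  private String key;

  @JacksonXmlProperty(localName = "UploadId")
  private String uploadId;

  public String getBucket() { return bucket; }
  public String getKey() { return key; }
  public String getUploadId() { return uploadId; }
}
```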
// We convert it to named blob request in the form "/named/named-blob-sandbox/container-a/checkpoints/87833badf879a3fc7bf151adfe928eac/chk-1/_metadata"
// i.e. we hardcode container name to 'container-a'

logger.info("S3 API | Input path: {}", path);
As in the other comment, we can probably have one function that parses all S3 commands and generates a context for each of them.
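A hedged sketch of that single entry point, building on the hypothetical AmazonS3_Action enum sketched earlier. The classification rules are naive string checks for illustration, not the PR's logic; only the commented request shapes (POST ?uploads, PUT ?partNumber, POST ?uploadId) reflect the real S3 API.

```java
// Illustrative: classify the incoming request once, early in routing, then hand
// the resulting action (and its per-action context) to the matching handler.
static AmazonS3_Action classifyS3Request(String httpMethod, String path, String query) {
  boolean hasQuery = query != null;
  if ("POST".equals(httpMethod) && hasQuery && query.contains("uploads")) {
    return AmazonS3_Action.CREATE_MULTIPART_UPLOAD;   // POST /{key}?uploads
  }
  if ("PUT".equals(httpMethod) && hasQuery && query.contains("partNumber")) {
    return AmazonS3_Action.UPLOAD_PART;               // PUT /{key}?partNumber=N&uploadId=X
  }
  if ("POST".equals(httpMethod) && hasQuery && query.contains("uploadId")) {
    return AmazonS3_Action.COMPLETE_MULTIPART_UPLOAD; // POST /{key}?uploadId=X
  }
  if ("GET".equals(httpMethod) && path.endsWith("/")) {
    return AmazonS3_Action.LIST_OBJECTS;              // listing a bucket or prefix
  }
  switch (httpMethod) {
    case "PUT":    return AmazonS3_Action.PUT_OBJECT;
    case "DELETE": return AmazonS3_Action.DELETE_OBJECT;
    case "HEAD":   return AmazonS3_Action.HEAD_OBJECT;
    default:       return AmazonS3_Action.GET_OBJECT;
  }
}
```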
@@ -541,7 +556,7 @@ public static BlobProperties buildBlobProperties(Map<String, Object> args) throw
   * @throws RestServiceException if required arguments aren't present or if they aren't in the format expected.
   */
  public static long getTtlFromRequestHeader(Map<String, Object> args) throws RestServiceException {
-    long ttl = Utils.Infinite_Time;
+    long ttl = 86400;
Did we ever discuss lifecycle management with the Flink and TiKV teams?
We discussed this before: for S3, do we need a dedicated handler for it?