For GCS if size cannot be found use tempfile. #711
Conversation
@harshavardhana let's say the file is 6GB: are we returning success after uploading 5GB? If so, that is not the right thing to do.
Oh yes, I added code for that to error out.
api-put-object.go (Outdated)

```diff
@@ -170,6 +170,33 @@ func (c Client) putObjectNoChecksum(bucketName, objectName string, reader io.Rea
 		return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName)
 	}
 
+	if size <= -1 {
```
We need not do this check or use io.LimitReader() to limit the file; instead we can just wait for the server to return an error. I think GCS allows much bigger uploads in a single PUT (> 5GB), but I am not sure; I was not able to find documentation. We can check by uploading a large object to GCS and seeing whether it allows > 5GB in a single PUT.
Not sure, need to check.
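For context, here is a minimal standalone sketch of the io.LimitReader behavior being discussed (illustrative sizes only, not the minio-go code). Note that it silently truncates the stream at the limit instead of returning an error, which is exactly the partial-upload concern raised at the top of this thread:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	// Tiny stand-in for the 5GiB single-PUT limit discussed above.
	const limit = 5

	src := strings.NewReader("0123456789") // pretend this is a 10-byte file

	// io.LimitReader yields at most `limit` bytes and then reports io.EOF,
	// silently dropping the remainder of the stream.
	data, _ := io.ReadAll(io.LimitReader(src, limit))

	fmt.Printf("read %d of 10 bytes: %q\n", len(data), data)
	// Output: read 5 of 10 bytes: "01234"
}
```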
api-put-object.go (Outdated)

```go
if err != nil {
	return 0, err
}
if size > maxSinglePutObjectSize {
```
Note that size will never be > maxSinglePutObjectSize, as io.LimitReader will limit it.
Yes, this is a wrong check. In fact, we can set the size to the max value and let the underlying reader be sent to the server.
This change allows the GCS server to reject the upload if needed. For size == -1 we use `Transfer-Encoding: chunked`; for size >= 0 we use the regular upload operation. Related to restic/restic#996
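For readers following along, a hedged sketch of that size-based dispatch using only the standard library (the function and package names here are illustrative, not the actual minio-go code). With net/http, a request whose ContentLength is negative is sent with `Transfer-Encoding: chunked`, while a non-negative ContentLength produces a regular fixed-length upload:

```go
// Package gcsupload is an illustrative name for this sketch.
package gcsupload

import (
	"fmt"
	"io"
	"net/http"
)

// putObject is a hypothetical helper. For size >= 0 it sends a regular
// fixed-length PUT; for size == -1 the unknown length makes net/http
// fall back to Transfer-Encoding: chunked on the wire.
func putObject(url string, reader io.Reader, size int64) error {
	req, err := http.NewRequest(http.MethodPut, url, reader)
	if err != nil {
		return err
	}
	req.ContentLength = size // -1 means "length unknown", triggering chunked encoding

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("upload failed: %s", resp.Status)
	}
	return nil
}
```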
@krishnasrinivas - updated; this should address all the concerns, and we can now do native uploads without restricting the GCS endpoint. @fd0 - let me know how your test goes. Please test: this change should fix the GCS/prune issue and the integration tests as well.
Works, thank you!
This is added as a fallback option: instead of returning
an error, we can just add a limit reader up to 5GB and
attempt to upload the file instead.

Related to restic/restic#996
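As a rough illustration of the fallback this commit describes (the helper name and exact wiring here are hypothetical; the real minio-go internals differ): when the size is unknown, assume the 5GiB single-PUT maximum and cap the reader at that limit before attempting the upload, letting the server reject the object if it turns out to be larger.

```go
package gcsupload // illustrative package name, not the actual minio-go code

import "io"

// maxSinglePutObjectSize mirrors the 5GiB single-PUT limit discussed above.
const maxSinglePutObjectSize = 5 * 1024 * 1024 * 1024

// withSizeFallback is a hypothetical helper: when size is unknown (< 0),
// assume the maximum and bound the reader so that at most
// maxSinglePutObjectSize bytes are ever sent in the single PUT.
func withSizeFallback(reader io.Reader, size int64) (io.Reader, int64) {
	if size < 0 {
		return io.LimitReader(reader, maxSinglePutObjectSize), maxSinglePutObjectSize
	}
	return reader, size
}
```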