Refactor Bucket Controller and Add Bucket Provider Interface #455
Conversation
Force-pushed from 55d3a48 to 58188dc
Force-pushed from 5fde31b to 4105490
Wooo, thanks for the follow-up! Using an interface to abstract from the two clients is good, and I'm pleased to see thorough tests.
I think you can go further and have all the download-if-matches code in one place, relying on the clients only to do the listing and downloading. More details on that in the comments.
There's one thing that will need fixing either way: if a file download fails, then the whole operation should fail. I've outlined how to achieve that in the comments.
I'm suggesting a fairly big change here, I realise. If you're amenable to the suggestion, perhaps we could collaborate on it @pa250194? Either by me being much more specific in my description, or even by making a pull request to your fork.
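For concreteness, the suggestion might be sketched roughly like this (the interface name, method names, and signatures below are assumptions for illustration, not the PR's actual code): the clients only enumerate and download objects, the selection logic lives in one caller, and returning an error from the visit callback aborts the whole operation.

```go
// Sketch: clients only list and download; all select-and-fetch logic is in
// one place, and any download failure fails the whole operation.
package bucket

import (
	"context"
	"path/filepath"
)

type objectClient interface {
	// VisitObjects calls visit for every object key in the bucket.
	VisitObjects(ctx context.Context, bucketName string, visit func(key string) error) error
	// FGetObject downloads the object with the given key to localPath.
	FGetObject(ctx context.Context, bucketName, key, localPath string) error
}

// fetchAll downloads every object accepted by the include predicate; a
// non-nil error from any download stops the iteration and is returned.
func fetchAll(ctx context.Context, c objectClient, bucketName, dir string, include func(string) bool) error {
	return c.VisitObjects(ctx, bucketName, func(key string) error {
		if !include(key) {
			return nil
		}
		return c.FGetObject(ctx, bucketName, key, filepath.Join(dir, key))
	})
}
```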
Hey @squaremo I'm almost done making the changes. We can for sure collaborate on this pull request. I will push my most recent changes and fixes to my branch in a few hours. You can for sure make a pull request to my fork. |
Force-pushed from 66682e5 to db1e8db
Hey @squaremo I have just finished the requested changes. Please let me know if there is anything else I need to change and I will be glad to refactor it 🙂.
@pa250194 I've made a PR to your PR, which does a bit more shifting code around. See what you think!
controllers/bucket_controller.go (outdated):

type BucketProvider interface {
    BucketExists(context.Context, string) (bool, error)
    ObjectExists(context.Context, string, string) (bool, error)
    FGetObject(context.Context, string, string, string) error
    ListObjects(context.Context, gitignore.Matcher, string, string) error
    ObjectIsNotFound(error) bool
    VisitObjects(context.Context, string, func(string) error) error
    Close(context.Context)
}
Think it would be better if this moved to pkg/bucket (or internal/bucket) and have the implementation-specific packages move to pkg/bucket/<impl>. This creates better separation between the reconciler and the bucket clients, and aligns with the structural changes recently made in #462 and pending in #485.
> Think it would be better if [the interface] moved to pkg/bucket (or internal/bucket) [...]
> This creates better separation between the reconciler and the bucket clients

I don't see how it does that. Then the controller code would be coupled to the package with the interface and the packages with the implementations, since it's the controller that decides which implementation it wants (because it has access to the bucket object). In general it's better to declare requirements in the place they are required.
If all the funcs using a bucket client could be put in a package (e.g., internal/bucket) without creating a cycle, then it might be reasonable to move the interface (or better: more precise interfaces) there, since that's where the requirement would be. But it would be a bit of an empty gesture, when there are no other consumers of those funcs.
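As a tiny illustration of that principle (hypothetical names, not code from this repo): the consumer package declares only the capability it needs, and Go's implicit interface satisfaction means the client implementations never have to import it.

```go
// The reconciler declares the requirement it has of a bucket client; a
// minio- or GCP-backed client satisfies it implicitly.
package controllers

import "context"

type bucketDownloader interface {
	FGetObject(ctx context.Context, bucketName, key, localPath string) error
}

// downloadOne is a trivial consumer of the requirement declared above.
func downloadOne(ctx context.Context, d bucketDownloader, bucketName, key, localPath string) error {
	return d.FGetObject(ctx, bucketName, key, localPath)
}
```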
Hey @squaremo sorry for the late response. I am currently occupied by a high-priority sprint but will look at the PR and the tests that fail soon. I appreciate the help and contribution to my PR 🙂.
Force-pushed from 22a007d to 83431e2
Hi! I'm back from holidays. With respect to the client tests: I think it would be reasonable to verify only that the wrappers respond to their altered API correctly, assuming that the underlying client libraries work. That way you don't have to use the real services or mock them -- you can just provide a fake client. I'll work on that for the GCP client.
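A fake client along those lines might look roughly like this (a sketch assuming the interface shape shown earlier; only the relevant methods are included and the names are illustrative):

```go
// An in-memory fake bucket client for testing the code that consumes the
// provider interface, without real services or credentials.
package controllers

import (
	"context"
	"errors"
	"os"
	"path/filepath"
)

var errObjectNotFound = errors.New("object not found")

type fakeClient struct {
	objects map[string][]byte // object key -> contents
}

func (f *fakeClient) BucketExists(ctx context.Context, bucketName string) (bool, error) {
	return true, nil
}

func (f *fakeClient) VisitObjects(ctx context.Context, bucketName string, visit func(string) error) error {
	for key := range f.objects {
		if err := visit(key); err != nil {
			return err
		}
	}
	return nil
}

func (f *fakeClient) FGetObject(ctx context.Context, bucketName, key, localPath string) error {
	data, ok := f.objects[key]
	if !ok {
		return errObjectNotFound
	}
	if err := os.MkdirAll(filepath.Dir(localPath), 0o750); err != nil {
		return err
	}
	return os.WriteFile(localPath, data, 0o640)
}

func (f *fakeClient) ObjectIsNotFound(err error) bool {
	return errors.Is(err, errObjectNotFound)
}

func (f *fakeClient) Close(ctx context.Context) {}
```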
Although ... the fake service that's there works OK, and it looks like the problem is mainly that the failing test (https://github.com/fluxcd/source-controller/runs/4543207347?check_suite_focus=true#step:9:952) just needs to be running in GCP or to have an environment variable set. The test itself is just checking the branch at pkg/gcp/gcp.go line 65 (as of 83431e2).
The expedient thing to do may be to come up with a different way to test whether the client creation logic works 🤔 ...
Force-pushed from 83431e2 to 342e331
(I rebased to remove the merge commits)
Force-pushed from 342e331 to 16b3cbb
Hey @squaremo just got back from vacation. I will take a look at the test and push a fix for it as I see this is blocking another PR. |
To expand on my comment earlier: I had a look at faking the GCP storage client and it is ... complicated. Mainly because there are a couple of layers of methods that return structs with their own methods, all of which need to be represented as interfaces. So it would be possible, but verbose. If there's a way to test that the construction logic does what it's supposed to (ideally without requiring an external service and credentials), the rest is covered by the fake service. I assume that faking the authentication service is a lot of trouble. One way might be to supply a factory interface to the construction logic, and verify that it's called in the correct way. I'm sure there are other ways.
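The factory idea could be sketched like this (all names here are hypothetical, not the PR's API): the construction logic depends only on a small factory interface, so a test can pass a recording fake and assert which construction path was taken, without touching real credentials.

```go
// Hypothetical sketch: test GCP client construction by injecting a factory.
package gcp

import "context"

// storageFactory abstracts "build a storage client from these credentials".
type storageFactory interface {
	NewClientWithCredentials(ctx context.Context, credentialsJSON []byte) error
	NewClientWithDefaults(ctx context.Context) error
}

// newClient chooses a construction path based on whether credentials were
// supplied via a secret; tests can verify the choice through the factory.
func newClient(ctx context.Context, f storageFactory, credentialsJSON []byte) error {
	if len(credentialsJSON) > 0 {
		return f.NewClientWithCredentials(ctx, credentialsJSON)
	}
	return f.NewClientWithDefaults(ctx)
}

// recordingFactory is what a test would pass in to observe the decision.
type recordingFactory struct {
	withCredentials bool
	withDefaults    bool
}

func (r *recordingFactory) NewClientWithCredentials(ctx context.Context, credentialsJSON []byte) error {
	r.withCredentials = true
	return nil
}

func (r *recordingFactory) NewClientWithDefaults(ctx context.Context) error {
	r.withDefaults = true
	return nil
}
```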
I agree faking the GCP client is complicated. I was thinking that, because all that lines 65-71 of pkg/gcp/gcp.go (as of 83431e2) do is use the GCP client, we could probably implement that test in a new PR so we are not blocking the other PR. In gcp_test.go, the test at line 173 (as of 83431e2) is the one that is failing, because it tries to use the GCP client to authenticate using the GOOGLE_APPLICATION_CREDENTIALS env variable. So I will remove that test, and we could implement the test with a fix in another PR.
If that sounds good to you, I can make a push with the changes.
Yes, I think it's reasonable to defer tests for the construction logic for the minute; the code under test is pretty self-evident (if there's a secret, use that, otherwise let the client do whatever it does ...).
Hi @squaremo, can I get some guidance on how to get the DCO check to pass? I believe that, other than that, this PR should be ready to go. Thank you for your help 🙂
You'll need to rewrite the commits that don't have a sign-off (they are listed on the details page for the DCO check). A fairly easy way to do this is with an interactive rebase: `git rebase -i <base>` will bring up an editor with a list of commits since branching from `<base>`; change `pick` to `edit` for each commit that needs a sign-off. To add your sign-off to a commit, you can do `git commit --amend --signoff --no-edit`. Then you can do `git rebase --continue` to keep going through the commits in the list. If you lose track of where you are, `git status` will tell you. Once you've finished, check that each commit now has a Signed-off-by line (for example with `git log`), then force-push your branch.
You can fix the merge conflict at the same time, if you pull from the main branch in this repo first, then use that as the upstream.
I think I may have messed up the rebase 🤦🏽♂️ @squaremo
Force-pushed from 7d94fa7 to 5ef220d
- Added Bucket Provider Interface
Signed-off-by: pa250194 <[email protected]>
The algorithm for conditionally downloading object files is the same whether you are using GCP storage or an S3/Minio-compatible bucket. The only thing that differs is how the respective clients handle enumerating the objects in the bucket; by implementing just that in each provider, I can have the select-and-fetch code in one place. This deliberately omits the parallelised fetching that the GCP client had, for the sake of lining the clients up. It can be reintroduced (in the factored-out code) later.
Signed-off-by: Michael Bridgen <[email protected]>
This commit reintroduces the use of goroutines for fetching objects, but in the caller of the client interface rather than in a particular client implementation.
Signed-off-by: Michael Bridgen <[email protected]>
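For illustration, caller-side concurrent fetching over the provider interface might look roughly like this (a sketch only; it assumes the VisitObjects/FGetObject shape discussed above and uses golang.org/x/sync/errgroup, so any single failed download fails the whole operation):

```go
// Sketch: fetch objects concurrently in the caller of the client interface,
// rather than inside a particular client implementation.
package controllers

import (
	"context"
	"path/filepath"

	"golang.org/x/sync/errgroup"
)

type provider interface {
	VisitObjects(ctx context.Context, bucketName string, visit func(string) error) error
	FGetObject(ctx context.Context, bucketName, key, localPath string) error
}

func fetchObjects(ctx context.Context, client provider, bucketName, dir string) error {
	group, groupCtx := errgroup.WithContext(ctx)
	group.SetLimit(4) // bound the number of concurrent downloads

	visitErr := client.VisitObjects(groupCtx, bucketName, func(key string) error {
		group.Go(func() error {
			// Any single failed download makes Wait return an error,
			// failing the whole operation.
			return client.FGetObject(groupCtx, bucketName, key, filepath.Join(dir, key))
		})
		return nil
	})
	// Wait for in-flight downloads before reporting either error.
	if err := group.Wait(); err != nil {
		return err
	}
	return visitErr
}
```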
Force-pushed from 5ef220d to 53c2a15
// Look for file with ignore rules first.
ignorefile := filepath.Join(tempDir, sourceignore.IgnoreFile)
if err := client.FGetObject(ctxTimeout, bucket.Spec.BucketName, sourceignore.IgnoreFile, ignorefile); err != nil {
    if client.ObjectIsNotFound(err) && sourceignore.IgnoreFile != ".sourceignore" { // FIXME?
I'm not sure what the intent of this condition is.
It gets the file from the bucket and checks whether the error returned is "object not found", and it ensures the error is not raised for the .sourceignore file, because some users may decide not to have sourceignore files in their buckets.
I mean this part of the condition: sourceignore.IgnoreFile != ".sourceignore" -- this tests that the const sourceignore.IgnoreFile is not equal to a literal value. The literal value happens to be the value of the const at present, so this part -- and the whole condition -- will never be true (the bucket will never be marked unready if the ignore file is missing). If the const value changes in a later revision of the code, then it will be equivalent to the first part of the condition (the bucket will always be marked unready if the ignore file is missing).
From your description, I think the intent is to only mark the bucket as unready if there was an error other than the file not being found.
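In code, that intent could be sketched like the following (illustrative, not the fix as merged; the interface and function names are assumptions that mirror the snippet above):

```go
// Sketch of the intended error handling: a missing ignore file is fine,
// but any other download error is reported to the caller.
package controllers

import (
	"context"
	"path/filepath"
)

type ignoreFetcher interface {
	FGetObject(ctx context.Context, bucketName, key, localPath string) error
	ObjectIsNotFound(err error) bool
}

// fetchIgnoreRules downloads the ignore file if it exists; only errors other
// than "object not found" should cause the Bucket to be marked unready.
func fetchIgnoreRules(ctx context.Context, client ignoreFetcher, bucketName, tempDir, ignoreFileName string) error {
	ignorefile := filepath.Join(tempDir, ignoreFileName)
	if err := client.FGetObject(ctx, bucketName, ignoreFileName, ignorefile); err != nil {
		if !client.ObjectIsNotFound(err) {
			return err // a real error: the caller marks the Bucket unready
		}
		// The ignore file is simply absent; carry on without it.
	}
	return nil
}
```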
Don't worry -- rebasing takes a lot of practice and it's easy to lose your place, or misstep. It often takes me a few attempts to get a rebase right. I've put things back in order here. This is what I did:
Thank you so much for taking the time to explain the steps to me 🙂
Sorry about the wait, I'll be picking this up now to fit it into the recent (rigorous) changes in `main`.
As, due to the various conflicts, I had to fiddle around with the order and state of the commits, which I did not want to get lost (historically) here, this PR is superseded by #596.
If applied, this PR will refactor the Bucket Controller and add a Bucket Provider Interface that can be implemented by any bucket provider added to the bucket controller. This refactor is a result of a comment on PR #434 (comment).