feat: Live resource view #267
base: main
Conversation
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files:

@@ Coverage Diff @@
##             main     #267      +/-   ##
==========================================
+ Coverage   48.20%   48.41%   +0.21%
==========================================
  Files          55       72      +17
  Lines        5066     6219    +1153
==========================================
+ Hits         2442     3011     +569
- Misses       2424     2949     +525
- Partials      200      259      +59
Added a couple of small questions. Everything else looks good to me.
Looks great @jannfis, posted my thoughts below on anything that jumped out at me while working through the PR.
And one final question: I didn't see any E2E-style tests for the feature; what are your plans here?
if err != nil {
	logCtx.Errorf("could not request resource: %v", err)
	status = err
If err is not nil here, then based on my understanding we were not able to fetch any resources. Should we return the error directly from here?
Hm. In my thought process, it is not an error condition for processIncomingResourceRequest itself when we could not fetch the resource from the local cluster. We rather need to let the other side know that resource fetching failed, and the reason why (e.g. resource not found, or access forbidden).
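To illustrate that intent, here is a minimal sketch of how a fetch error could be mapped to an HTTP status code for the principal, assuming the agent uses the k8s.io/apimachinery error helpers; the function name and its placement are hypothetical, not taken from this PR:

```go
import (
	"net/http"

	kerrors "k8s.io/apimachinery/pkg/api/errors"
)

// statusCodeForFetchError maps an error from a local resource fetch to the
// HTTP status code the agent would report back to the principal.
// (Hypothetical helper, for illustration only.)
func statusCodeForFetchError(err error) int {
	switch {
	case kerrors.IsNotFound(err):
		return http.StatusNotFound
	case kerrors.IsForbidden(err):
		return http.StatusForbidden
	case kerrors.IsUnauthorized(err):
		return http.StatusUnauthorized
	default:
		return http.StatusInternalServerError
	}
}
```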
That makes sense. Continuing with the same scenario, would the JSON marshaling (line 129 below) be considered more of an internal error, or should we return that to the caller as well? My thought is that we might fail to send any response at all in scenarios where JSON marshaling errors occur.
That's a good question, and one that could be asked about any error condition in that particular method.
For now, I think the answer is: we want to transport any status codes resulting from the communication between the agent and the Kubernetes API back to the principal, but nothing else. This is because the principal can then pass that status code on to the consumer of the resource proxy.
We could go ahead and send HTTP 500 status codes back to the principal whenever something goes wrong in processIncomingResourceRequest, but then the question is: should we? I have not yet found a good answer to that question. The principal will notice anyway that no reply has been sent by the agent, and will let its consumer know.
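For comparison, a rough sketch of the principal side under that design, assuming a per-request reply channel; all names here (resourceResponse, awaitAgentReply, the timeout value) are made up for illustration:

```go
import (
	"context"
	"net/http"
	"time"
)

// resourceResponse is a hypothetical stand-in for the agent's reply.
type resourceResponse struct {
	StatusCode int
	Body       []byte
}

// awaitAgentReply relays the agent's answer to the resource proxy's consumer,
// or reports a gateway timeout if the agent never replies.
func awaitAgentReply(ctx context.Context, replyCh <-chan *resourceResponse, w http.ResponseWriter) {
	select {
	case resp := <-replyCh:
		// Pass the agent-reported status code straight through to the consumer.
		w.WriteHeader(resp.StatusCode)
		_, _ = w.Write(resp.Body)
	case <-time.After(10 * time.Second):
		// No reply from the agent: let the consumer know.
		http.Error(w, "no response from agent", http.StatusGatewayTimeout)
	case <-ctx.Done():
		http.Error(w, ctx.Err().Error(), http.StatusGatewayTimeout)
	}
}
```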
@jgwest I have not yet thought about e2e tests, tbh. Agree we need to have some. Will work on them together with the corresponding CLI PR that's in the queue.
What does this PR do / why we need it:
This PR introduces functionality to provide a live resource view from the agents to the control plane. There are still a few caveats with this, and some things to sort out.
The general architecture is documented here.
It's also a prerequisite for other upcoming features, such as log view, resource actions, and resource manipulation, which is why I'd like to get the code in early.
The CLI components to manipulate cluster secrets will be a follow-up PR.
Which issue(s) this PR fixes:
Fixes #?
How to test changes / Special notes to the reviewer:
The CLI code for managing cluster secrets will be submitted in a separate PR; as a result, the code here is potentially difficult to test.
Checklist