Generalize the hardware client interface #81
Conversation
Force-pushed from 0b2c350 to 5cd325b.
Codecov Report
```diff
@@            Coverage Diff             @@
##             main      #81      +/-   ##
==========================================
+ Coverage   50.71%   50.83%   +0.12%
==========================================
  Files           3        3
  Lines         418      417       -1
==========================================
  Hits          212      212
+ Misses        175      174       -1
  Partials       31       31
```
Continue to review full report at Codecov.
I'm loving the push towards Kubernetes as a back end. Thanks
One small nit, otherwise lgtm. Thanks!
Force-pushed from 5cd325b to 001aff6.
The hardware client is used to interact with a Cacher service and the Tink service. It embodies gRPC-specific types and an ambiguous All() call that are difficult to model for a Kubernetes data provider. This change alters the interface to model more generic behavior and removes the gRPC-specific types, paving the way for a Kubernetes data provider client.

Signed-off-by: Chris Doherty <[email protected]>
Force-pushed from 001aff6 to 06a4519.
## Description

Builds on #95 (rebased after the PR was created).

This feature builds on #81 and realizes the Hegel portion of the [Kubernetes back-end proposal](https://github.com/tinkerbell/proposals/tree/main/proposals/0026). Having generalized the client used to acquire metadata, this PR implements a Kubernetes client that leverages an internal caching mechanism to reduce load on the Kubernetes API Server (a sketch of this caching pattern follows the description).

Additionally, I put some effort into reworking the Hegel Dockerfile so it can leverage layer caching for Go module dependencies, and added an image build command to the Makefile.

The `Watch` API for the client is unimplemented and scheduled for a follow-up so that we can get machine provisioning off the ground. The `Watch` API backs the subscription capabilities, so it doesn't feature in metadata retrieval.

## How Has This Been Tested?

Built and ran the image with the Kubernetes back-end enabled, talking to a KinD cluster containing the necessary CRDs and a `Hardware` resource.

## How are existing users impacted? What migration steps/scripts do we need?

None. Users enable the Kubernetes back-end by setting `DATA_MODEL_VERSION=kubernetes` plus some additional environment variables.

## Future work

- Implement the `Watch` API for consistency with the other back-ends.
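For illustration, a cache-backed read path might look roughly like the sketch below. It uses controller-runtime's `cluster` package, whose default client serves reads from a shared informer cache instead of hitting the API server on every request. The `tinkerbell.org/v1alpha1` `Hardware` GVK and the `worker-1` resource name are assumptions for the example, not the PR's actual code.

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cluster"
)

func main() {
	// Standard kubeconfig resolution (in-cluster config or KUBECONFIG).
	cfg, err := ctrl.GetConfig()
	if err != nil {
		panic(err)
	}

	// cluster.New wires a client to a shared informer cache; subsequent
	// reads are served from the cache rather than the API server.
	c, err := cluster.New(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	go func() {
		// Start runs the cache's informers until the context is canceled.
		if err := c.Start(ctx); err != nil {
			panic(err)
		}
	}()
	if !c.GetCache().WaitForCacheSync(ctx) {
		panic("cache never synced")
	}

	// Read a Hardware custom resource through the cache-backed client.
	// Unstructured keeps the example free of the Tink API module.
	hw := &unstructured.Unstructured{}
	hw.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "tinkerbell.org",
		Version: "v1alpha1",
		Kind:    "Hardware",
	})
	key := types.NamespacedName{Namespace: "default", Name: "worker-1"} // hypothetical resource
	if err := c.GetClient().Get(ctx, key, hw); err != nil {
		panic(err)
	}
	fmt.Println(hw.GetName())
}
```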
Context
To complement the Kubeification of Tinkerbell's back-end, we need a mechanism for supplying hardware data from the Kubernetes API Server in Hegel.
Changes
The hardware client is used to interact with a Cacher service and the Tink service. It embodies gRPC-specific types and an ambiguous All() call that are difficult to model for a Kubernetes data provider.
This change alters the interface to model more generic behavior and removes the gRPC-specific types, paving the way for a Kubernetes data provider client that can plug into the existing business logic.
This change doesn't alter any service behavior.
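To give a sense of what "more generic behavior" can look like, here is a minimal sketch of a backend-neutral client shape. The method set and the `Hardware`/`Watcher` types are illustrative assumptions inferred from this description, not the exact definitions in the diff; the point is that no gRPC types leak through the signatures and there is no All() call, so a Kubernetes implementation can satisfy the interface as readily as the Cacher/Tink ones.

```go
package hardware

import "context"

// Hardware is a backend-agnostic view of a hardware record. Each concrete
// client (Cacher, Tink, Kubernetes) translates its native type into it.
type Hardware interface {
	// Export renders the record in the format served to machines.
	Export() ([]byte, error)
	// ID returns the record's unique identifier.
	ID() (string, error)
}

// Watcher surfaces changes to a single hardware record, backing the
// subscription capabilities.
type Watcher interface {
	Next(ctx context.Context) (Hardware, error)
}

// Client abstracts the data provider behind plain Go types, so existing
// business logic is untouched when a new back-end is plugged in.
type Client interface {
	IsHealthy(ctx context.Context) bool
	ByIP(ctx context.Context, ip string) (Hardware, error)
	Watch(ctx context.Context, id string) (Watcher, error)
}
```

Returning `Hardware` as an interface rather than a concrete struct lets each back-end keep its own native representation and only pay a translation cost at the serving boundary.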