Add Kubernetes provider #3132
Re: passing multi-line cert/key information to providers in the environment: for the Chef provider I started in #3084, I went with the approach of making the property a string that takes the key material itself, but making the environment variable be the path to a file to read. As you noted, this requires some extra logic to handle the env var, but not much extra, and I think it could generalize well to being reusable in helper/schema. To me, this felt like the most pragmatic compromise.
I am surprised and pleased to see this being worked on, but is there precedent for using Terraform to manage software like this? I'd understood Terraform to be for managing cloud resources themselves, but not for trying to manage the software that runs on them in anything other than a very trivial capacity. The idea of having a Terraform resource for something as high level as
@jimmycuadra The theoretical boundary (as designed) for Terraform was always "schedulers" in the most abstract sense. EC2, Azure, SaaS, PaaS, etc. are all schedulers. EC2 schedules "machines", Heroku schedules "apps" and "services", etc. It makes total sense in this world to include something like Kubernetes, since in practice we find that with any scheduler boundary, you usually have a need for a tool like Terraform to manage resources within that scheduled environment (while Terraform manages resources, the scheduler places those resources).
Ah, I see. I think the key distinction that I missed is that this provider is intended to manage the actual resources a k8s cluster provides, using an existing cluster (e.g. GKE, the hosted service from Google Cloud Platform), rather than to stand up the k8s cluster itself.
With regard to the specs the K8S server returns - if I'm understanding
That's exactly what I have in mind 😃 Some things will be easy - string/int attributes; some will be a bit more difficult, e.g. a pod automatically gets a new default volume with secrets for accessing Google APIs. That will need to be ignored in an array of user-defined volumes. I'm not sure I'll be able to make this generic enough so that there's just one function to rule them all, but that's ideally the goal.
Ah, I see what you mean! If you want to do it generically, you could write your own DeepEqual that only compares elements found in the first argument. Unfortunately I can't find anything in the standard library that will do that for you.
I was more thinking of using
I know that things like arrays won't be addressable in any generic way (I will have to be resource-specific in filtering arrays), but everything else could be.
I think if you model your compare function after DeepEqual you don't need to check/hardcode specific resources. Instead, check that every element in one argument (an arbitrary user config) exists and matches that in the other (k8s provided state) recursively. This also works for elements of array or slice type, with a little extra logic handling the fact that elements in one array may occur at a different index in the other array. Normally I would rely on the reflect package as well, but I don't see that it provides a fine-grained enough way to iterate over arbitrary types.
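A sketch of that idea, assuming the specs are generic JSON-style data (`map[string]interface{}` / `[]interface{}`). This is illustrative only, not the provider's actual compare function:

```go
package main

import (
	"fmt"
	"reflect"
)

// subsetEqual reports whether everything defined in want (the user
// config) exists and matches in got (the server-returned spec),
// ignoring any extra attributes the server added. For slices, each
// wanted element must match some element of got, regardless of index.
func subsetEqual(want, got interface{}) bool {
	switch w := want.(type) {
	case map[string]interface{}:
		g, ok := got.(map[string]interface{})
		if !ok {
			return false
		}
		for k, wv := range w {
			gv, present := g[k]
			if !present || !subsetEqual(wv, gv) {
				return false
			}
		}
		return true
	case []interface{}:
		g, ok := got.([]interface{})
		if !ok {
			return false
		}
		for _, wv := range w {
			found := false
			for _, gv := range g {
				if subsetEqual(wv, gv) {
					found = true
					break
				}
			}
			if !found {
				return false
			}
		}
		return true
	default:
		// Scalars: fall back to exact comparison.
		return reflect.DeepEqual(want, got)
	}
}

func main() {
	user := map[string]interface{}{"name": "web", "ports": []interface{}{80.0}}
	server := map[string]interface{}{
		"name":   "web",
		"ports":  []interface{}{80.0},
		"nodeIP": "10.0.0.5", // server-added attribute, ignored
	}
	fmt.Println(subsetEqual(user, server))
}
```

The asymmetry is the point: extra attributes on the server side never cause a diff, while anything the user actually wrote must be present and equal.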
I don't know what the custom is here since Go doesn't support function overloading, but it might be worthwhile to rewrite the (expand/flatten/normalize)*Spec functions to avoid all the duplicated code.

Also - I see that the K8S API prefers to read the auth config out of a separate JSON file. Seeing that you're reading most of the K8S specs out of non-Terraform config files, is there a reason why you decided to do it differently for the provider?

What are your thoughts on adding some

I might be wrong, but I don't think you can have
I will have a look into that, but I'm afraid I'll be balancing between clean (duplicated, but readable) code and a long single function that's hard to understand. I'm not sure the abstraction will bring much benefit, but I might be wrong.
I totally don't mind adding support for a JSON file, but I'd like to keep the config options separated, so it works well with

If it was only supporting a JSON file, then the workaround would have to be using
There's definitely a bunch of attributes that would be worth exposing - e.g. the hostname/IP address of a service. I will look into that. This is something I consider easy & quick to solve compared to all the other things on the list and the other things you mentioned, though.
I'm not sure whether it's intentional or a bug, but you can set a resource quota for the default namespace - hence it's optional. See the related acceptance test for ResourceQuota - it works.
+1 CompareFunc is really nice. Some providers do strange things with whitespace, so this would let us add a CompareFunc like trim(a) == trim(b) 😄
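If something like CompareFunc landed, the whitespace case really could be that small. The signature below is hypothetical, since helper/schema has no CompareFunc at this point:

```go
package main

import (
	"fmt"
	"strings"
)

// compareTrimmed is what a whitespace-insensitive CompareFunc could
// look like; the signature is an assumption, not an existing
// helper/schema API.
func compareTrimmed(a, b string) bool {
	return strings.TrimSpace(a) == strings.TrimSpace(b)
}

func main() {
	fmt.Println(compareTrimmed("  foo\n", "foo"))
}
```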
I've been thinking it might make more sense to move all of the attributes in the YAML K8S config files into the Terraform config files. While it will be a bit of work, and it means we have to update the K8S resources as their APIs are improved, we don't get any of the benefits of the Terraform config language as long as the entire K8S resource config is separate from the Terraform resource config. If this sounds good to you, I'll move whichever attributes you need, but we should think about where we want to edit your existing code. Your Terraform fork would probably be easiest.
@lwander That's changing the direction completely. When I was submitting ECS, I was first thinking about this too, but then I decided not to break this down. It would solve some of the mentioned issues, but it also breaks any compatibility we now have with kubectl and potentially other tools that people are already using to manage K8S clusters. I'm frankly not too inclined toward that approach, because any demo we can now find will likely consist of a bunch of YAML/JSON files that could very easily be launched via Terraform with a little bit of DSL. Could any core maintainer share an opinion on this, please? @mitchellh @phinze
If you are launching some stateless web app on Kubernetes and you need it to depend on some database cluster also defined in Terraform with instance resources, can you do that with YAML strings in the schema? If not, I think that would be a major loss. I don't know if a similar case exists for ECS. Am I understanding correctly that you're saying it would be hard to take existing k8s configs and port them to Terraform because one would have to convert from YAML/JSON to HCL? There may be other ways to address that, like building a little third-party tool to convert. Another option might be an interpolation syntax that can parse JSON/YAML (from a string) and turn it into structured HCL. That may be useful in other cases too. Also, people (for example me) often use the JSON mode of Terraform, and I think these large strings would not mesh well with that approach.
Mitchell's comments in #3026 are relevant, but I think in this case we are really dealing with structured data that has a given schema, not just "arbitrary JSON", which Terraform doesn't have much ability to process and thus may as well just be a string.
Also, at least both the Container (inside the PodSpec) and the ServiceSpec contain fields that cannot be changed by updates. Since you don't have either spec marked as
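For immutable fields, helper/schema's ForceNew flag makes Terraform plan a destroy-and-recreate instead of an in-place update. A fragment of what that could look like (attribute name is illustrative; this depends on the real helper/schema package and isn't standalone code):

```go
// Fragment only: part of a resource's Schema map in helper/schema.
"spec": &schema.Schema{
	Type:     schema.TypeString,
	Required: true,
	// ForceNew: any change to this field destroys and recreates the
	// resource rather than attempting an update the API would reject.
	ForceNew: true,
},
```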
Over in #3124 Radek and I were debating a similar thing for IAM Policy JSON. I'd proposed a compromise where we have the various policy properties take the raw JSON (with some normalization, of course) and then have a separate logical resource to help in producing that JSON. Radek didn't agree with me there of course, so maybe he'll also disagree with me here. 😀 My rationale was that you could in principle generate the JSON using the

I like the fact that this approach gives you the option of using a pre-existing file if you want, but gives you an easy way to generate a config on the fly if that's more appropriate for your needs. The downside is that it's one more resource to document, and we'd need to explain well in the docs what the difference between the two is, so users aren't confused.
Agreed.

What I'm also realising is that we might be able to use
I think you convinced me after all, Martin. My concerns about the separate IAM document resource were a bit different than here, though. Pods, services etc. still pretty much map to the API; that IAM document will be more like

The reason I'm raising this question and need more voices on either side is that when a user comes to ask "why did you design it this way and not the other?", we can just point him/her to this thread and/or use the valid arguments you are all mentioning. If nobody else comes in with a different opinion over the weekend, we can start working on expanding it, and I'll be much happier doing so now than before, since we have enough arguments. 😉
Before carrying on with the work on this PR, it would help to have https://github.com/hashicorp/terraform/issues/2275 implemented. |
As we discussed recently with @lwander & @sparkprime, I resubmitted this under #3453
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Status (TL;DR)

- kubernetes_limit_range resource + acceptance test
- kubernetes_namespace resource + acceptance test
- kubernetes_persistent_volume resource + acceptance test
- kubernetes_persistent_volume_claim resource + acceptance test
- kubernetes_pod resource + acceptance test
- kubernetes_replication_controller resource + acceptance test
- kubernetes_resource_quota resource + acceptance test
- kubernetes_service resource + acceptance test
- KUBERNETES_CLIENT_CERTIFICATE / KUBERNETES_CLIENT_CERTIFICATE_PATH etc.
- kubernetes_namespace.finalizers
- kubernetes_limit_range
- kubernetes_persistent_volume
- kubernetes_pod
- kubernetes_replication_controller
- kubernetes_service

Config ENV vars (w/ and w/out _PATH)

I think the config options should stay as they are (accepting key & cert contents as strings) to provide direct compatibility w/ google_container_engine, yet allowing people to use file(). This is however less convenient for ENV variables, since keys & certs are typically multi-line strings and that's a bit tricky to handle in ENV vars. Hence I guess _PATH with some extra logic to read files wins here, but I'll leave this open for a while.

schema.CompareFunc - Why

Creation/deletion of all resources works OK, but the problem is that the K8S server returns specs with some extra attributes that we had not defined when originally sending the spec to the API, and these are often not predictable (e.g. node IP address).

I experimented with a few ideas. I decided not to strip attributes inside state (via StateFunc), because I still think the user should be able to reference the real spec from the server elsewhere. Hence I created schema.CompareFunc, and I'm now trying to come up with a meaningful/universal way of stripping attributes that the user has not defined in the given spec, so that we only compare what has been defined, yet still save the original spec into state.

Mind the test suite

It will also demonstrate the issue with the spec as described above.

cc @sparkprime @lwander
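The "strip attributes the user has not defined, but keep the full spec in state" idea can be sketched as a pruning step run before comparison, assuming both specs are generic JSON-style maps. This is illustrative only, not what the PR implements:

```go
package main

import "fmt"

// pruneToDefined returns a copy of got (the server spec) containing
// only the keys that also appear in want (the user-defined spec),
// applied recursively to nested maps. The full server spec can still
// be saved into state; only the diff/comparison uses the pruned copy.
func pruneToDefined(want, got map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(want))
	for k, wv := range want {
		gv, ok := got[k]
		if !ok {
			continue // user defined it, server omitted it; nothing to keep
		}
		wm, wIsMap := wv.(map[string]interface{})
		gm, gIsMap := gv.(map[string]interface{})
		if wIsMap && gIsMap {
			out[k] = pruneToDefined(wm, gm)
		} else {
			out[k] = gv // keep the server's value for a user-defined key
		}
	}
	return out
}

func main() {
	user := map[string]interface{}{"metadata": map[string]interface{}{"name": "web"}}
	server := map[string]interface{}{
		"metadata": map[string]interface{}{"name": "web", "uid": "1234"},
		"status":   map[string]interface{}{"podIP": "10.0.0.5"},
	}
	// Server-added uid and the whole status section are stripped.
	fmt.Println(pruneToDefined(user, server))
}
```

As noted in the thread, arrays would still need resource-specific filtering; this only addresses the map-shaped parts of a spec.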