forked from kubernetes/kops
Customize Kops Platform #1
Closed
Conversation
changes are detected and marks nodes with 'NEEDUPDATE'
A while back, options to permit securing kube-apiserver to kubelet API communication were added in kubernetes#2831, using the server.cert and server.key as testing grounds. This PR formalizes the options and generates a client certificate on the users' behalf (note, the server{.cert,.key} pair can no longer be used post 1.7 as the certificate usage is checked, i.e. it's not a client cert). Users now only need to add anonymousAuth: false to enable secure api-to-kubelet communication. I'd like to make this the default for all new builds; I'm not sure where to place it.
- updated security.md to reflect the changes
- issue a new kubelet-api client certificate used to securely authorize comms between the api and the kubelet
- fixed any formatting issues I came across on the journey
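As described above, enabling the secure api-to-kubelet path should only require setting anonymousAuth in the cluster spec; a minimal sketch (cluster name is illustrative):

```yaml
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: example.cluster.k8s.local   # illustrative name
spec:
  kubelet:
    anonymousAuth: false   # kops then issues the kubelet-api client cert for the apiserver
```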
The current implementation does not make it easy to fully customize nodes before the kube install. This PR adds the ability to include file assets in the cluster and instanceGroup spec, which can be consumed by nodeup, allowing those who need it (i.e. me :-)) greater flexibility around their nodes. @note, nothing is enforced; unless you specify anything, everything remains the same.
- updated cluster_spec.md to reflect the changes
- permit users to place inline files into the cluster and instance group specs
- added the ability to template the files; the Cluster and InstanceGroup specs are passed into the context
- cleaned up a missed comment, unordered imports etc. along the journey
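A hedged sketch of what an inline file asset described above could look like (asset name, path, and content are illustrative):

```yaml
spec:
  fileAssets:
  - name: audit-policy               # illustrative asset
    path: /srv/kubernetes/audit.yaml
    roles: [Master]                  # only render on master nodes
    content: |
      apiVersion: audit.k8s.io/v1
      kind: Policy
      rules:
      - level: Metadata
```

Since the Cluster and InstanceGroup specs are passed into the templating context, the content could also reference fields of those specs via template expressions.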
The current implementation does not put any transport security on the etcd cluster. This PR provides an optional flag to enable TLS on the etcd cluster.
- cleaned up and fixed any formatting issues on the journey
- added two new certificates (server/client) for etcd peers and a client certificate for kube-apiserver and perhaps others (calico?)
- disabled the protokube service on nodes completely as it is not required; note this was first raised in kubernetes#3091, but figured it would be easier to place in here given the relation
- updated the protokube codebase to reflect the changes, removing the master option as it's no longer required
- added additional integration tests for the protokube manifests
- note, still need to add documentation, but opening the PR to get feedback
- one outstanding issue is the migration from http -> https for pre-existing clusters; I'm going to hit the CoreOS board to ask for the best options
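The optional flag mentioned above might surface in the cluster spec roughly like this (the enableEtcdTLS field name is an assumption based on the PR description, not confirmed by this page):

```yaml
spec:
  etcdClusters:
  - name: main
    enableEtcdTLS: true   # assumed flag name; switches peer/client traffic to https
    etcdMembers:
    - name: a
      instanceGroup: master-us-east-1a
```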
The current 'kops replace' fails if the resource does not exist, which is annoying if you want to use the feature to drive your CI. This PR adds a --create option to create any resource which does not exist. At the moment we limit this to instanceGroups only. I'd also like to see this command perhaps renamed to 'kops apply'?
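Usage could then look roughly like the following (a usage sketch only; the manifest file name is illustrative and a kops installation is assumed):

```shell
# create-or-update an instance group from a manifest; --create makes
# 'kops replace' create the resource when it does not yet exist
kops replace -f instancegroup.yaml --create
```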
The current implementation does not permit the user to order the hooks. This PR adds optional Requires, Before and Documentation fields to the HookSpec, which are added to the systemd unit if specified.
The present implementation of hooks only supports docker exec, which isn't that flexible. This PR permits the user to further customize systemd units on the instances.
- cleaned up the manifest code, added tests, and permitted setting a raw section
- added the ability to filter hooks via master and node roles
- updated the documentation to reflect the changes
- cleaned up some of the vetting issues
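Taken together, the two hook changes above could be expressed in the cluster spec roughly as follows (unit names and ordering targets are illustrative):

```yaml
spec:
  hooks:
  - name: prepare-node.service       # illustrative custom unit
    roles: [Node]                    # role filtering from this PR
    requires: [network-online.target]
    before: [kubelet.service]        # ordering fields from the HookSpec change
    manifest: |
      Type=oneshot
      ExecStart=/usr/local/bin/prepare-node.sh
```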
- extended the hooks to permit adding hooks per instanceGroup as well
- @note, instanceGroups are permitted to override the cluster-wide hooks, for ease of testing
- updated the documentation to reflect the changes
- on the journey, tried to fix any Go idioms such as import ordering, comments for global exports etc.
- @question: v1alpha1 doesn't appear to have Subnet fields; are these different versions being used anywhere?
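An instanceGroup-level hook overriding a cluster-wide one might then look like this (a sketch; per the description above, the instanceGroup hook takes precedence, and the unit name and script path are illustrative):

```yaml
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  hooks:
  - name: prepare-node.service   # same unit name as the cluster-wide hook
    manifest: |
      Type=oneshot
      ExecStart=/usr/local/bin/prepare-node-ig.sh
```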
Some cluster changes, such as component config modifications, are not picked up when performing updates (nodes are not marked as NEEDUPDATE). This change introduces the ability to:
- include certain cluster specs within the node user data file (enableClusterSpecInUserData: true)
- encode the cluster spec string before placing it within the user data file (enableClusterSpecInUserData: true)
The above flags default to false, so this shouldn't cause any changes to existing clusters.
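The "encode" step could be as simple as gzip + base64 so the spec travels safely inside user data; a minimal Go sketch (function names are illustrative, not the actual nodeup implementation):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"fmt"
	"io"
)

// encodeSpec compresses and base64-encodes a cluster-spec string so it can be
// embedded safely in a node user-data file (illustrative; the real nodeup
// encoding may differ).
func encodeSpec(spec string) (string, error) {
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	if _, err := gz.Write([]byte(spec)); err != nil {
		return "", err
	}
	if err := gz.Close(); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(buf.Bytes()), nil
}

// decodeSpec reverses encodeSpec on the node side.
func decodeSpec(encoded string) (string, error) {
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return "", err
	}
	gz, err := gzip.NewReader(bytes.NewReader(raw))
	if err != nil {
		return "", err
	}
	defer gz.Close()
	out, err := io.ReadAll(gz)
	if err != nil {
		return "", err
	}
	return string(out), nil
}

func main() {
	spec := "kubelet:\n  anonymousAuth: false\n"
	enc, _ := encodeSpec(spec)
	dec, _ := decodeSpec(enc)
	fmt.Println(dec == spec) // prints "true"
}
```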
KashifSaadat pushed a commit that referenced this pull request on Jan 25, 2019
In case of increased I/O load, the 10s timeout is not enough on small or heavily loaded systems, thus I propose 60s. The kubelet timeout for detecting health problems is 2m (120s) by default. Secondly, a docker restart can heavily load the host OS, even on huge systems, because many pods initialize at the same time; a continuous dockerd restart loop, effectively a deadlocked node, has been observed. Thirdly, because of forcibly closed sockets and the kernel TCP TIME_WAIT value, the TCP sockets are not usable immediately after a restart; waiting for FIN_TIMEOUT is necessary before starting services. Workaround #1 for: kubernetes#5434
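The wait-with-longer-deadline idea above can be sketched as a small poll loop (a sketch only; the real check in kops differs, and "docker info" is just an example probe):

```shell
#!/bin/sh
# wait_for CMD SECONDS: poll once per second until CMD succeeds or the
# deadline passes; returns non-zero on timeout.
wait_for() {
  _cmd=$1
  _deadline=$2
  _elapsed=0
  until $_cmd >/dev/null 2>&1; do
    [ "$_elapsed" -ge "$_deadline" ] && return 1
    sleep 1
    _elapsed=$((_elapsed + 1))
  done
  return 0
}

# e.g. give dockerd 60s (rather than 10s) to become healthy:
# wait_for "docker info" 60 || systemctl restart docker
```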
Kubelet API (kubernetes#3125)
A while back, options to permit securing kube-apiserver to kubelet API communication were added in PR2381, using the server.cert and server.key as testing grounds. This PR formalizes the options and generates a client certificate on the users' behalf (note, the server{.cert,.key} pair can no longer be used post 1.7 as the certificate usage is checked, i.e. it's not a client cert). Users now only need to add anonymousAuth: false to enable secure api-to-kubelet communication. I'd like to make this the default for all new builds; I'm not sure where to place it.
Etcd TLS (kubernetes#3114)
The current implementation does not put any transport security on the etcd cluster. This PR provides an optional flag to enable TLS on the etcd cluster.
Configuration Detection (kubernetes#3120)
Related to kubernetes#3076
Some cluster changes, such as component config modifications, are not picked up when performing updates (nodes are not marked as NEEDUPDATE). This change introduces the ability to:
- Include certain cluster specs within the node user data file (enableClusterSpecInUserData: true)
- Encode the cluster spec string before placing within the user data file (enableClusterSpecInUserData: true)
- The above flags default to false, so this shouldn't cause any changes to existing clusters.
Cluster Hook Enhancement (kubernetes#3063)
The current implementation is presently limited to docker exec, without ordering or any bells and whistles. This PR extends the functionality of the hook spec by:
Cluster Inline File assets (kubernetes#3090)
The current implementation does not make it easy to fully customize nodes before the kube install. This PR adds the ability to include file assets in the cluster and instanceGroup spec, which can be consumed by nodeup, allowing those who need it (i.e. me :-)) greater flexibility around their nodes. @note, nothing is enforced; unless you specify anything, everything remains the same.
notes: In addition to this, we need to look at detecting changes in the cluster and instance group spec. Thinking out loud, perhaps using a last_known_configuration annotation, similar to kubernetes.
Replace and Create Command (kubernetes#3090)
The current 'kops replace' fails if the resource does not exist, which is annoying if you want to use the feature to drive your CI. This PR adds a --create option to create any resource which does not exist. At the moment we limit this to instanceGroups only. I'd also like to see this command perhaps renamed to 'kops apply'?
Fixes