From dc6dbaafec75ee5d825033bc1aa04f6cf116937d Mon Sep 17 00:00:00 2001 From: Elad Ben-Israel Date: Thu, 6 Aug 2020 15:03:12 +0300 Subject: [PATCH] feat(eks): deprecate "kubectlEnabled: false" (#9454) When specifying `kubectlEnabled: false`, it _implicitly_ meant that the underlying resource behind the construct would be the stock `AWS::EKS::Cluster` resource instead of the custom resource used by default. This means that many new capabilities of EKS would not be supported (e.g. Fargate profiles). Clusters backed by the custom-resource have all the capabilities (and more) of clusters backed by `AWS::EKS::Cluster`. Therefore, we decided that going forward we are going to support only the custom-resource backed solution. To that end, after this change, defining an `eks.Cluster` with `kubectlEnabled: false` will throw an error with the following message: The "eks.Cluster" class no longer allows disabling kubectl support. As a temporary workaround, you can use the drop-in replacement class `eks.LegacyCluster` but bear in mind that this class will soon be removed and will no longer receive additional features or bugfixes. See https://github.com/aws/aws-cdk/issues/9332 for more details Resolves #9332 BREAKING CHANGE: The experimental `eks.Cluster` construct no longer supports setting `kubectlEnabled: false`. A temporary drop-in alternative is `eks.LegacyCluster`, but we have plans to completely remove support for it in an upcoming release since `eks.Cluster` has matured and should provide all the needed capabilities. Please comment on https://github.com/aws/aws-cdk/issues/9332 if there are use cases that are not supported by `eks.Cluster`. ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license* --- packages/@aws-cdk/aws-eks/README.md | 66 +- packages/@aws-cdk/aws-eks/lib/cluster.ts | 277 ++++---- .../@aws-cdk/aws-eks/lib/fargate-profile.ts | 6 - packages/@aws-cdk/aws-eks/lib/index.ts | 1 + .../@aws-cdk/aws-eks/lib/legacy-cluster.ts | 449 +++++++++++++ .../@aws-cdk/aws-eks/lib/managed-nodegroup.ts | 9 +- .../integ.eks-cluster.kubectl-disabled.ts | 3 +- .../@aws-cdk/aws-eks/test/test.cluster.ts | 109 +--- .../@aws-cdk/aws-eks/test/test.fargate.ts | 14 - .../aws-eks/test/test.legacy-cluster.ts | 590 ++++++++++++++++++ .../@aws-cdk/aws-eks/test/test.nodegroup.ts | 4 +- 11 files changed, 1184 insertions(+), 344 deletions(-) create mode 100644 packages/@aws-cdk/aws-eks/lib/legacy-cluster.ts create mode 100644 packages/@aws-cdk/aws-eks/test/test.legacy-cluster.ts diff --git a/packages/@aws-cdk/aws-eks/README.md b/packages/@aws-cdk/aws-eks/README.md index b330a3eefe7ef..003167058532b 100644 --- a/packages/@aws-cdk/aws-eks/README.md +++ b/packages/@aws-cdk/aws-eks/README.md @@ -48,7 +48,7 @@ cluster.addResource('mypod', { ``` In order to interact with your cluster through `kubectl`, you can use the `aws -eks update-kubeconfig` [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html) command +eks update-kubeconfig` [AWS CLI command](https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html) to configure your local kubeconfig. The EKS module will define a CloudFormation output in your stack which contains @@ -411,8 +411,6 @@ Furthermore, when auto-scaling capacity is added to the cluster (through of the auto-scaling group will be automatically mapped to RBAC so nodes can connect to the cluster. No manual mapping is required any longer. 
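For instance, a minimal sketch (the construct ID, instance type, and capacity below are illustrative, not part of this change) that adds worker capacity and relies on this automatic mapping:

```ts
// `cluster` is an eks.Cluster as defined earlier in this README.
// The construct registers the auto-scaling group's instance role in the
// aws-auth ConfigMap for us, so no explicit cluster.awsAuth call is needed
// for these nodes to join the cluster.
cluster.addCapacity('frontend-nodes', {
  instanceType: new ec2.InstanceType('t3.medium'),
  minCapacity: 3,
});
```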
-> NOTE: `cluster.awsAuth` will throw an error if your cluster is created with `kubectlEnabled: false`. - For example, let's say you want to grant an IAM user administrative privileges on your cluster: @@ -467,68 +465,6 @@ If you want to SSH into nodes in a private subnet, you should set up a bastion host in a public subnet. That setup is recommended, but is unfortunately beyond the scope of this documentation. -### kubectl Support - -When you create an Amazon EKS cluster, the IAM entity user or role, such as a -[federated user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html) -that creates the cluster, is automatically granted `system:masters` permissions -in the cluster's RBAC configuration. - -In order to allow programmatically defining **Kubernetes resources** in your AWS -CDK app and provisioning them through AWS CloudFormation, we will need to assume -this "masters" role every time we want to issue `kubectl` operations against your -cluster. - -At the moment, the [AWS::EKS::Cluster](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html) -AWS CloudFormation resource does not support this behavior, so in order to -support "programmatic kubectl", such as applying manifests -and mapping IAM roles from within your CDK application, the Amazon EKS -construct library uses a custom resource for provisioning the cluster. -This custom resource is executed with an IAM role that we can then use -to issue `kubectl` commands. - -The default behavior of this library is to use this custom resource in order -to retain programmatic control over the cluster. In other words: to allow -you to define Kubernetes resources in your CDK code instead of having to -manage your Kubernetes applications through a separate system. - -One of the implications of this design is that, by default, the user who -provisioned the AWS CloudFormation stack (executed `cdk deploy`) will -not have administrative privileges on the EKS cluster. - -1. Additional resources will be synthesized into your template (the AWS Lambda - function, the role and policy). -2. As described in [Interacting with Your Cluster](#interacting-with-your-cluster), - if you wish to be able to manually interact with your cluster, you will need - to map an IAM role or user to the `system:masters` group. This can be either - done by specifying a `mastersRole` when the cluster is defined, calling - `cluster.awsAuth.addMastersRole` or explicitly mapping an IAM role or IAM user to the - relevant Kubernetes RBAC groups using `cluster.addRoleMapping` and/or - `cluster.addUserMapping`. - -If you wish to disable the programmatic kubectl behavior and use the standard -AWS::EKS::Cluster resource, you can specify `kubectlEnabled: false` when you define -the cluster: - -```ts -new eks.Cluster(this, 'cluster', { - kubectlEnabled: false -}); -``` - -**Take care**: a change in this property will cause the cluster to be destroyed -and a new cluster to be created. - -When kubectl is disabled, you should be aware of the following: - -1. When you log-in to your cluster, you don't need to specify `--role-arn` as - long as you are using the same user that created the cluster. -2. As described in the Amazon EKS User Guide, you will need to manually - edit the [aws-auth ConfigMap](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html) - when you add capacity in order to map the IAM instance role to RBAC to allow nodes to join the cluster. -3. 
Any `eks.Cluster` APIs that depend on programmatic kubectl support will fail - with an error: `cluster.addResource`, `cluster.addChart`, `cluster.awsAuth`, `props.mastersRole`. - ### Helm Charts The `HelmChart` construct or `cluster.addChart` method can be used diff --git a/packages/@aws-cdk/aws-eks/lib/cluster.ts b/packages/@aws-cdk/aws-eks/lib/cluster.ts index 1ca926d07fc93..11f0031ac9524 100644 --- a/packages/@aws-cdk/aws-eks/lib/cluster.ts +++ b/packages/@aws-cdk/aws-eks/lib/cluster.ts @@ -8,7 +8,7 @@ import { CfnOutput, CfnResource, Construct, IResource, Resource, Stack, Tag, Tok import * as YAML from 'yaml'; import { AwsAuth } from './aws-auth'; import { clusterArnComponents, ClusterResource } from './cluster-resource'; -import { CfnCluster, CfnClusterProps } from './eks.generated'; +import { CfnClusterProps } from './eks.generated'; import { FargateProfile, FargateProfileOptions } from './fargate-profile'; import { HelmChart, HelmChartOptions } from './helm-chart'; import { KubernetesPatch } from './k8s-patch'; @@ -118,7 +118,7 @@ export interface ClusterAttributes { /** * Options for configuring an EKS cluster. */ -export interface ClusterOptions { +export interface CommonClusterOptions { /** * The VPC in which to create the Cluster. * @@ -169,6 +169,28 @@ export interface ClusterOptions { */ readonly version: KubernetesVersion; + /** + * Determines whether a CloudFormation output with the name of the cluster + * will be synthesized. + * + * @default false + */ + readonly outputClusterName?: boolean; + + /** + * Determines whether a CloudFormation output with the `aws eks + * update-kubeconfig` command will be synthesized. This command will include + * the cluster name and, if applicable, the ARN of the masters IAM role. + * + * @default true + */ + readonly outputConfigCommand?: boolean; +} + +/** + * Options for EKS clusters. + */ +export interface ClusterOptions extends CommonClusterOptions { /** * An IAM role that will be added to the `system:masters` Kubernetes RBAC * group. @@ -189,14 +211,6 @@ export interface ClusterOptions { */ readonly coreDnsComputeType?: CoreDnsComputeType; - /** - * Determines whether a CloudFormation output with the name of the cluster - * will be synthesized. - * - * @default false - */ - readonly outputClusterName?: boolean; - /** * Determines whether a CloudFormation output with the ARN of the "masters" * IAM role will be synthesized (if `mastersRole` is specified). @@ -206,17 +220,7 @@ export interface ClusterOptions { readonly outputMastersRoleArn?: boolean; /** - * Determines whether a CloudFormation output with the `aws eks - * update-kubeconfig` command will be synthesized. This command will include - * the cluster name and, if applicable, the ARN of the masters IAM role. - * - * @default true - */ - readonly outputConfigCommand?: boolean; - - /** - * Configure access to the Kubernetes API server endpoint. - * This feature is only available for kubectl enabled clusters, i.e `kubectlEnabled: true`. + * Configure access to the Kubernetes API server endpoint.. * * @see https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html * @@ -322,31 +326,29 @@ export class EndpointAccess { } /** - * Configuration props for EKS clusters. + * Common configuration props for EKS clusters. */ export interface ClusterProps extends ClusterOptions { - /** - * Allows defining `kubectrl`-related resources on this cluster. 
- * - * If this is disabled, it will not be possible to use the following - * capabilities: - * - `addResource` - * - `addRoleMapping` - * - `addUserMapping` - * - `addMastersRole` and `props.mastersRole` - * - `endpointAccess` + * NOT SUPPORTED: We no longer allow disabling kubectl-support. Setting this + * option to `false` will throw an error. * - * If this is disabled, the cluster can only be managed by issuing `kubectl` - * commands from a session that uses the IAM role/user that created the - * account. + * To temporary allow you to retain existing clusters created with + * `kubectlEnabled: false`, you can use `eks.LegacyCluster` class, which is a + * drop-in replacement for `eks.Cluster` with `kubectlEnabled: false`. * - * _NOTE_: changing this value will destroy the cluster. This is because a - * managable cluster must be created using an AWS CloudFormation custom - * resource which executes with an IAM role owned by the CDK app. + * Bear in mind that this is a temporary workaround. We have plans to remove + * `eks.LegacyCluster`. If you have a use case for using `eks.LegacyCluster`, + * please add a comment here https://github.com/aws/aws-cdk/issues/9332 and + * let us know so we can make sure to continue to support your use case with + * `eks.Cluster`. This issue also includes additional context into why this + * class is being removed. * + * @deprecated `eks.LegacyCluster` is __temporarily__ provided as a drop-in + * replacement until you are able to migrate to `eks.Cluster`. * - * @default true The cluster can be managed by the AWS CDK application. + * @see https://github.com/aws/aws-cdk/issues/9332 + * @default true */ readonly kubectlEnabled?: boolean; @@ -486,12 +488,6 @@ export class Cluster extends Resource implements ICluster { */ public readonly role: iam.IRole; - /** - * Indicates if `kubectl` related operations can be performed on this cluster. - * - */ - public readonly kubectlEnabled: boolean; - /** * The auto scaling group that hosts the default capacity for this cluster. * This will be `undefined` if the `defaultCapacityType` is not `EC2` or @@ -517,7 +513,7 @@ export class Cluster extends Resource implements ICluster { * that manages it. If this cluster is not kubectl-enabled (i.e. uses the * stock `CfnCluster`), this is `undefined`. */ - private readonly _clusterResource?: ClusterResource; + private readonly _clusterResource: ClusterResource; /** * Manages the aws-auth config map. @@ -530,9 +526,9 @@ export class Cluster extends Resource implements ICluster { private _neuronDevicePlugin?: KubernetesResource; - private readonly endpointAccess?: EndpointAccess; + private readonly endpointAccess: EndpointAccess; - private readonly kubctlProviderSecurityGroup?: ec2.ISecurityGroup; + private readonly kubctlProviderSecurityGroup: ec2.ISecurityGroup; private readonly vpcSubnets: ec2.SubnetSelection[]; @@ -566,6 +562,14 @@ export class Cluster extends Resource implements ICluster { physicalName: props.clusterName, }); + if (props.kubectlEnabled === false) { + throw new Error( + 'The "eks.Cluster" class no longer allows disabling kubectl support. ' + + 'As a temporary workaround, you can use the drop-in replacement class `eks.LegacyCluster`, ' + + 'but bear in mind that this class will soon be removed and will no longer receive additional ' + + 'features or bugfixes. 
See https://github.com/aws/aws-cdk/issues/9332 for more details'); + } + const stack = Stack.of(this); this.vpc = props.vpc || new ec2.Vpc(this, 'DefaultVpc'); @@ -606,69 +610,56 @@ export class Cluster extends Resource implements ICluster { }, }; - let resource; - this.kubectlEnabled = props.kubectlEnabled === undefined ? true : props.kubectlEnabled; - if (this.kubectlEnabled) { - - this.endpointAccess = props.endpointAccess ?? EndpointAccess.PUBLIC_AND_PRIVATE; - this.kubectlProviderEnv = props.kubectlEnvironment; + this.endpointAccess = props.endpointAccess ?? EndpointAccess.PUBLIC_AND_PRIVATE; + this.kubectlProviderEnv = props.kubectlEnvironment; - if (this.endpointAccess._config.privateAccess && this.vpc instanceof ec2.Vpc) { - // validate VPC properties according to: https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html - if (!this.vpc.dnsHostnamesEnabled || !this.vpc.dnsSupportEnabled) { - throw new Error('Private endpoint access requires the VPC to have DNS support and DNS hostnames enabled. Use `enableDnsHostnames: true` and `enableDnsSupport: true` when creating the VPC.'); - } + if (this.endpointAccess._config.privateAccess && this.vpc instanceof ec2.Vpc) { + // validate VPC properties according to: https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html + if (!this.vpc.dnsHostnamesEnabled || !this.vpc.dnsSupportEnabled) { + throw new Error('Private endpoint access requires the VPC to have DNS support and DNS hostnames enabled. Use `enableDnsHostnames: true` and `enableDnsSupport: true` when creating the VPC.'); } + } - this.kubctlProviderSecurityGroup = new ec2.SecurityGroup(this, 'KubectlProviderSecurityGroup', { - vpc: this.vpc, - description: 'Comminication between KubectlProvider and EKS Control Plane', - }); - - // grant the kubectl provider access to the cluster control plane. - this.connections.allowFrom(this.kubctlProviderSecurityGroup, this.connections.defaultPort!); + this.kubctlProviderSecurityGroup = new ec2.SecurityGroup(this, 'KubectlProviderSecurityGroup', { + vpc: this.vpc, + description: 'Comminication between KubectlProvider and EKS Control Plane', + }); - resource = new ClusterResource(this, 'Resource', { - ...clusterProps, - endpointPrivateAccess: this.endpointAccess._config.privateAccess, - endpointPublicAccess: this.endpointAccess._config.publicAccess, - publicAccessCidrs: this.endpointAccess._config.publicCidrs, - }); - this._clusterResource = resource; - - // the security group and vpc must exist in order to properly delete the cluster (since we run `kubectl delete`). - // this ensures that. - this._clusterResource.node.addDependency(this.kubctlProviderSecurityGroup, this.vpc); - - // see https://github.com/aws/aws-cdk/issues/9027 - this._clusterResource.creationRole.addToPolicy(new iam.PolicyStatement({ - actions: ['ec2:DescribeVpcs'], - resources: [ stack.formatArn({ - service: 'ec2', - resource: 'vpc', - resourceName: this.vpc.vpcId, - })], - })); - - // we use an SSM parameter as a barrier because it's free and fast. - this._kubectlReadyBarrier = new CfnResource(this, 'KubectlReadyBarrier', { - type: 'AWS::SSM::Parameter', - properties: { - Type: 'String', - Value: 'aws:cdk:eks:kubectl-ready', - }, - }); + // grant the kubectl provider access to the cluster control plane. 
+ this.connections.allowFrom(this.kubctlProviderSecurityGroup, this.connections.defaultPort!); - // add the cluster resource itself as a dependency of the barrier - this._kubectlReadyBarrier.node.addDependency(this._clusterResource); - } else { + const resource = this._clusterResource = new ClusterResource(this, 'Resource', { + ...clusterProps, + endpointPrivateAccess: this.endpointAccess._config.privateAccess, + endpointPublicAccess: this.endpointAccess._config.publicAccess, + publicAccessCidrs: this.endpointAccess._config.publicCidrs, + }); - if (props.endpointAccess) { - throw new Error("'endpointAccess' is not supported for clusters without kubectl enabled."); - } + // the security group and vpc must exist in order to properly delete the cluster (since we run `kubectl delete`). + // this ensures that. + this._clusterResource.node.addDependency(this.kubctlProviderSecurityGroup, this.vpc); + + // see https://github.com/aws/aws-cdk/issues/9027 + this._clusterResource.creationRole.addToPolicy(new iam.PolicyStatement({ + actions: ['ec2:DescribeVpcs'], + resources: [ stack.formatArn({ + service: 'ec2', + resource: 'vpc', + resourceName: this.vpc.vpcId, + })], + })); + + // we use an SSM parameter as a barrier because it's free and fast. + this._kubectlReadyBarrier = new CfnResource(this, 'KubectlReadyBarrier', { + type: 'AWS::SSM::Parameter', + properties: { + Type: 'String', + Value: 'aws:cdk:eks:kubectl-ready', + }, + }); - resource = new CfnCluster(this, 'Resource', clusterProps); - } + // add the cluster resource itself as a dependency of the barrier + this._kubectlReadyBarrier.node.addDependency(this._clusterResource); this.clusterName = this.getResourceNameAttribute(resource.ref); this.clusterArn = this.getResourceArnAttribute(resource.attrArn, clusterArnComponents(this.physicalName)); @@ -686,27 +677,22 @@ export class Cluster extends Resource implements ICluster { new CfnOutput(this, 'ClusterName', { value: this.clusterName }); } - if (!this.kubectlEnabled) { - if (props.mastersRole) { - throw new Error('Cannot specify a "masters" role if kubectl is disabled'); - } - } else { - // if an explicit role is not configured, define a masters role that can - // be assumed by anyone in the account (with sts:AssumeRole permissions of - // course) - const mastersRole = props.mastersRole ?? new iam.Role(this, 'MastersRole', { - assumedBy: new iam.AccountRootPrincipal(), - }); - - this.awsAuth.addMastersRole(mastersRole); + // if an explicit role is not configured, define a masters role that can + // be assumed by anyone in the account (with sts:AssumeRole permissions of + // course) + const mastersRole = props.mastersRole ?? new iam.Role(this, 'MastersRole', { + assumedBy: new iam.AccountRootPrincipal(), + }); - if (props.outputMastersRoleArn) { - new CfnOutput(this, 'MastersRoleArn', { value: mastersRole.roleArn }); - } + // map the IAM role to the `system:masters` group. + this.awsAuth.addMastersRole(mastersRole); - commonCommandOptions.push(`--role-arn ${mastersRole.roleArn}`); + if (props.outputMastersRoleArn) { + new CfnOutput(this, 'MastersRoleArn', { value: mastersRole.roleArn }); } + commonCommandOptions.push(`--role-arn ${mastersRole.roleArn}`); + // allocate default capacity if non-zero (or default). const minCapacity = props.defaultCapacity === undefined ? 
DEFAULT_CAPACITY_COUNT : props.defaultCapacity; if (minCapacity > 0) { @@ -725,9 +711,7 @@ export class Cluster extends Resource implements ICluster { new CfnOutput(this, 'GetTokenCommand', { value: `${getTokenCommandPrefix} ${postfix}` }); } - if (this.kubectlEnabled) { - this.defineCoreDnsComputeType(props.coreDnsComputeType ?? CoreDnsComputeType.EC2); - } + this.defineCoreDnsComputeType(props.coreDnsComputeType ?? CoreDnsComputeType.EC2); } /** @@ -846,14 +830,10 @@ export class Cluster extends Resource implements ICluster { applyToLaunchedInstances: true, }); - if (options.mapRole === true && !this.kubectlEnabled) { - throw new Error('Cannot map instance IAM role to RBAC if kubectl is disabled for the cluster'); - } - // do not attempt to map the role if `kubectl` is not enabled for this // cluster or if `mapRole` is set to false. By default this should happen. const mapRole = options.mapRole === undefined ? true : options.mapRole; - if (mapRole && this.kubectlEnabled) { + if (mapRole) { // see https://docs.aws.amazon.com/en_us/eks/latest/userguide/add-user-role.html this.awsAuth.addRoleMapping(autoScalingGroup.role, { username: 'system:node:{{EC2PrivateDNSName}}', @@ -871,7 +851,7 @@ export class Cluster extends Resource implements ICluster { } // if this is an ASG with spot instances, install the spot interrupt handler (only if kubectl is enabled). - if (autoScalingGroup.spotPrice && this.kubectlEnabled) { + if (autoScalingGroup.spotPrice) { this.addSpotInterruptHandler(); } } @@ -880,10 +860,6 @@ export class Cluster extends Resource implements ICluster { * Lazily creates the AwsAuth resource, which manages AWS authentication mapping. */ public get awsAuth() { - if (!this.kubectlEnabled) { - throw new Error('Cannot define aws-auth mappings if kubectl is disabled'); - } - if (!this._awsAuth) { this._awsAuth = new AwsAuth(this, 'AwsAuth', { cluster: this }); } @@ -899,10 +875,6 @@ export class Cluster extends Resource implements ICluster { * @attribute */ public get clusterOpenIdConnectIssuerUrl(): string { - if (!this._clusterResource) { - throw new Error('unable to obtain OpenID Connect issuer URL. Cluster must be kubectl-enabled'); - } - return this._clusterResource.attrOpenIdConnectIssuerUrl; } @@ -914,10 +886,6 @@ export class Cluster extends Resource implements ICluster { * @attribute */ public get clusterOpenIdConnectIssuer(): string { - if (!this._clusterResource) { - throw new Error('unable to obtain OpenID Connect issuer. Cluster must be kubectl-enabled'); - } - return this._clusterResource.attrOpenIdConnectIssuer; } @@ -928,10 +896,6 @@ export class Cluster extends Resource implements ICluster { * A provider will only be defined if this property is accessed (lazy initialization). */ public get openIdConnectProvider() { - if (!this.kubectlEnabled) { - throw new Error('Cannot specify a OpenID Connect Provider if kubectl is disabled'); - } - if (!this._openIdConnectProvider) { this._openIdConnectProvider = new iam.OpenIdConnectProvider(this, 'OpenIdConnectProvider', { url: this.clusterOpenIdConnectIssuerUrl, @@ -956,7 +920,6 @@ export class Cluster extends Resource implements ICluster { * @param id logical id of this manifest * @param manifest a list of Kubernetes resource specifications * @returns a `KubernetesResource` object. 
- * @throws If `kubectlEnabled` is `false` */ public addResource(id: string, ...manifest: any[]) { return new KubernetesResource(this, `manifest-${id}`, { cluster: this, manifest }); @@ -968,7 +931,6 @@ export class Cluster extends Resource implements ICluster { * @param id logical id of this chart. * @param options options of this chart. * @returns a `HelmChart` object - * @throws If `kubectlEnabled` is `false` */ public addChart(id: string, options: HelmChartOptions) { return new HelmChart(this, `chart-${id}`, { cluster: this, ...options }); @@ -1010,10 +972,6 @@ export class Cluster extends Resource implements ICluster { * @internal */ public get _kubectlCreationRole() { - if (!this._clusterResource) { - throw new Error('Unable to perform this operation since kubectl is not enabled for this cluster'); - } - return this._clusterResource.creationRole; } @@ -1050,10 +1008,6 @@ export class Cluster extends Resource implements ICluster { public _attachKubectlResourceScope(resourceScope: Construct): KubectlProvider { const uid = '@aws-cdk/aws-eks.KubectlProvider'; - if (!this._clusterResource) { - throw new Error('Unable to perform this operation since kubectl is not enabled for this cluster'); - } - // singleton let provider = this.stack.node.tryFindChild(uid) as KubectlProvider; if (!provider) { @@ -1063,11 +1017,6 @@ export class Cluster extends Resource implements ICluster { env: this.kubectlProviderEnv, }; - if (!this.endpointAccess) { - // this should have been set on cluster instantiation for kubectl enabled clusters - throw new Error("Expected 'endpointAccess' to be defined for kubectl enabled clusters"); - } - if (!this.endpointAccess._config.publicAccess) { // endpoint access is private only, we need to attach the // provider to the VPC so that it can access the cluster. @@ -1076,7 +1025,7 @@ export class Cluster extends Resource implements ICluster { vpc: this.vpc, // lambda can only be accociated with max 16 subnets and they all need to be private. vpcSubnets: {subnets: this.selectPrivateSubnets().slice(0, 16)}, - securityGroups: [this.kubctlProviderSecurityGroup!], + securityGroups: [this.kubctlProviderSecurityGroup], }; } @@ -1174,10 +1123,6 @@ export class Cluster extends Resource implements ICluster { * omitted/removed, since the cluster is created with the "ec2" compute type by default. */ private defineCoreDnsComputeType(type: CoreDnsComputeType) { - if (!this.kubectlEnabled) { - throw new Error('kubectl must be enabled in order to define the compute type for CoreDNS'); - } - // ec2 is the "built in" compute type of the cluster so if this is the // requested type we can simply omit the resource. since the resource's // `restorePatch` is configured to restore the value to "ec2" this means diff --git a/packages/@aws-cdk/aws-eks/lib/fargate-profile.ts b/packages/@aws-cdk/aws-eks/lib/fargate-profile.ts index 76a32929de190..428d96ae24b36 100644 --- a/packages/@aws-cdk/aws-eks/lib/fargate-profile.ts +++ b/packages/@aws-cdk/aws-eks/lib/fargate-profile.ts @@ -140,12 +140,6 @@ export class FargateProfile extends Construct implements ITaggable { constructor(scope: Construct, id: string, props: FargateProfileProps) { super(scope, id); - // currently the custom resource requires a role to assume when interacting with the cluster - // and we only have this role when kubectl is enabled. 
- if (!props.cluster.kubectlEnabled) { - throw new Error('adding Faregate Profiles to clusters without kubectl enabled is currently unsupported'); - } - const provider = ClusterResourceProvider.getOrCreate(this); this.podExecutionRole = props.podExecutionRole ?? new iam.Role(this, 'PodExecutionRole', { diff --git a/packages/@aws-cdk/aws-eks/lib/index.ts b/packages/@aws-cdk/aws-eks/lib/index.ts index 5e1009d98eec7..5773e46ffc2bc 100644 --- a/packages/@aws-cdk/aws-eks/lib/index.ts +++ b/packages/@aws-cdk/aws-eks/lib/index.ts @@ -1,6 +1,7 @@ export * from './aws-auth'; export * from './aws-auth-mapping'; export * from './cluster'; +export * from './legacy-cluster'; export * from './eks.generated'; export * from './fargate-profile'; export * from './helm-chart'; diff --git a/packages/@aws-cdk/aws-eks/lib/legacy-cluster.ts b/packages/@aws-cdk/aws-eks/lib/legacy-cluster.ts new file mode 100644 index 0000000000000..3e2ba0feb9abd --- /dev/null +++ b/packages/@aws-cdk/aws-eks/lib/legacy-cluster.ts @@ -0,0 +1,449 @@ +import * as autoscaling from '@aws-cdk/aws-autoscaling'; +import * as ec2 from '@aws-cdk/aws-ec2'; +import * as iam from '@aws-cdk/aws-iam'; +import * as ssm from '@aws-cdk/aws-ssm'; +import { CfnOutput, Construct, Resource, Stack, Tag, Token } from '@aws-cdk/core'; +import { ICluster, ClusterAttributes, KubernetesVersion, NodeType, DefaultCapacityType, EksOptimizedImage, CapacityOptions, MachineImageType, AutoScalingGroupOptions, CommonClusterOptions } from './cluster'; +import { clusterArnComponents } from './cluster-resource'; +import { CfnCluster, CfnClusterProps } from './eks.generated'; +import { Nodegroup, NodegroupOptions } from './managed-nodegroup'; +import { renderAmazonLinuxUserData, renderBottlerocketUserData } from './user-data'; + +// defaults are based on https://eksctl.io +const DEFAULT_CAPACITY_COUNT = 2; +const DEFAULT_CAPACITY_TYPE = ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.LARGE); + +/** + * Common configuration props for EKS clusters. + */ +export interface LegacyClusterProps extends CommonClusterOptions { + /** + * Number of instances to allocate as an initial capacity for this cluster. + * Instance type can be configured through `defaultCapacityInstanceType`, + * which defaults to `m5.large`. + * + * Use `cluster.addCapacity` to add additional customized capacity. Set this + * to `0` is you wish to avoid the initial capacity allocation. + * + * @default 2 + */ + readonly defaultCapacity?: number; + + /** + * The instance type to use for the default capacity. This will only be taken + * into account if `defaultCapacity` is > 0. + * + * @default m5.large + */ + readonly defaultCapacityInstance?: ec2.InstanceType; + + /** + * The default capacity type for the cluster. + * + * @default NODEGROUP + */ + readonly defaultCapacityType?: DefaultCapacityType; +} + +/** + * A Cluster represents a managed Kubernetes Service (EKS) + * + * This is a fully managed cluster of API Servers (control-plane) + * The user is still required to create the worker nodes. 
+ * + * @resource AWS::EKS::Cluster + */ +export class LegacyCluster extends Resource implements ICluster { + /** + * Import an existing cluster + * + * @param scope the construct scope, in most cases 'this' + * @param id the id or name to import as + * @param attrs the cluster properties to use for importing information + */ + public static fromClusterAttributes(scope: Construct, id: string, attrs: ClusterAttributes): ICluster { + return new ImportedCluster(scope, id, attrs); + } + + /** + * The VPC in which this Cluster was created + */ + public readonly vpc: ec2.IVpc; + + /** + * The Name of the created EKS Cluster + */ + public readonly clusterName: string; + + /** + * The AWS generated ARN for the Cluster resource + * + * @example arn:aws:eks:us-west-2:666666666666:cluster/prod + */ + public readonly clusterArn: string; + + /** + * The endpoint URL for the Cluster + * + * This is the URL inside the kubeconfig file to use with kubectl + * + * @example https://5E1D0CEXAMPLEA591B746AFC5AB30262.yl4.us-west-2.eks.amazonaws.com + */ + public readonly clusterEndpoint: string; + + /** + * The certificate-authority-data for your cluster. + */ + public readonly clusterCertificateAuthorityData: string; + + /** + * The cluster security group that was created by Amazon EKS for the cluster. + */ + public readonly clusterSecurityGroupId: string; + + /** + * Amazon Resource Name (ARN) or alias of the customer master key (CMK). + */ + public readonly clusterEncryptionConfigKeyArn: string; + + /** + * Manages connection rules (Security Group Rules) for the cluster + * + * @type {ec2.Connections} + * @memberof Cluster + */ + public readonly connections: ec2.Connections; + + /** + * IAM role assumed by the EKS Control Plane + */ + public readonly role: iam.IRole; + + /** + * The auto scaling group that hosts the default capacity for this cluster. + * This will be `undefined` if the `defaultCapacityType` is not `EC2` or + * `defaultCapacityType` is `EC2` but default capacity is set to 0. + */ + public readonly defaultCapacity?: autoscaling.AutoScalingGroup; + + /** + * The node group that hosts the default capacity for this cluster. + * This will be `undefined` if the `defaultCapacityType` is `EC2` or + * `defaultCapacityType` is `NODEGROUP` but default capacity is set to 0. 
+ */ + public readonly defaultNodegroup?: Nodegroup; + + private readonly version: KubernetesVersion; + + /** + * Initiates an EKS Cluster with the supplied arguments + * + * @param scope a Construct, most likely a cdk.Stack created + * @param name the name of the Construct to create + * @param props properties in the IClusterProps interface + */ + constructor(scope: Construct, id: string, props: LegacyClusterProps) { + super(scope, id, { + physicalName: props.clusterName, + }); + + const stack = Stack.of(this); + + this.vpc = props.vpc || new ec2.Vpc(this, 'DefaultVpc'); + this.version = props.version; + + this.tagSubnets(); + + // this is the role used by EKS when interacting with AWS resources + this.role = props.role || new iam.Role(this, 'Role', { + assumedBy: new iam.ServicePrincipal('eks.amazonaws.com'), + managedPolicies: [ + iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSClusterPolicy'), + ], + }); + + const securityGroup = props.securityGroup || new ec2.SecurityGroup(this, 'ControlPlaneSecurityGroup', { + vpc: this.vpc, + description: 'EKS Control Plane Security Group', + }); + + this.connections = new ec2.Connections({ + securityGroups: [securityGroup], + defaultPort: ec2.Port.tcp(443), // Control Plane has an HTTPS API + }); + + // Get subnetIds for all selected subnets + const placements = props.vpcSubnets || [{ subnetType: ec2.SubnetType.PUBLIC }, { subnetType: ec2.SubnetType.PRIVATE }]; + const subnetIds = [...new Set(Array().concat(...placements.map(s => this.vpc.selectSubnets(s).subnetIds)))]; + + const clusterProps: CfnClusterProps = { + name: this.physicalName, + roleArn: this.role.roleArn, + version: props.version.version, + resourcesVpcConfig: { + securityGroupIds: [securityGroup.securityGroupId], + subnetIds, + }, + }; + + const resource = new CfnCluster(this, 'Resource', clusterProps); + + this.clusterName = this.getResourceNameAttribute(resource.ref); + this.clusterArn = this.getResourceArnAttribute(resource.attrArn, clusterArnComponents(this.physicalName)); + + this.clusterEndpoint = resource.attrEndpoint; + this.clusterCertificateAuthorityData = resource.attrCertificateAuthorityData; + this.clusterSecurityGroupId = resource.attrClusterSecurityGroupId; + this.clusterEncryptionConfigKeyArn = resource.attrEncryptionConfigKeyArn; + + const updateConfigCommandPrefix = `aws eks update-kubeconfig --name ${this.clusterName}`; + const getTokenCommandPrefix = `aws eks get-token --cluster-name ${this.clusterName}`; + const commonCommandOptions = [ `--region ${stack.region}` ]; + + if (props.outputClusterName) { + new CfnOutput(this, 'ClusterName', { value: this.clusterName }); + } + + // allocate default capacity if non-zero (or default). + const minCapacity = props.defaultCapacity === undefined ? DEFAULT_CAPACITY_COUNT : props.defaultCapacity; + if (minCapacity > 0) { + const instanceType = props.defaultCapacityInstance || DEFAULT_CAPACITY_TYPE; + this.defaultCapacity = props.defaultCapacityType === DefaultCapacityType.EC2 ? + this.addCapacity('DefaultCapacity', { instanceType, minCapacity }) : undefined; + + this.defaultNodegroup = props.defaultCapacityType !== DefaultCapacityType.EC2 ? + this.addNodegroup('DefaultCapacity', { instanceType, minSize: minCapacity }) : undefined; + } + + const outputConfigCommand = props.outputConfigCommand === undefined ? 
true : props.outputConfigCommand; + if (outputConfigCommand) { + const postfix = commonCommandOptions.join(' '); + new CfnOutput(this, 'ConfigCommand', { value: `${updateConfigCommandPrefix} ${postfix}` }); + new CfnOutput(this, 'GetTokenCommand', { value: `${getTokenCommandPrefix} ${postfix}` }); + } + } + + /** + * Add nodes to this EKS cluster + * + * The nodes will automatically be configured with the right VPC and AMI + * for the instance type and Kubernetes version. + * + * Spot instances will be labeled `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`. + * If kubectl is enabled, the + * [spot interrupt handler](https://github.com/awslabs/ec2-spot-labs/tree/master/ec2-spot-eks-solution/spot-termination-handler) + * daemon will be installed on all spot instances to handle + * [EC2 Spot Instance Termination Notices](https://aws.amazon.com/blogs/aws/new-ec2-spot-instance-termination-notices/). + */ + public addCapacity(id: string, options: CapacityOptions): autoscaling.AutoScalingGroup { + if (options.machineImageType === MachineImageType.BOTTLEROCKET && options.bootstrapOptions !== undefined ) { + throw new Error('bootstrapOptions is not supported for Bottlerocket'); + } + const asg = new autoscaling.AutoScalingGroup(this, id, { + ...options, + vpc: this.vpc, + machineImage: options.machineImageType === MachineImageType.BOTTLEROCKET ? + new BottleRocketImage() : + new EksOptimizedImage({ + nodeType: nodeTypeForInstanceType(options.instanceType), + kubernetesVersion: this.version.version, + }), + updateType: options.updateType || autoscaling.UpdateType.ROLLING_UPDATE, + instanceType: options.instanceType, + }); + + this.addAutoScalingGroup(asg, { + mapRole: options.mapRole, + bootstrapOptions: options.bootstrapOptions, + bootstrapEnabled: options.bootstrapEnabled, + machineImageType: options.machineImageType, + }); + + return asg; + } + + /** + * Add managed nodegroup to this Amazon EKS cluster + * + * This method will create a new managed nodegroup and add into the capacity. + * + * @see https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html + * @param id The ID of the nodegroup + * @param options options for creating a new nodegroup + */ + public addNodegroup(id: string, options?: NodegroupOptions): Nodegroup { + return new Nodegroup(this, `Nodegroup${id}`, { + cluster: this, + ...options, + }); + } + + /** + * Add compute capacity to this EKS cluster in the form of an AutoScalingGroup + * + * The AutoScalingGroup must be running an EKS-optimized AMI containing the + * /etc/eks/bootstrap.sh script. This method will configure Security Groups, + * add the right policies to the instance role, apply the right tags, and add + * the required user data to the instance's launch configuration. + * + * Spot instances will be labeled `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`. + * If kubectl is enabled, the + * [spot interrupt handler](https://github.com/awslabs/ec2-spot-labs/tree/master/ec2-spot-eks-solution/spot-termination-handler) + * daemon will be installed on all spot instances to handle + * [EC2 Spot Instance Termination Notices](https://aws.amazon.com/blogs/aws/new-ec2-spot-instance-termination-notices/). + * + * Prefer to use `addCapacity` if possible. 
+ * + * @see https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html + * @param autoScalingGroup [disable-awslint:ref-via-interface] + * @param options options for adding auto scaling groups, like customizing the bootstrap script + */ + public addAutoScalingGroup(autoScalingGroup: autoscaling.AutoScalingGroup, options: AutoScalingGroupOptions) { + // self rules + autoScalingGroup.connections.allowInternally(ec2.Port.allTraffic()); + + // Cluster to:nodes rules + autoScalingGroup.connections.allowFrom(this, ec2.Port.tcp(443)); + autoScalingGroup.connections.allowFrom(this, ec2.Port.tcpRange(1025, 65535)); + + // Allow HTTPS from Nodes to Cluster + autoScalingGroup.connections.allowTo(this, ec2.Port.tcp(443)); + + // Allow all node outbound traffic + autoScalingGroup.connections.allowToAnyIpv4(ec2.Port.allTcp()); + autoScalingGroup.connections.allowToAnyIpv4(ec2.Port.allUdp()); + autoScalingGroup.connections.allowToAnyIpv4(ec2.Port.allIcmp()); + + const bootstrapEnabled = options.bootstrapEnabled !== undefined ? options.bootstrapEnabled : true; + if (options.bootstrapOptions && !bootstrapEnabled) { + throw new Error('Cannot specify "bootstrapOptions" if "bootstrapEnabled" is false'); + } + + if (bootstrapEnabled) { + const userData = options.machineImageType === MachineImageType.BOTTLEROCKET ? + renderBottlerocketUserData(this) : + renderAmazonLinuxUserData(this.clusterName, autoScalingGroup, options.bootstrapOptions); + autoScalingGroup.addUserData(...userData); + } + + autoScalingGroup.role.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSWorkerNodePolicy')); + autoScalingGroup.role.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEKS_CNI_Policy')); + autoScalingGroup.role.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonEC2ContainerRegistryReadOnly')); + + // EKS Required Tags + Tag.add(autoScalingGroup, `kubernetes.io/cluster/${this.clusterName}`, 'owned', { + applyToLaunchedInstances: true, + }); + + if (options.mapRole) { + throw new Error('Cannot map instance IAM role to RBAC if kubectl is disabled for the cluster'); + } + + // since we are not mapping the instance role to RBAC, synthesize an + // output so it can be pasted into `aws-auth-cm.yaml` + new CfnOutput(autoScalingGroup, 'InstanceRoleARN', { + value: autoScalingGroup.role.roleArn, + }); + } + + /** + * Opportunistically tag subnets with the required tags. + * + * If no subnets could be found (because this is an imported VPC), add a warning. + * + * @see https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html + */ + private tagSubnets() { + const tagAllSubnets = (type: string, subnets: ec2.ISubnet[], tag: string) => { + for (const subnet of subnets) { + // if this is not a concrete subnet, attach a construct warning + if (!ec2.Subnet.isVpcSubnet(subnet)) { + // message (if token): "could not auto-tag public/private subnet with tag..." + // message (if not token): "count not auto-tag public/private subnet xxxxx with tag..." + const subnetID = Token.isUnresolved(subnet.subnetId) ? 
'' : ` ${subnet.subnetId}`; + this.node.addWarning(`Could not auto-tag ${type} subnet${subnetID} with "${tag}=1", please remember to do this manually`); + continue; + } + + subnet.node.applyAspect(new Tag(tag, '1')); + } + }; + + // https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html + tagAllSubnets('private', this.vpc.privateSubnets, 'kubernetes.io/role/internal-elb'); + tagAllSubnets('public', this.vpc.publicSubnets, 'kubernetes.io/role/elb'); + } +} + +/** + * Import a cluster to use in another stack + */ +class ImportedCluster extends Resource implements ICluster { + public readonly vpc: ec2.IVpc; + public readonly clusterCertificateAuthorityData: string; + public readonly clusterSecurityGroupId: string; + public readonly clusterEncryptionConfigKeyArn: string; + public readonly clusterName: string; + public readonly clusterArn: string; + public readonly clusterEndpoint: string; + public readonly connections = new ec2.Connections(); + + constructor(scope: Construct, id: string, props: ClusterAttributes) { + super(scope, id); + + this.vpc = ec2.Vpc.fromVpcAttributes(this, 'VPC', props.vpc); + this.clusterName = props.clusterName; + this.clusterEndpoint = props.clusterEndpoint; + this.clusterArn = props.clusterArn; + this.clusterCertificateAuthorityData = props.clusterCertificateAuthorityData; + this.clusterSecurityGroupId = props.clusterSecurityGroupId; + this.clusterEncryptionConfigKeyArn = props.clusterEncryptionConfigKeyArn; + + let i = 1; + for (const sgProps of props.securityGroups) { + this.connections.addSecurityGroup(ec2.SecurityGroup.fromSecurityGroupId(this, `SecurityGroup${i}`, sgProps.securityGroupId)); + i++; + } + } +} + +/** + * Construct an Bottlerocket image from the latest AMI published in SSM + */ +class BottleRocketImage implements ec2.IMachineImage { + private readonly kubernetesVersion?: string; + + private readonly amiParameterName: string; + + /** + * Constructs a new instance of the BottleRocketImage class. + */ + public constructor() { + // only 1.15 is currently available + this.kubernetesVersion = '1.15'; + + // set the SSM parameter name + this.amiParameterName = `/aws/service/bottlerocket/aws-k8s-${this.kubernetesVersion}/x86_64/latest/image_id`; + } + + /** + * Return the correct image + */ + public getImage(scope: Construct): ec2.MachineImageConfig { + const ami = ssm.StringParameter.valueForStringParameter(scope, this.amiParameterName); + return { + imageId: ami, + osType: ec2.OperatingSystemType.LINUX, + userData: ec2.UserData.custom(''), + }; + } +} + +const GPU_INSTANCETYPES = ['p2', 'p3', 'g4']; +const INFERENTIA_INSTANCETYPES = ['inf1']; + +function nodeTypeForInstanceType(instanceType: ec2.InstanceType) { + return GPU_INSTANCETYPES.includes(instanceType.toString().substring(0, 2)) ? NodeType.GPU : + INFERENTIA_INSTANCETYPES.includes(instanceType.toString().substring(0, 4)) ? 
NodeType.INFERENTIA : + NodeType.STANDARD; +} diff --git a/packages/@aws-cdk/aws-eks/lib/managed-nodegroup.ts b/packages/@aws-cdk/aws-eks/lib/managed-nodegroup.ts index 3ab5287b9f782..db2130e01879d 100644 --- a/packages/@aws-cdk/aws-eks/lib/managed-nodegroup.ts +++ b/packages/@aws-cdk/aws-eks/lib/managed-nodegroup.ts @@ -1,7 +1,7 @@ import { InstanceType, ISecurityGroup, SubnetSelection } from '@aws-cdk/aws-ec2'; import { IRole, ManagedPolicy, Role, ServicePrincipal } from '@aws-cdk/aws-iam'; import { Construct, IResource, Resource } from '@aws-cdk/core'; -import { Cluster } from './cluster'; +import { Cluster, ICluster } from './cluster'; import { CfnNodegroup } from './eks.generated'; /** @@ -163,9 +163,8 @@ export interface NodegroupOptions { export interface NodegroupProps extends NodegroupOptions { /** * Cluster resource - * [disable-awslint:ref-via-interface]" */ - readonly cluster: Cluster; + readonly cluster: ICluster; } /** @@ -198,7 +197,7 @@ export class Nodegroup extends Resource implements INodegroup { * * @attribute ClusterName */ - public readonly cluster: Cluster; + public readonly cluster: ICluster; /** * IAM role of the instance profile for the nodegroup */ @@ -265,7 +264,7 @@ export class Nodegroup extends Resource implements INodegroup { // managed nodegroups update the `aws-auth` on creation, but we still need to track // its state for consistency. - if (this.cluster.kubectlEnabled) { + if (this.cluster instanceof Cluster) { // see https://docs.aws.amazon.com/en_us/eks/latest/userguide/add-user-role.html this.cluster.awsAuth.addRoleMapping(this.role, { username: 'system:node:{{EC2PrivateDNSName}}', diff --git a/packages/@aws-cdk/aws-eks/test/integ.eks-cluster.kubectl-disabled.ts b/packages/@aws-cdk/aws-eks/test/integ.eks-cluster.kubectl-disabled.ts index 2282772d42a7c..333cf0a77791a 100644 --- a/packages/@aws-cdk/aws-eks/test/integ.eks-cluster.kubectl-disabled.ts +++ b/packages/@aws-cdk/aws-eks/test/integ.eks-cluster.kubectl-disabled.ts @@ -9,9 +9,8 @@ class EksClusterStack extends TestStack { const vpc = new ec2.Vpc(this, 'VPC'); - const cluster = new eks.Cluster(this, 'EKSCluster', { + const cluster = new eks.LegacyCluster(this, 'EKSCluster', { vpc, - kubectlEnabled: false, defaultCapacity: 0, version: eks.KubernetesVersion.V1_16, }); diff --git a/packages/@aws-cdk/aws-eks/test/test.cluster.ts b/packages/@aws-cdk/aws-eks/test/test.cluster.ts index 52fe7dd7d4e68..aaf1d0a2de124 100644 --- a/packages/@aws-cdk/aws-eks/test/test.cluster.ts +++ b/packages/@aws-cdk/aws-eks/test/test.cluster.ts @@ -1,6 +1,6 @@ import * as fs from 'fs'; import * as path from 'path'; -import { countResources, expect, haveResource, haveResourceLike, not } from '@aws-cdk/assert'; +import { countResources, expect, haveResource, haveResourceLike } from '@aws-cdk/assert'; import * as ec2 from '@aws-cdk/aws-ec2'; import * as iam from '@aws-cdk/aws-iam'; import * as cdk from '@aws-cdk/core'; @@ -20,17 +20,22 @@ export = { const { stack, vpc } = testFixture(); // WHEN - new eks.Cluster(stack, 'Cluster', { vpc, kubectlEnabled: false, defaultCapacity: 0, version: CLUSTER_VERSION }); + new eks.Cluster(stack, 'Cluster', { vpc, defaultCapacity: 0, version: CLUSTER_VERSION }); // THEN - expect(stack).to(haveResourceLike('AWS::EKS::Cluster', { - ResourcesVpcConfig: { - SubnetIds: [ - { Ref: 'VPCPublicSubnet1SubnetB4246D30' }, - { Ref: 'VPCPublicSubnet2Subnet74179F39' }, - { Ref: 'VPCPrivateSubnet1Subnet8BCA10E0' }, - { Ref: 'VPCPrivateSubnet2SubnetCFCDAA7A' }, - ], + 
expect(stack).to(haveResourceLike('Custom::AWSCDK-EKS-Cluster', { + Config: { + roleArn: { 'Fn::GetAtt': [ 'ClusterRoleFA261979', 'Arn' ] }, + version: '1.16', + resourcesVpcConfig: { + securityGroupIds: [ { 'Fn::GetAtt': [ 'ClusterControlPlaneSecurityGroupD274242C', 'GroupId' ] } ], + subnetIds: [ + { Ref: 'VPCPublicSubnet1SubnetB4246D30' }, + { Ref: 'VPCPublicSubnet2Subnet74179F39' }, + { Ref: 'VPCPrivateSubnet1Subnet8BCA10E0' }, + { Ref: 'VPCPrivateSubnet2SubnetCFCDAA7A' }, + ], + }, }, })); @@ -44,7 +49,7 @@ export = { // WHEN const vpc = new ec2.Vpc(stack, 'VPC'); - new eks.Cluster(stack, 'Cluster', { vpc, kubectlEnabled: true, defaultCapacity: 0, version: CLUSTER_VERSION }); + new eks.Cluster(stack, 'Cluster', { vpc, defaultCapacity: 0, version: CLUSTER_VERSION }); const layer = KubectlLayer.getOrCreate(stack, {}); // THEN @@ -65,7 +70,7 @@ export = { // WHEN const vpc = new ec2.Vpc(stack, 'VPC'); - new eks.Cluster(stack, 'Cluster', { vpc, kubectlEnabled: true, defaultCapacity: 0, version: CLUSTER_VERSION }); + new eks.Cluster(stack, 'Cluster', { vpc, defaultCapacity: 0, version: CLUSTER_VERSION }); new KubectlLayer(stack, 'NewLayer'); const layer = KubectlLayer.getOrCreate(stack); @@ -160,7 +165,7 @@ export = { const { stack, vpc } = testFixture(); // WHEN - new eks.Cluster(stack, 'Cluster', { vpc, kubectlEnabled: false, defaultCapacity: 0, version: CLUSTER_VERSION }); + new eks.Cluster(stack, 'Cluster', { vpc, defaultCapacity: 0, version: CLUSTER_VERSION }); // THEN expect(stack).to(haveResource('AWS::EC2::Subnet', { @@ -180,7 +185,7 @@ export = { const { stack, vpc } = testFixture(); // WHEN - new eks.Cluster(stack, 'Cluster', { vpc, kubectlEnabled: false, defaultCapacity: 0, version: CLUSTER_VERSION }); + new eks.Cluster(stack, 'Cluster', { vpc, defaultCapacity: 0, version: CLUSTER_VERSION }); // THEN expect(stack).to(haveResource('AWS::EC2::Subnet', { @@ -200,7 +205,7 @@ export = { // GIVEN const { stack, vpc } = testFixture(); const cluster = new eks.Cluster(stack, 'Cluster', { - vpc, kubectlEnabled: false, + vpc, defaultCapacity: 0, version: CLUSTER_VERSION, }); @@ -214,7 +219,7 @@ export = { expect(stack).to(haveResource('AWS::AutoScaling::AutoScalingGroup', { Tags: [ { - Key: { 'Fn::Join': ['', ['kubernetes.io/cluster/', { Ref: 'ClusterEB0386A7' }]] }, + Key: { 'Fn::Join': ['', ['kubernetes.io/cluster/', { Ref: 'Cluster9EE0221C' }]] }, PropagateAtLaunch: true, Value: 'owned', }, @@ -266,7 +271,6 @@ export = { const { stack, vpc } = testFixture(); const cluster = new eks.Cluster(stack, 'Cluster', { vpc, - kubectlEnabled: false, defaultCapacity: 0, version: CLUSTER_VERSION, }); @@ -281,7 +285,7 @@ export = { expect(stack).to(haveResource('AWS::AutoScaling::AutoScalingGroup', { Tags: [ { - Key: { 'Fn::Join': ['', ['kubernetes.io/cluster/', { Ref: 'ClusterEB0386A7' }]] }, + Key: { 'Fn::Join': ['', ['kubernetes.io/cluster/', { Ref: 'Cluster9EE0221C' }]] }, PropagateAtLaunch: true, Value: 'owned', }, @@ -300,7 +304,6 @@ export = { const { stack, vpc } = testFixture(); const cluster = new eks.Cluster(stack, 'Cluster', { vpc, - kubectlEnabled: false, defaultCapacity: 0, version: CLUSTER_VERSION, }); @@ -319,7 +322,6 @@ export = { const stack2 = new cdk.Stack(app, 'stack2', { env: { region: 'us-east-1' } }); const cluster = new eks.Cluster(stack1, 'Cluster', { vpc, - kubectlEnabled: false, defaultCapacity: 0, version: CLUSTER_VERSION, }); @@ -344,7 +346,7 @@ export = { Outputs: { ClusterARN: { Value: { - 'Fn::ImportValue': 'Stack:ExportsOutputFnGetAttClusterEB0386A7Arn2F2E3C3F', + 
'Fn::ImportValue': 'Stack:ExportsOutputFnGetAttCluster9EE0221CArn9E0B683E', }, }, }, @@ -352,26 +354,7 @@ export = { test.done(); }, - 'disabled features when kubectl is disabled'(test: Test) { - // GIVEN - const { stack, vpc } = testFixture(); - const cluster = new eks.Cluster(stack, 'Cluster', { - vpc, - kubectlEnabled: false, - defaultCapacity: 0, - version: CLUSTER_VERSION, - }); - - test.throws(() => cluster.awsAuth, /Cannot define aws-auth mappings if kubectl is disabled/); - test.throws(() => cluster.addResource('foo', {}), /Unable to perform this operation since kubectl is not enabled for this cluster/); - test.throws(() => cluster.addCapacity('boo', { instanceType: new ec2.InstanceType('r5d.24xlarge'), mapRole: true }), - /Cannot map instance IAM role to RBAC if kubectl is disabled for the cluster/); - test.throws(() => new eks.HelmChart(stack, 'MyChart', { cluster, chart: 'chart' }), /Unable to perform this operation since kubectl is not enabled for this cluster/); - test.throws(() => cluster.openIdConnectProvider, /Cannot specify a OpenID Connect Provider if kubectl is disabled/); - test.done(); - }, - - 'mastersRole can be used to map an IAM role to "system:masters" (required kubectl)'(test: Test) { + 'mastersRole can be used to map an IAM role to "system:masters"'(test: Test) { // GIVEN const { stack, vpc } = testFixture(); const role = new iam.Role(stack, 'role', { assumedBy: new iam.AnyPrincipal() }); @@ -475,7 +458,7 @@ export = { test.done(); }, - 'when kubectl is enabled (default) adding capacity will automatically map its IAM role'(test: Test) { + 'adding capacity will automatically map its IAM role'(test: Test) { // GIVEN const { stack, vpc } = testFixture(); const cluster = new eks.Cluster(stack, 'Cluster', { @@ -568,26 +551,6 @@ export = { test.done(); }, - 'addCapacity will *not* map the IAM role if kubectl is disabled'(test: Test) { - // GIVEN - const { stack, vpc } = testFixture(); - const cluster = new eks.Cluster(stack, 'Cluster', { - vpc, - kubectlEnabled: false, - defaultCapacity: 0, - version: CLUSTER_VERSION, - }); - - // WHEN - cluster.addCapacity('default', { - instanceType: new ec2.InstanceType('t2.nano'), - }); - - // THEN - expect(stack).to(not(haveResource(eks.KubernetesResource.RESOURCE_TYPE))); - test.done(); - }, - 'outputs': { 'aws eks update-kubeconfig is the only output synthesized by default'(test: Test) { // GIVEN @@ -763,7 +726,7 @@ export = { test.done(); }, - 'if kubectl is enabled, the interrupt handler is added'(test: Test) { + 'interrupt handler is added'(test: Test) { // GIVEN const { stack } = testFixtureNoVpc(); const cluster = new eks.Cluster(stack, 'Cluster', { defaultCapacity: 0, version: CLUSTER_VERSION }); @@ -806,26 +769,6 @@ export = { test.done(); }, - 'if kubectl is disabled, interrupt handler is not added'(test: Test) { - // GIVEN - const { stack } = testFixtureNoVpc(); - const cluster = new eks.Cluster(stack, 'Cluster', { - defaultCapacity: 0, - kubectlEnabled: false, - version: CLUSTER_VERSION, - }); - - // WHEN - cluster.addCapacity('MyCapcity', { - instanceType: new ec2.InstanceType('m3.xlargs'), - spotPrice: '0.01', - }); - - // THEN - expect(stack).notTo(haveResource(eks.KubernetesResource.RESOURCE_TYPE)); - test.done(); - }, - }, }, diff --git a/packages/@aws-cdk/aws-eks/test/test.fargate.ts b/packages/@aws-cdk/aws-eks/test/test.fargate.ts index faa6c61aef95d..40d1b507da019 100644 --- a/packages/@aws-cdk/aws-eks/test/test.fargate.ts +++ b/packages/@aws-cdk/aws-eks/test/test.fargate.ts @@ -341,20 +341,6 @@ export = { 
test.done(); }, - 'cannot be added to a cluster without kubectl enabled'(test: Test) { - // GIVEN - const stack = new Stack(); - const cluster = new eks.Cluster(stack, 'MyCluster', { kubectlEnabled: false, version: CLUSTER_VERSION }); - - // WHEN - test.throws(() => new eks.FargateProfile(stack, 'MyFargateProfile', { - cluster, - selectors: [ { namespace: 'default' } ], - }), /unsupported/); - - test.done(); - }, - 'allow cluster creation role to iam:PassRole on fargate pod execution role'(test: Test) { // GIVEN const stack = new Stack(); diff --git a/packages/@aws-cdk/aws-eks/test/test.legacy-cluster.ts b/packages/@aws-cdk/aws-eks/test/test.legacy-cluster.ts new file mode 100644 index 0000000000000..625f9f8950c32 --- /dev/null +++ b/packages/@aws-cdk/aws-eks/test/test.legacy-cluster.ts @@ -0,0 +1,590 @@ +import { expect, haveResource, haveResourceLike, not } from '@aws-cdk/assert'; +import * as ec2 from '@aws-cdk/aws-ec2'; +import * as iam from '@aws-cdk/aws-iam'; +import * as cdk from '@aws-cdk/core'; +import { Test } from 'nodeunit'; +import * as eks from '../lib'; +import { testFixture, testFixtureNoVpc } from './util'; + +/* eslint-disable max-len */ + +const CLUSTER_VERSION = eks.KubernetesVersion.V1_16; + +export = { + 'a default cluster spans all subnets'(test: Test) { + // GIVEN + const { stack, vpc } = testFixture(); + + // WHEN + new eks.LegacyCluster(stack, 'Cluster', { vpc, defaultCapacity: 0, version: CLUSTER_VERSION }); + + // THEN + expect(stack).to(haveResourceLike('AWS::EKS::Cluster', { + ResourcesVpcConfig: { + SubnetIds: [ + { Ref: 'VPCPublicSubnet1SubnetB4246D30' }, + { Ref: 'VPCPublicSubnet2Subnet74179F39' }, + { Ref: 'VPCPrivateSubnet1Subnet8BCA10E0' }, + { Ref: 'VPCPrivateSubnet2SubnetCFCDAA7A' }, + ], + }, + })); + + test.done(); + }, + + 'if "vpc" is not specified, vpc with default configuration will be created'(test: Test) { + // GIVEN + const { stack } = testFixtureNoVpc(); + + // WHEN + new eks.LegacyCluster(stack, 'cluster', { version: CLUSTER_VERSION }) ; + + // THEN + expect(stack).to(haveResource('AWS::EC2::VPC')); + test.done(); + }, + + 'default capacity': { + + 'x2 m5.large by default'(test: Test) { + // GIVEN + const { stack } = testFixtureNoVpc(); + + // WHEN + const cluster = new eks.LegacyCluster(stack, 'cluster', { version: CLUSTER_VERSION }); + + // THEN + test.ok(cluster.defaultNodegroup); + expect(stack).to(haveResource('AWS::EKS::Nodegroup', { + InstanceTypes: [ + 'm5.large', + ], + ScalingConfig: { + DesiredSize: 2, + MaxSize: 2, + MinSize: 2, + }, + })); + test.done(); + }, + + 'quantity and type can be customized'(test: Test) { + // GIVEN + const { stack } = testFixtureNoVpc(); + + // WHEN + const cluster = new eks.LegacyCluster(stack, 'cluster', { + defaultCapacity: 10, + defaultCapacityInstance: new ec2.InstanceType('m2.xlarge'), + version: CLUSTER_VERSION, + }); + + // THEN + test.ok(cluster.defaultNodegroup); + expect(stack).to(haveResource('AWS::EKS::Nodegroup', { + ScalingConfig: { + DesiredSize: 10, + MaxSize: 10, + MinSize: 10, + }, + })); + // expect(stack).to(haveResource('AWS::AutoScaling::LaunchConfiguration', { InstanceType: 'm2.xlarge' })); + test.done(); + }, + + 'defaultCapacity=0 will not allocate at all'(test: Test) { + // GIVEN + const { stack } = testFixtureNoVpc(); + + // WHEN + const cluster = new eks.LegacyCluster(stack, 'cluster', { defaultCapacity: 0, version: CLUSTER_VERSION }); + + // THEN + test.ok(!cluster.defaultCapacity); + expect(stack).notTo(haveResource('AWS::AutoScaling::AutoScalingGroup')); + 
expect(stack).notTo(haveResource('AWS::AutoScaling::LaunchConfiguration')); + test.done(); + }, + }, + + 'creating a cluster tags the private VPC subnets'(test: Test) { + // GIVEN + const { stack, vpc } = testFixture(); + + // WHEN + new eks.LegacyCluster(stack, 'Cluster', { vpc, defaultCapacity: 0, version: CLUSTER_VERSION }); + + // THEN + expect(stack).to(haveResource('AWS::EC2::Subnet', { + Tags: [ + { Key: 'aws-cdk:subnet-name', Value: 'Private' }, + { Key: 'aws-cdk:subnet-type', Value: 'Private' }, + { Key: 'kubernetes.io/role/internal-elb', Value: '1' }, + { Key: 'Name', Value: 'Stack/VPC/PrivateSubnet1' }, + ], + })); + + test.done(); + }, + + 'creating a cluster tags the public VPC subnets'(test: Test) { + // GIVEN + const { stack, vpc } = testFixture(); + + // WHEN + new eks.LegacyCluster(stack, 'Cluster', { vpc, defaultCapacity: 0, version: CLUSTER_VERSION }); + + // THEN + expect(stack).to(haveResource('AWS::EC2::Subnet', { + MapPublicIpOnLaunch: true, + Tags: [ + { Key: 'aws-cdk:subnet-name', Value: 'Public' }, + { Key: 'aws-cdk:subnet-type', Value: 'Public' }, + { Key: 'kubernetes.io/role/elb', Value: '1' }, + { Key: 'Name', Value: 'Stack/VPC/PublicSubnet1' }, + ], + })); + + test.done(); + }, + + 'adding capacity creates an ASG with tags'(test: Test) { + // GIVEN + const { stack, vpc } = testFixture(); + const cluster = new eks.LegacyCluster(stack, 'Cluster', { + vpc, + defaultCapacity: 0, + version: CLUSTER_VERSION, + }); + + // WHEN + cluster.addCapacity('Default', { + instanceType: new ec2.InstanceType('t2.medium'), + }); + + // THEN + expect(stack).to(haveResource('AWS::AutoScaling::AutoScalingGroup', { + Tags: [ + { + Key: { 'Fn::Join': ['', ['kubernetes.io/cluster/', { Ref: 'ClusterEB0386A7' }]] }, + PropagateAtLaunch: true, + Value: 'owned', + }, + { + Key: 'Name', + PropagateAtLaunch: true, + Value: 'Stack/Cluster/Default', + }, + ], + })); + + test.done(); + }, + + 'create nodegroup with existing role'(test: Test) { + // GIVEN + const { stack } = testFixtureNoVpc(); + + // WHEN + const cluster = new eks.LegacyCluster(stack, 'cluster', { + defaultCapacity: 10, + defaultCapacityInstance: new ec2.InstanceType('m2.xlarge'), + version: CLUSTER_VERSION, + }); + + const existingRole = new iam.Role(stack, 'ExistingRole', { + assumedBy: new iam.AccountRootPrincipal(), + }); + + new eks.Nodegroup(stack, 'Nodegroup', { + cluster, + nodeRole: existingRole, + }); + + // THEN + test.ok(cluster.defaultNodegroup); + expect(stack).to(haveResource('AWS::EKS::Nodegroup', { + ScalingConfig: { + DesiredSize: 10, + MaxSize: 10, + MinSize: 10, + }, + })); + test.done(); + }, + + 'adding bottlerocket capacity creates an ASG with tags'(test: Test) { + // GIVEN + const { stack, vpc } = testFixture(); + const cluster = new eks.LegacyCluster(stack, 'Cluster', { + vpc, + defaultCapacity: 0, + version: CLUSTER_VERSION, + }); + + // WHEN + cluster.addCapacity('Bottlerocket', { + instanceType: new ec2.InstanceType('t2.medium'), + machineImageType: eks.MachineImageType.BOTTLEROCKET, + }); + + // THEN + expect(stack).to(haveResource('AWS::AutoScaling::AutoScalingGroup', { + Tags: [ + { + Key: { 'Fn::Join': ['', ['kubernetes.io/cluster/', { Ref: 'ClusterEB0386A7' }]] }, + PropagateAtLaunch: true, + Value: 'owned', + }, + { + Key: 'Name', + PropagateAtLaunch: true, + Value: 'Stack/Cluster/Bottlerocket', + }, + ], + })); + test.done(); + }, + + 'adding bottlerocket capacity with bootstrapOptions throws error'(test: Test) { + // GIVEN + const { stack, vpc } = testFixture(); + const cluster = new 
eks.LegacyCluster(stack, 'Cluster', { + vpc, + defaultCapacity: 0, + version: CLUSTER_VERSION, + }); + + test.throws(() => cluster.addCapacity('Bottlerocket', { + instanceType: new ec2.InstanceType('t2.medium'), + machineImageType: eks.MachineImageType.BOTTLEROCKET, + bootstrapOptions: {}, + }), /bootstrapOptions is not supported for Bottlerocket/); + test.done(); + }, + + 'exercise export/import'(test: Test) { + // GIVEN + const { stack: stack1, vpc, app } = testFixture(); + const stack2 = new cdk.Stack(app, 'stack2', { env: { region: 'us-east-1' } }); + const cluster = new eks.LegacyCluster(stack1, 'Cluster', { + vpc, + defaultCapacity: 0, + version: CLUSTER_VERSION, + }); + + // WHEN + const imported = eks.LegacyCluster.fromClusterAttributes(stack2, 'Imported', { + clusterArn: cluster.clusterArn, + vpc: cluster.vpc, + clusterEndpoint: cluster.clusterEndpoint, + clusterName: cluster.clusterName, + securityGroups: cluster.connections.securityGroups, + clusterCertificateAuthorityData: cluster.clusterCertificateAuthorityData, + clusterSecurityGroupId: cluster.clusterSecurityGroupId, + clusterEncryptionConfigKeyArn: cluster.clusterEncryptionConfigKeyArn, + }); + + // this should cause an export/import + new cdk.CfnOutput(stack2, 'ClusterARN', { value: imported.clusterArn }); + + // THEN + expect(stack2).toMatch({ + Outputs: { + ClusterARN: { + Value: { + 'Fn::ImportValue': 'Stack:ExportsOutputFnGetAttClusterEB0386A7Arn2F2E3C3F', + }, + }, + }, + }); + test.done(); + }, + + 'disabled features when kubectl is disabled'(test: Test) { + // GIVEN + const { stack, vpc } = testFixture(); + const cluster = new eks.LegacyCluster(stack, 'Cluster', { + vpc, + defaultCapacity: 0, + version: CLUSTER_VERSION, + }); + + test.throws(() => cluster.addCapacity('boo', { instanceType: new ec2.InstanceType('r5d.24xlarge'), mapRole: true }), + /Cannot map instance IAM role to RBAC if kubectl is disabled for the cluster/); + test.done(); + }, + + 'addCapacity will *not* map the IAM role if mapRole is false'(test: Test) { + // GIVEN + const { stack, vpc } = testFixture(); + const cluster = new eks.LegacyCluster(stack, 'Cluster', { + vpc, + defaultCapacity: 0, + version: CLUSTER_VERSION, + }); + + // WHEN + cluster.addCapacity('default', { + instanceType: new ec2.InstanceType('t2.nano'), + mapRole: false, + }); + + // THEN + expect(stack).to(not(haveResource(eks.KubernetesResource.RESOURCE_TYPE))); + test.done(); + }, + + 'addCapacity will *not* map the IAM role if kubectl is disabled'(test: Test) { + // GIVEN + const { stack, vpc } = testFixture(); + const cluster = new eks.LegacyCluster(stack, 'Cluster', { + vpc, + defaultCapacity: 0, + version: CLUSTER_VERSION, + }); + + // WHEN + cluster.addCapacity('default', { + instanceType: new ec2.InstanceType('t2.nano'), + }); + + // THEN + expect(stack).to(not(haveResource(eks.KubernetesResource.RESOURCE_TYPE))); + test.done(); + }, + + 'outputs': { + 'aws eks update-kubeconfig is the only output synthesized by default'(test: Test) { + // GIVEN + const { app, stack } = testFixtureNoVpc(); + + // WHEN + new eks.LegacyCluster(stack, 'Cluster', { version: CLUSTER_VERSION }); + + // THEN + const assembly = app.synth(); + const template = assembly.getStackByName(stack.stackName).template; + test.deepEqual(template.Outputs, { + ClusterConfigCommand43AAE40F: { Value: { 'Fn::Join': ['', ['aws eks update-kubeconfig --name ', { Ref: 'ClusterEB0386A7' }, ' --region us-east-1']] } }, + ClusterGetTokenCommand06AE992E: { Value: { 'Fn::Join': ['', ['aws eks get-token --cluster-name ', { 
Ref: 'ClusterEB0386A7' }, ' --region us-east-1']] } }, + }); + test.done(); + }, + + 'if `outputConfigCommand=false` will disabled the output'(test: Test) { + // GIVEN + const { app, stack } = testFixtureNoVpc(); + + // WHEN + new eks.LegacyCluster(stack, 'Cluster', { + outputConfigCommand: false, + version: CLUSTER_VERSION, + }); + + // THEN + const assembly = app.synth(); + const template = assembly.getStackByName(stack.stackName).template; + test.ok(!template.Outputs); // no outputs + test.done(); + }, + + '`outputClusterName` can be used to synthesize an output with the cluster name'(test: Test) { + // GIVEN + const { app, stack } = testFixtureNoVpc(); + + // WHEN + new eks.LegacyCluster(stack, 'Cluster', { + outputConfigCommand: false, + outputClusterName: true, + version: CLUSTER_VERSION, + }); + + // THEN + const assembly = app.synth(); + const template = assembly.getStackByName(stack.stackName).template; + test.deepEqual(template.Outputs, { + ClusterClusterNameEB26049E: { Value: { Ref: 'ClusterEB0386A7' } }, + }); + test.done(); + }, + + 'boostrap user-data': { + + 'rendered by default for ASGs'(test: Test) { + // GIVEN + const { app, stack } = testFixtureNoVpc(); + const cluster = new eks.LegacyCluster(stack, 'Cluster', { defaultCapacity: 0, version: CLUSTER_VERSION }); + + // WHEN + cluster.addCapacity('MyCapcity', { instanceType: new ec2.InstanceType('m3.xlargs') }); + + // THEN + const template = app.synth().getStackByName(stack.stackName).template; + const userData = template.Resources.ClusterMyCapcityLaunchConfig58583345.Properties.UserData; + test.deepEqual(userData, { 'Fn::Base64': { 'Fn::Join': ['', ['#!/bin/bash\nset -o xtrace\n/etc/eks/bootstrap.sh ', { Ref: 'ClusterEB0386A7' }, ' --kubelet-extra-args "--node-labels lifecycle=OnDemand" --use-max-pods true\n/opt/aws/bin/cfn-signal --exit-code $? --stack Stack --resource ClusterMyCapcityASGD4CD8B97 --region us-east-1']] } }); + test.done(); + }, + + 'not rendered if bootstrap is disabled'(test: Test) { + // GIVEN + const { app, stack } = testFixtureNoVpc(); + const cluster = new eks.LegacyCluster(stack, 'Cluster', { defaultCapacity: 0, version: CLUSTER_VERSION }); + + // WHEN + cluster.addCapacity('MyCapcity', { + instanceType: new ec2.InstanceType('m3.xlargs'), + bootstrapEnabled: false, + }); + + // THEN + const template = app.synth().getStackByName(stack.stackName).template; + const userData = template.Resources.ClusterMyCapcityLaunchConfig58583345.Properties.UserData; + test.deepEqual(userData, { 'Fn::Base64': '#!/bin/bash' }); + test.done(); + }, + + // cursory test for options: see test.user-data.ts for full suite + 'bootstrap options'(test: Test) { + // GIVEN + const { app, stack } = testFixtureNoVpc(); + const cluster = new eks.LegacyCluster(stack, 'Cluster', { defaultCapacity: 0, version: CLUSTER_VERSION }); + + // WHEN + cluster.addCapacity('MyCapcity', { + instanceType: new ec2.InstanceType('m3.xlargs'), + bootstrapOptions: { + kubeletExtraArgs: '--node-labels FOO=42', + }, + }); + + // THEN + const template = app.synth().getStackByName(stack.stackName).template; + const userData = template.Resources.ClusterMyCapcityLaunchConfig58583345.Properties.UserData; + test.deepEqual(userData, { 'Fn::Base64': { 'Fn::Join': ['', ['#!/bin/bash\nset -o xtrace\n/etc/eks/bootstrap.sh ', { Ref: 'ClusterEB0386A7' }, ' --kubelet-extra-args "--node-labels lifecycle=OnDemand --node-labels FOO=42" --use-max-pods true\n/opt/aws/bin/cfn-signal --exit-code $? 
--stack Stack --resource ClusterMyCapcityASGD4CD8B97 --region us-east-1']] } }); + test.done(); + }, + + 'spot instances': { + + 'nodes labeled an tainted accordingly'(test: Test) { + // GIVEN + const { app, stack } = testFixtureNoVpc(); + const cluster = new eks.LegacyCluster(stack, 'Cluster', { defaultCapacity: 0, version: CLUSTER_VERSION }); + + // WHEN + cluster.addCapacity('MyCapcity', { + instanceType: new ec2.InstanceType('m3.xlargs'), + spotPrice: '0.01', + }); + + // THEN + const template = app.synth().getStackByName(stack.stackName).template; + const userData = template.Resources.ClusterMyCapcityLaunchConfig58583345.Properties.UserData; + test.deepEqual(userData, { 'Fn::Base64': { 'Fn::Join': ['', ['#!/bin/bash\nset -o xtrace\n/etc/eks/bootstrap.sh ', { Ref: 'ClusterEB0386A7' }, ' --kubelet-extra-args "--node-labels lifecycle=Ec2Spot --register-with-taints=spotInstance=true:PreferNoSchedule" --use-max-pods true\n/opt/aws/bin/cfn-signal --exit-code $? --stack Stack --resource ClusterMyCapcityASGD4CD8B97 --region us-east-1']] } }); + test.done(); + }, + + 'if kubectl is disabled, interrupt handler is not added'(test: Test) { + // GIVEN + const { stack } = testFixtureNoVpc(); + const cluster = new eks.LegacyCluster(stack, 'Cluster', { + defaultCapacity: 0, + version: CLUSTER_VERSION, + }); + + // WHEN + cluster.addCapacity('MyCapcity', { + instanceType: new ec2.InstanceType('m3.xlargs'), + spotPrice: '0.01', + }); + + // THEN + expect(stack).notTo(haveResource(eks.KubernetesResource.RESOURCE_TYPE)); + test.done(); + }, + + }, + + }, + + 'if bootstrap is disabled cannot specify options'(test: Test) { + // GIVEN + const { stack } = testFixtureNoVpc(); + const cluster = new eks.LegacyCluster(stack, 'Cluster', { defaultCapacity: 0, version: CLUSTER_VERSION }); + + // THEN + test.throws(() => cluster.addCapacity('MyCapcity', { + instanceType: new ec2.InstanceType('m3.xlargs'), + bootstrapEnabled: false, + bootstrapOptions: { awsApiRetryAttempts: 10 }, + }), /Cannot specify "bootstrapOptions" if "bootstrapEnabled" is false/); + test.done(); + }, + + 'EksOptimizedImage() with no nodeType always uses STANDARD with LATEST_KUBERNETES_VERSION'(test: Test) { + // GIVEN + const { app, stack } = testFixtureNoVpc(); + const LATEST_KUBERNETES_VERSION = '1.14'; + + // WHEN + new eks.EksOptimizedImage().getImage(stack); + + // THEN + const assembly = app.synth(); + const parameters = assembly.getStackByName(stack.stackName).template.Parameters; + test.ok(Object.entries(parameters).some( + ([k, v]) => k.startsWith('SsmParameterValueawsserviceeksoptimizedami') && + (v as any).Default.includes('/amazon-linux-2/'), + ), 'EKS STANDARD AMI should be in ssm parameters'); + test.ok(Object.entries(parameters).some( + ([k, v]) => k.startsWith('SsmParameterValueawsserviceeksoptimizedami') && + (v as any).Default.includes(LATEST_KUBERNETES_VERSION), + ), 'LATEST_KUBERNETES_VERSION should be in ssm parameters'); + test.done(); + }, + + 'EksOptimizedImage() with specific kubernetesVersion return correct AMI'(test: Test) { + // GIVEN + const { app, stack } = testFixtureNoVpc(); + + // WHEN + new eks.EksOptimizedImage({ kubernetesVersion: '1.15' }).getImage(stack); + + // THEN + const assembly = app.synth(); + const parameters = assembly.getStackByName(stack.stackName).template.Parameters; + test.ok(Object.entries(parameters).some( + ([k, v]) => k.startsWith('SsmParameterValueawsserviceeksoptimizedami') && + (v as any).Default.includes('/amazon-linux-2/'), + ), 'EKS STANDARD AMI should be in ssm parameters'); + 
test.ok(Object.entries(parameters).some( + ([k, v]) => k.startsWith('SsmParameterValueawsserviceeksoptimizedami') && + (v as any).Default.includes('/1.15/'), + ), 'kubernetesVersion should be in ssm parameters'); + test.done(); + }, + + 'EKS-Optimized AMI with GPU support when addCapacity'(test: Test) { + // GIVEN + const { app, stack } = testFixtureNoVpc(); + + // WHEN + new eks.LegacyCluster(stack, 'cluster', { + defaultCapacity: 0, + version: CLUSTER_VERSION, + }).addCapacity('GPUCapacity', { + instanceType: new ec2.InstanceType('g4dn.xlarge'), + }); + + // THEN + const assembly = app.synth(); + const parameters = assembly.getStackByName(stack.stackName).template.Parameters; + test.ok(Object.entries(parameters).some( + ([k, v]) => k.startsWith('SsmParameterValueawsserviceeksoptimizedami') && (v as any).Default.includes('amazon-linux-2-gpu'), + ), 'EKS AMI with GPU should be in ssm parameters'); + test.done(); + }, + }, +}; diff --git a/packages/@aws-cdk/aws-eks/test/test.nodegroup.ts b/packages/@aws-cdk/aws-eks/test/test.nodegroup.ts index f26f6248d4b91..4a0b4dbbee697 100644 --- a/packages/@aws-cdk/aws-eks/test/test.nodegroup.ts +++ b/packages/@aws-cdk/aws-eks/test/test.nodegroup.ts @@ -169,9 +169,8 @@ export = { const { stack, vpc } = testFixture(); // WHEN - const cluster = new eks.Cluster(stack, 'Cluster', { + const cluster = new eks.LegacyCluster(stack, 'Cluster', { vpc, - kubectlEnabled: false, defaultCapacity: 2, version: CLUSTER_VERSION, }); @@ -239,7 +238,6 @@ export = { const stack2 = new cdk.Stack(app, 'stack2', { env: { region: 'us-east-1' } }); const cluster = new eks.Cluster(stack1, 'Cluster', { vpc, - kubectlEnabled: false, defaultCapacity: 0, version: CLUSTER_VERSION, });
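---

For readers who previously set `kubectlEnabled: false`, the `eks.LegacyCluster` construct exercised by the new `test.legacy-cluster.ts` suite can be used directly. Below is a minimal sketch of that usage, assembled from the fixtures in the tests above; the app/stack scaffolding and construct ids are illustrative assumptions and are not part of this patch:

```ts
import * as ec2 from '@aws-cdk/aws-ec2';
import * as cdk from '@aws-cdk/core';
import * as eks from '@aws-cdk/aws-eks';

const app = new cdk.App();
// Region mirrors the test fixtures; any environment works.
const stack = new cdk.Stack(app, 'Stack', { env: { region: 'us-east-1' } });

// A cluster backed by the stock AWS::EKS::Cluster resource, as in the tests.
const cluster = new eks.LegacyCluster(stack, 'Cluster', {
  defaultCapacity: 0,
  version: eks.KubernetesVersion.V1_16,
});

// Self-managed capacity can still be added. Note that the instance role is
// not mapped to RBAC: the tests show that `mapRole: true` throws
// "Cannot map instance IAM role to RBAC if kubectl is disabled for the cluster".
cluster.addCapacity('Default', {
  instanceType: new ec2.InstanceType('t2.medium'),
});
```

As the tests also show, spot capacity on a `LegacyCluster` does not synthesize the interrupt-handler Kubernetes resource, and no `aws-auth` mappings or `addResource`/Helm operations are available, since there is no programmatic kubectl support behind this construct.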