Metadata-Version: 2.1
Name: aws-cdk.aws-eks
Version: 1.63.0
Summary: The CDK Construct Library for AWS::EKS
Home-page: https://github.com/aws/aws-cdk
Author: Amazon Web Services
License: Apache-2.0
Project-URL: Source, https://github.com/aws/aws-cdk.git
Description: ## Amazon EKS Construct Library
        
        <!--BEGIN STABILITY BANNER-->---
        
        
        ![cfn-resources: Stable](https://img.shields.io/badge/cfn--resources-stable-success.svg?style=for-the-badge)
        
        > All classes with the `Cfn` prefix in this module ([CFN Resources](https://docs.aws.amazon.com/cdk/latest/guide/constructs.html#constructs_lib)) are always stable and safe to use.
        
        ![cdk-constructs: Experimental](https://img.shields.io/badge/cdk--constructs-experimental-important.svg?style=for-the-badge)
        
        > The APIs of higher level constructs in this module are experimental and under active development. They are subject to non-backward compatible changes or removal in any future version. These are not subject to the [Semantic Versioning](https://semver.org/) model and breaking changes will be announced in the release notes. This means that while you may use them, you may need to update your source code when upgrading to a newer version of this package.
        
        ---
        <!--END STABILITY BANNER-->
        
        This construct library allows you to define [Amazon Elastic Kubernetes
        Service (EKS)](https://aws.amazon.com/eks/) clusters programmatically.
        This library also supports programmatically defining Kubernetes resource
        manifests within EKS clusters.
        
        This example defines an Amazon EKS cluster with the following configuration:
        
        * Managed nodegroup with 2x **m5.large** instances (this instance type suits most common use-cases, and is good value for money)
        * Dedicated VPC with default configuration (see [ec2.Vpc](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-ec2-readme.html#vpc))
        * A Kubernetes pod with a container based on the [paulbouwer/hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes) image.
        
        ```python
        # Example automatically generated. See https://github.com/aws/jsii/issues/826
        cluster = eks.Cluster(self, "hello-eks",
            version=eks.KubernetesVersion.V1_16
        )
        
        # apply a kubernetes manifest to the cluster
        cluster.add_manifest("mypod", {
            "api_version": "v1",
            "kind": "Pod",
            "metadata": {"name": "mypod"},
            "spec": {
                "containers": [{
                    "name": "hello",
                    "image": "paulbouwer/hello-kubernetes:1.5",
                    "ports": [{"container_port": 8080}]
                }
                ]
            }
        })
        ```
        
        In order to interact with your cluster through `kubectl`, you can use the `aws eks update-kubeconfig` [AWS CLI command](https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html)
        to configure your local kubeconfig.
        
        The EKS module will define a CloudFormation output in your stack which contains
        the command to run. For example:
        
        ```
        Outputs:
        ClusterConfigCommand43AAE40F = aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
        ```
        
        > The IAM role specified in this command is called the "**masters role**". This is
        > an IAM role that is associated with the `system:masters` [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
        > group and has super-user access to the cluster.
        >
        > You can specify this role using the `mastersRole` option, or otherwise a role will be
        > automatically created for you. This role can be assumed by anyone in the account with
        > `sts:AssumeRole` permissions for this role.
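        
        If you want to control the masters role explicitly rather than letting the library create one, you can pass your own IAM role through `mastersRole`. A minimal sketch (not from the original docs), assuming the usual `aws_cdk.aws_iam as iam` import; names are illustrative:
        
        ```python
        # a role that anyone in this account can assume, mapped to system:masters
        masters_role = iam.Role(self, "MastersRole",
            assumed_by=iam.AccountRootPrincipal()
        )
        
        cluster = eks.Cluster(self, "hello-eks",
            version=eks.KubernetesVersion.V1_16,
            masters_role=masters_role
        )
        ```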
        
        Execute the `aws eks update-kubeconfig ...` command in your terminal to create a
        local kubeconfig:
        
        ```console
        $ aws eks update-kubeconfig --name cluster-xxxxx --role-arn arn:aws:iam::112233445566:role/yyyyy
        Added new context arn:aws:eks:rrrrr:112233445566:cluster/cluster-xxxxx to /home/boom/.kube/config
        ```
        
        And now you can simply use `kubectl`:
        
        ```console
        $ kubectl get all -n kube-system
        NAME                           READY   STATUS    RESTARTS   AGE
        pod/aws-node-fpmwv             1/1     Running   0          21m
        pod/aws-node-m9htf             1/1     Running   0          21m
        pod/coredns-5cb4fb54c7-q222j   1/1     Running   0          23m
        pod/coredns-5cb4fb54c7-v9nxx   1/1     Running   0          23m
        ...
        ```
        
        ### Endpoint Access
        
        You can configure the [cluster endpoint access](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) by using the `endpointAccess` property:
        
        ```python
        # Example automatically generated. See https://github.com/aws/jsii/issues/826
        cluster = eks.Cluster(self, "hello-eks",
            version=eks.KubernetesVersion.V1_16,
            endpoint_access=eks.EndpointAccess.PRIVATE
        )
        ```
        
        The default value is `eks.EndpointAccess.PUBLIC_AND_PRIVATE`, which means the cluster endpoint is accessible from outside of your VPC, while worker node traffic to the endpoint stays within your VPC.
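        
        If you need the public endpoint but want to limit which networks can reach it, the `EndpointAccess.PUBLIC_AND_PRIVATE.onlyFrom()` helper restricts public access to specific CIDR blocks. A minimal sketch (the CIDR below is a placeholder):
        
        ```python
        cluster = eks.Cluster(self, "restricted-access-eks",
            version=eks.KubernetesVersion.V1_16,
            endpoint_access=eks.EndpointAccess.PUBLIC_AND_PRIVATE.only_from("203.0.113.0/24")
        )
        ```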
        
        ### Capacity
        
        By default, `eks.Cluster` is created with a managed nodegroup of two `m5.large` instances. You must specify the Kubernetes version for the cluster with the `version` property.
        
        ```python
        # Example automatically generated. See https://github.com/aws/jsii/issues/826
        eks.Cluster(self, "cluster-two-m5-large",
            version=eks.KubernetesVersion.V1_16
        )
        ```
        
        To use the traditional self-managed Amazon EC2 instances instead, set `defaultCapacityType` to `DefaultCapacityType.EC2`
        
        ```python
        # Example automatically generated. See https://github.com/aws/jsii/issues/826
        cluster = eks.Cluster(self, "cluster-self-managed-ec2",
            default_capacity_type=eks.DefaultCapacityType.EC2,
            version=eks.KubernetesVersion.V1_16
        )
        ```
        
        The quantity and instance type for the default capacity can be specified through
        the `defaultCapacity` and `defaultCapacityInstance` props:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        eks.Cluster(self, "cluster",
            default_capacity=10,
            default_capacity_instance=ec2.InstanceType("m2.xlarge"),
            version=eks.KubernetesVersion.V1_16
        )
        ```
        
        To disable the default capacity, simply set `defaultCapacity` to `0`:
        
        ```python
        # Example automatically generated. See https://github.com/aws/jsii/issues/826
        eks.Cluster(self, "cluster-with-no-capacity",
            default_capacity=0,
            version=eks.KubernetesVersion.V1_16
        )
        ```
        
        The `cluster.defaultCapacity` property will reference the `AutoScalingGroup`
        resource for the default capacity. It will be `undefined` if `defaultCapacity`
        is set to `0`, or if `defaultCapacityType` is `NODEGROUP` or left unset (the default).
        
        Similarly, the `cluster.defaultNodegroup` property will reference the `Nodegroup`
        resource for the default capacity. It will be `undefined` if `defaultCapacity`
        is set to `0` or `defaultCapacityType` is `EC2`.
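        
        Both properties are ordinary constructs, so they can be referenced elsewhere in your app. A minimal sketch (not from the original docs; output names are illustrative):
        
        ```python
        # reference whichever default capacity construct was created, if any
        if cluster.default_nodegroup is not None:
            cdk.CfnOutput(self, "DefaultNodegroupName", value=cluster.default_nodegroup.nodegroup_name)
        if cluster.default_capacity is not None:
            cdk.CfnOutput(self, "DefaultAsgName", value=cluster.default_capacity.auto_scaling_group_name)
        ```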
        
        You can add an `AutoScalingGroup` resource as customized capacity through `cluster.addCapacity()` or
        `cluster.addAutoScalingGroup()`:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        cluster.add_capacity("frontend-nodes",
            instance_type=ec2.InstanceType("t2.medium"),
            min_capacity=3,
            vpc_subnets={"subnet_type": ec2.SubnetType.PUBLIC}
        )
        ```
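        
        If you already define an `AutoScalingGroup` yourself, `cluster.addAutoScalingGroup()` attaches it to the cluster. A minimal sketch (not from the original docs), assuming the usual `aws_cdk.aws_autoscaling as autoscaling` import; names are illustrative:
        
        ```python
        # assumes: from aws_cdk import aws_autoscaling as autoscaling
        asg = autoscaling.AutoScalingGroup(self, "WorkerAsg",
            vpc=cluster.vpc,
            instance_type=ec2.InstanceType("t3.medium"),
            machine_image=eks.EksOptimizedImage(),
            min_capacity=1
        )
        
        # attach the group to the cluster and map its instance role in aws-auth
        cluster.add_auto_scaling_group(asg, map_role=True)
        ```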
        
        ### Managed Node Groups
        
        Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances)
        for Amazon EKS Kubernetes clusters. By default, `eks.Nodegroup` creates a nodegroup with two `t3.medium` instances.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        eks.Nodegroup(stack, "nodegroup", cluster=cluster)
        ```
        
        You can add a customized node group through `cluster.addNodegroup()`:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        cluster.add_nodegroup("nodegroup",
            instance_type=ec2.InstanceType("m5.large"),
            min_size=4
        )
        ```
        
        #### Custom AMI and Launch Template support
        
        Specify the launch template for the nodegroup with your custom AMI. When using a custom AMI,
        Amazon EKS doesn't merge any user data. Rather, you are responsible for supplying the required
        bootstrap commands for nodes to join the cluster. In the following sample, `/etc/eks/bootstrap.sh` from the AMI is used to bootstrap the node. See [Using a custom AMI](https://docs.aws.amazon.com/en_ca/eks/latest/userguide/launch-templates.html) for more details.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        user_data = ec2.UserData.for_linux()
        user_data.add_commands("set -o xtrace", f"/etc/eks/bootstrap.sh {self.cluster.cluster_name}")
        lt = ec2.CfnLaunchTemplate(self, "LaunchTemplate",
            launch_template_data={
                # specify your custom AMI below
                "image_id": image_id,
                "instance_type": ec2.InstanceType("t3.small").to_string(),
                "user_data": Fn.base64(user_data.render())
            }
        )
        self.cluster.add_nodegroup("extra-ng",
            launch_template={
                "id": lt.ref,
                "version": lt.attr_default_version_number
            }
        )
        ```
        
        ### ARM64 Support
        
        Instance types with `ARM64` architecture are supported in both managed nodegroup and self-managed capacity. Simply specify an ARM64 `instanceType` (such as `m6g.medium`), and the latest
        Amazon Linux 2 AMI for ARM64 will be automatically selected.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        # create a cluster with a default managed nodegroup
        cluster = eks.Cluster(self, "Cluster",
            vpc=vpc,
            masters_role=masters_role,
            version=eks.KubernetesVersion.V1_17
        )
        
        # add a managed ARM64 nodegroup
        cluster.add_nodegroup("extra-ng-arm",
            instance_type=ec2.InstanceType("m6g.medium"),
            min_size=2
        )
        
        # add a self-managed ARM64 nodegroup
        cluster.add_capacity("self-ng-arm",
            instance_type=ec2.InstanceType("m6g.medium"),
            min_capacity=2
        )
        ```
        
        ### Fargate
        
        AWS Fargate is a technology that provides on-demand, right-sized compute
        capacity for containers. With AWS Fargate, you no longer have to provision,
        configure, or scale groups of virtual machines to run containers. This removes
        the need to choose server types, decide when to scale your node groups, or
        optimize cluster packing.
        
        You can control which pods start on Fargate and how they run with Fargate
        Profiles, which are defined as part of your Amazon EKS cluster.
        
        See [Fargate
        Considerations](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html#fargate-considerations)
        in the AWS EKS User Guide.
        
        You can add Fargate Profiles to any EKS cluster defined in your CDK app
        through the `addFargateProfile()` method. The following example adds a profile
        that will match all pods from the "default" namespace:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        cluster.add_fargate_profile("MyProfile",
            selectors=[{"namespace": "default"}]
        )
        ```
        
        To create an EKS cluster that **only** uses Fargate capacity, you can use
        `FargateCluster`.
        
        The following code defines an Amazon EKS cluster without EC2 capacity and a default
        Fargate Profile that matches all pods from the "kube-system" and "default" namespaces. It is also configured to [run CoreDNS on Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-coredns) through the `coreDnsComputeType` cluster option.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        cluster = eks.FargateCluster(self, "MyCluster",
            version=eks.KubernetesVersion.V1_16
        )
        
        # apply k8s resources on this cluster
        cluster.add_manifest(...)
        ```
        
        **NOTE**: Classic Load Balancers and Network Load Balancers are not supported on
        pods running on Fargate. For ingress, we recommend that you use the [ALB Ingress
        Controller](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html)
        on Amazon EKS (minimum version v1.1.4).
        
        ### Spot Capacity
        
        If `spotPrice` is specified, the capacity will be purchased from spot instances:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        cluster.add_capacity("spot",
            spot_price="0.1094",
            instance_type=ec2.InstanceType("t3.large"),
            max_capacity=10
        )
        ```
        
        Spot instance nodes will be labeled with `lifecycle=Ec2Spot` and tainted with `PreferNoSchedule`.
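        
        You can target these nodes from your pod specs by selecting on that label. A minimal sketch (not from the original docs; the dict is passed to `kubectl` as-is, so the raw Kubernetes field names are used):
        
        ```python
        # schedule a pod onto the spot nodes via the lifecycle=Ec2Spot label
        cluster.add_manifest("spot-pinned-pod", {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {"name": "spot-pinned-pod"},
            "spec": {
                "nodeSelector": {"lifecycle": "Ec2Spot"},
                "containers": [{
                    "name": "hello",
                    "image": "paulbouwer/hello-kubernetes:1.5"
                }]
            }
        })
        ```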
        
        The [AWS Node Termination Handler](https://github.com/aws/aws-node-termination-handler)
        DaemonSet will be installed on these nodes from the
        [Amazon EKS Helm chart repository](https://github.com/aws/eks-charts/tree/master/stable/aws-node-termination-handler).
        The termination handler ensures that the Kubernetes control plane responds appropriately to events that
        can cause your EC2 instance to become unavailable, such as [EC2 maintenance events](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html)
        and [EC2 Spot interruptions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html) and helps gracefully stop all pods running on spot nodes that are about to be
        terminated.
        
        Current version:
        
        | name       | version |
        |------------|---------|
        | Helm Chart | 0.9.5  |
        | App        | 1.7.0  |
        
        ### Bootstrapping
        
        When adding capacity, you can specify options for
        [/etc/eks/boostrap.sh](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh)
        which is responsible for associating the node to the EKS cluster. For example,
        you can use `kubeletExtraArgs` to add custom node labels or taints.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        # add capacity with custom bootstrap options
        cluster.add_capacity("spot",
            instance_type=ec2.InstanceType("t3.large"),
            min_capacity=2,
            bootstrap_options={
                "kubelet_extra_args": "--node-labels foo=bar,goo=far",
                "aws_api_retry_attempts": 5
            }
        )
        ```
        
        To disable bootstrapping altogether (i.e. to fully customize user-data), set `bootstrapEnabled` to `false` when you add
        the capacity.
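        
        A minimal sketch of disabling bootstrapping (illustrative only; you would then supply your own user data, for example through the launch template approach shown earlier):
        
        ```python
        cluster.add_capacity("custom-bootstrap-nodes",
            instance_type=ec2.InstanceType("t3.large"),
            min_capacity=1,
            bootstrap_enabled=False
        )
        ```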
        
        ### Kubernetes Resources
        
        The `KubernetesManifest` construct or `cluster.addManifest` method can be used
        to apply Kubernetes resource manifests to this cluster.
        
        > When using `cluster.addManifest`, the manifest construct is defined within the cluster's stack scope. If the manifest contains
        > attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error.
        > To avoid this, directly use `new KubernetesManifest` to create the manifest in the scope of the other stack.
        
        The following examples will deploy the [paulbouwer/hello-kubernetes](https://github.com/paulbouwer/hello-kubernetes)
        service on the cluster:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        app_label = {"app": "hello-kubernetes"}
        
        deployment = {
            "api_version": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": "hello-kubernetes"},
            "spec": {
                "replicas": 3,
                "selector": {"match_labels": app_label},
                "template": {
                    "metadata": {"labels": app_label},
                    "spec": {
                        "containers": [{
                            "name": "hello-kubernetes",
                            "image": "paulbouwer/hello-kubernetes:1.5",
                            "ports": [{"container_port": 8080}]
                        }
                        ]
                    }
                }
            }
        }
        
        service = {
            "api_version": "v1",
            "kind": "Service",
            "metadata": {"name": "hello-kubernetes"},
            "spec": {
                "type": "LoadBalancer",
                "ports": [{"port": 80, "target_port": 8080}],
                "selector": app_label
            }
        }
        
        # option 1: use a construct
        KubernetesManifest(self, "hello-kub",
            cluster=cluster,
            manifest=[deployment, service]
        )
        
        # or, option2: use `addManifest`
        cluster.add_manifest("hello-kub", service, deployment)
        ```
        
        #### Kubectl Layer and Environment
        
        The resources are created in the cluster by running `kubectl apply` from a Python Lambda function. You can configure the environment of this function by specifying it at cluster instantiation. For example, this can be useful in order to configure an HTTP proxy:
        
        ```python
        # Example automatically generated. See https://github.com/aws/jsii/issues/826
        cluster = eks.Cluster(self, "hello-eks",
            version=eks.KubernetesVersion.V1_16,
            kubectl_environment={
                "http_proxy": "http://proxy.myproxy.com"
            }
        )
        ```
        
        By default, the `kubectl`, `helm` and `aws` commands used to operate the cluster
        are provided by an AWS Lambda Layer from the AWS Serverless Application Repository (SAR) app
        [aws-lambda-layer-kubectl](https://github.com/aws-samples/aws-lambda-layer-kubectl). In most cases this should be sufficient.
        
        You can provide a custom layer in case the default layer does not meet your
        needs or if the SAR app is not available in your region.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        # custom build:
        layer = lambda_.LayerVersion(self, "KubectlLayer",
            code=lambda_.Code.from_asset("layer.zip"),  # path to the layer zip described below
            compatible_runtimes=[lambda_.Runtime.PROVIDED]
        )
        ```
        
        Pass it to `kubectlLayer` when you create or import a cluster:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        cluster = eks.Cluster(self, "MyCluster",
            kubectl_layer=layer
        )
        
        # or
        cluster = eks.Cluster.from_cluster_attributes(self, "MyCluster",
            kubectl_layer=layer
        )
        ```
        
        > Instructions on how to build `layer.zip` can be found
        > [here](https://github.com/aws-samples/aws-lambda-layer-kubectl/blob/master/cdk/README.md).
        
        #### Adding resources from a URL
        
        The following example will deploy a resource manifest hosted on a remote server:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        # PyYAML and requests stand in here for the js-yaml and sync-request
        # libraries used in the original TypeScript example
        import requests
        import yaml
        
        manifest_url = "https://url/of/manifest.yaml"
        manifest = list(yaml.safe_load_all(requests.get(manifest_url).text))
        cluster.add_manifest("my-resource", *manifest)
        ```
        
        Kubernetes resources are implemented as CloudFormation resources in the
        CDK. This means that if the resource is deleted from your code (or the stack is
        deleted), the next `cdk deploy` will issue a `kubectl delete` command and the
        Kubernetes resources will be deleted.
        
        #### Dependencies
        
        There are cases where Kubernetes resources must be deployed in a specific order.
        For example, you cannot define a resource in a Kubernetes namespace before the
        namespace was created.
        
        You can represent dependencies between `KubernetesManifest`s using
        `resource.node.addDependency()`:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        namespace = cluster.add_manifest("my-namespace",
            api_version="v1",
            kind="Namespace",
            metadata={"name": "my-app"}
        )
        
        service = cluster.add_manifest("my-service",
            metadata={
                "name": "myservice",
                "namespace": "my-app"
            },
            spec={}  # ... service spec elided in this example
        )
        
        service.node.add_dependency(namespace)
        ```
        
        NOTE: when a `KubernetesManifest` includes multiple resources (either passed
        directly or via `cluster.addManifest()`, e.g. `cluster.addManifest('foo', r1, r2, r3, ...)`), these resources will be applied as a single manifest via `kubectl`
        and will be applied sequentially (the standard behavior in `kubectl`).
        
        ### Patching Kubernetes Resources
        
        The `KubernetesPatch` construct can be used to update existing Kubernetes
        resources. The following example patches the `hello-kubernetes`
        deployment from the example above to 5 replicas.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        KubernetesPatch(self, "hello-kub-deployment-label",
            cluster=cluster,
            resource_name="deployment/hello-kubernetes",
            apply_patch={"spec": {"replicas": 5}},
            restore_patch={"spec": {"replicas": 3}}
        )
        ```
        
        ### Querying Kubernetes Object Values
        
        The `KubernetesObjectValue` construct can be used to query for information about Kubernetes objects,
        and use that as part of your CDK application.
        
        For example, you can fetch the address of a [`LoadBalancer`](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) type service:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        # query the load balancer address
        my_service_address = KubernetesObjectValue(self, "LoadBalancerAttribute",
            cluster=cluster,
            resource_type="service",
            resource_name="my-service",
            json_path=".status.loadBalancer.ingress[0].hostname"
        )
        
        # pass the address to a lambda function
        proxy_function = lambda_.Function(self, "ProxyFunction",
            # ... other function props (runtime, handler, code) elided
            environment={
                "my_service_address": my_service_address.value
            }
        )
        ```
        
        Since the above use-case is quite common, there is an easier way to access that information:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        load_balancer_address = cluster.get_service_load_balancer_address("my-service")
        ```
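        
        The returned value is a deploy-time token, so it can be passed to other constructs or exported as a stack output. A minimal sketch (output name is illustrative):
        
        ```python
        cdk.CfnOutput(self, "MyServiceEndpoint", value=load_balancer_address)
        ```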
        
        ### Kubernetes Resources in Existing Clusters
        
        The Amazon EKS library allows defining Kubernetes resources such as [Kubernetes
        manifests](#kubernetes-resources) and [Helm charts](#helm-charts) on clusters
        that are not defined as part of your CDK app.
        
        First, you'll need to "import" a cluster to your CDK app. To do that, use the
        `eks.Cluster.fromClusterAttributes()` static method:
        
        ```python
        # Example automatically generated. See https://github.com/aws/jsii/issues/826
        cluster = eks.Cluster.from_cluster_attributes(self, "MyCluster",
            cluster_name="my-cluster-name",
            kubectl_role_arn="arn:aws:iam::1111111:role/iam-role-that-has-masters-access"
        )
        ```
        
        Then, you can use `addManifest` or `addHelmChart` to define resources inside
        your Kubernetes cluster. For example:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        cluster.add_manifest("Test",
            api_version="v1",
            kind="ConfigMap",
            metadata={
                "name": "myconfigmap"
            },
            data={
                "Key": "value",
                "Another": "123454"
            }
        )
        ```
        
        At the minimum, when importing clusters for `kubectl` management, you will need
        to specify:
        
        * `clusterName` - the name of the cluster.
        * `kubectlRoleArn` - the ARN of an IAM role mapped to the `system:masters` RBAC
          role. If the cluster you are importing was created using the AWS CDK, the
          CloudFormation stack has an output that includes an IAM role that can be used.
          Otherwise, you can create an IAM role and map it to `system:masters` manually.
          The trust policy of this role should include the
          `arn:aws:iam::${accountId}:root` principal in order to allow the execution
          role of the kubectl resource to assume it.
        
        If the cluster is configured with private-only or private and restricted public
        Kubernetes [endpoint access](#endpoint-access), you must also specify (see the sketch after this list):
        
        * `kubectlSecurityGroupId` - the ID of an EC2 security group that is allowed to
          connect to the cluster's control plane security group.
        * `kubectlPrivateSubnetIds` - a list of private VPC subnet IDs that will be used
          to access the Kubernetes endpoint.
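        
        A minimal sketch of importing a cluster with a private endpoint (all identifiers below are placeholders):
        
        ```python
        cluster = eks.Cluster.from_cluster_attributes(self, "MyPrivateCluster",
            cluster_name="my-private-cluster",
            kubectl_role_arn="arn:aws:iam::111122223333:role/masters-role",
            kubectl_security_group_id="sg-0123456789abcdef0",
            kubectl_private_subnet_ids=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]
        )
        ```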
        
        ### AWS IAM Mapping
        
        As described in the [Amazon EKS User Guide](https://docs.aws.amazon.com/en_us/eks/latest/userguide/add-user-role.html),
        you can map AWS IAM users and roles to [Kubernetes Role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac).
        
        The Amazon EKS construct manages the **aws-auth ConfigMap** Kubernetes resource
        on your behalf and exposes an API through `cluster.awsAuth` for mapping
        users, roles and accounts.
        
        Furthermore, when auto-scaling capacity is added to the cluster (through
        `cluster.addCapacity` or `cluster.addAutoScalingGroup`), the IAM instance role
        of the auto-scaling group will be automatically mapped to RBAC so nodes can
        connect to the cluster. No manual mapping is required any longer.
        
        For example, let's say you want to grant an IAM user administrative privileges
        on your cluster:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        admin_user = iam.User(self, "Admin")
        cluster.aws_auth.add_user_mapping(admin_user, groups=["system:masters"])
        ```
        
        A convenience method for mapping a role to the `system:masters` group is also available:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        cluster.aws_auth.add_masters_role(role)
        ```
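        
        You can also map existing IAM roles and entire AWS accounts through `awsAuth`. A minimal sketch (not from the original docs; ARNs and account IDs are placeholders):
        
        ```python
        existing_role = iam.Role.from_role_arn(self, "ImportedAdminRole", "arn:aws:iam::111122223333:role/eks-admins")
        cluster.aws_auth.add_role_mapping(existing_role, groups=["system:masters"], username="eks-admins")
        
        # map an entire AWS account in the aws-auth ConfigMap
        cluster.aws_auth.add_account("444455556666")
        ```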
        
        ### Cluster Security Group
        
        When you create an Amazon EKS cluster, a
        [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
        is automatically created as well. This security group is designed to allow
        all traffic from the control plane and managed node groups to flow freely
        between each other.
        
        The ID for that security group can be retrieved after creating the cluster.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        cluster_security_group_id = cluster.cluster_security_group_id
        ```
        
        ### Cluster Encryption Configuration
        
        When you create an Amazon EKS cluster, you can enable envelope encryption of
        Kubernetes secrets using the AWS Key Management Service (AWS KMS). The documentation
        on [creating a cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
        provides more details about the customer master key (CMK) that can be used for the encryption.
        
        You can use the `secretsEncryptionKey` property to configure which key the cluster will use to encrypt Kubernetes secrets. By default, an AWS managed key will be used.
        
        > This setting can only be specified when the cluster is created and cannot be updated.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        secrets_key = kms.Key(self, "SecretsKey")
        cluster = eks.Cluster(self, "MyCluster",
            secrets_encryption_key=secrets_key
        )
        ```
        
        The Amazon Resource Name (ARN) for that CMK can be retrieved.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        cluster_encryption_config_key_arn = cluster.cluster_encryption_config_key_arn
        ```
        
        ### Node SSH Access
        
        If you want to be able to SSH into your worker nodes, you must already
        have an SSH key in the region you're connecting to and pass its name, and you must
        be able to connect to the hosts (meaning they must have a public IP and you
        should be allowed to connect to them on port 22):
        
        ```python
        # Example automatically generated. See https://github.com/aws/jsii/issues/826
        asg = cluster.add_capacity("Nodes",
            instance_type=ec2.InstanceType("t2.medium"),
            vpc_subnets=SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC),
            key_name="my-key-name"
        )
        
        # Replace with desired IP
        asg.connections.allow_from(ec2.Peer.ipv4("1.2.3.4/32"), ec2.Port.tcp(22))
        ```
        
        If you want to SSH into nodes in a private subnet, you should set up a
        bastion host in a public subnet. That setup is recommended, but is
        unfortunately beyond the scope of this documentation.
        
        ### Helm Charts
        
        The `HelmChart` construct or `cluster.addChart` method can be used
        to add Kubernetes resources to this cluster using Helm.
        
        > When using `cluster.addChart`, the manifest construct is defined within the cluster's stack scope. If the manifest contains
        > attributes from a different stack which depend on the cluster stack, a circular dependency will be created and you will get a synth time error.
        > To avoid this, directly use `new HelmChart` to create the chart in the scope of the other stack.
        
        The following example will install the [NGINX Ingress Controller](https://kubernetes.github.io/ingress-nginx/)
        to your cluster using Helm.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        # option 1: use a construct
        HelmChart(self, "NginxIngress",
            cluster=cluster,
            chart="nginx-ingress",
            repository="https://helm.nginx.com/stable",
            namespace="kube-system"
        )
        
        # or, option2: use `addChart`
        cluster.add_chart("NginxIngress",
            chart="nginx-ingress",
            repository="https://helm.nginx.com/stable",
            namespace="kube-system"
        )
        ```
        
        Helm charts will be installed and updated using `helm upgrade --install`, where a few parameters
        are passed down (such as `repo`, `values`, `version`, `namespace`, `wait`, `timeout`, etc.).
        This means that if the chart is added to CDK with the same release name, it will try to update
        the chart in the cluster. The chart will exist as a CloudFormation resource.
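        
        A minimal sketch of passing chart values, a pinned version and an explicit release name (the chart version and values below are placeholders):
        
        ```python
        cluster.add_chart("NginxIngressPinned",
            chart="nginx-ingress",
            repository="https://helm.nginx.com/stable",
            namespace="kube-system",
            release="nginx-ingress",
            version="1.2.3",
            values={"controller": {"replicaCount": 2}},
            wait=True
        )
        ```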
        
        Helm charts are implemented as CloudFormation resources in CDK.
        This means that if the chart is deleted from your code (or the stack is
        deleted), the next `cdk deploy` will issue a `helm uninstall` command and the
        Helm chart will be deleted.
        
        When there is no `release` defined, the chart will be installed using the `node.uniqueId`,
        which will be lower cased and truncated to the last 63 characters.
        
        By default, all Helm charts will be installed concurrently. In some cases, this
        could cause race conditions where two Helm charts attempt to deploy the same
        resource or if Helm charts depend on each other. You can use
        `chart.node.addDependency()` in order to declare a dependency order between
        charts:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        chart1 = cluster.add_chart(...)
        chart2 = cluster.add_chart(...)
        
        chart2.node.add_dependency(chart1)
        ```
        
        ### Bottlerocket
        
        [Bottlerocket](https://aws.amazon.com/bottlerocket/) is a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts. At this time, managed nodegroups only support the Amazon EKS-optimized AMI, but it is possible to create self-managed `AutoScalingGroup` capacity running the Bottlerocket Linux AMI.
        
        > **NOTICE**: Bottlerocket is in public preview and only available in [some supported AWS regions](https://github.com/bottlerocket-os/bottlerocket/blob/develop/QUICKSTART.md#finding-an-ami).
        
        The following example will create self-managed Amazon EC2 capacity of two `t3.small` Linux instances running the Bottlerocket AMI.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        # add bottlerocket nodes
        cluster.add_capacity("BottlerocketNodes",
            instance_type=ec2.InstanceType("t3.small"),
            min_capacity=2,
            machine_image_type=eks.MachineImageType.BOTTLEROCKET
        )
        ```
        
        To define only Bottlerocket capacity in your cluster, set `defaultCapacity` to `0` when you define the cluster as described above.
        
        Please note that Bottlerocket does not allow customizing bootstrap options, and the `bootstrapOptions` property is not supported when you create `Bottlerocket` capacity.
        
        ### Service Accounts
        
        With service accounts, you can provide Kubernetes Pods with access to AWS resources.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        # add service account
        sa = cluster.add_service_account("MyServiceAccount")
        
        bucket = Bucket(self, "Bucket")
        bucket.grant_read_write(sa)
        
        mypod = cluster.add_manifest("mypod",
            api_version="v1",
            kind="Pod",
            metadata={"name": "mypod"},
            spec={
                "service_account_name": sa.service_account_name,
                "containers": [{
                    "name": "hello",
                    "image": "paulbouwer/hello-kubernetes:1.5",
                    "ports": [{"container_port": 8080}]
                }
                ]
            }
        )
        
        # create the resource after the service account
        mypod.node.add_dependency(sa)
        
        # print the IAM role arn for this service account
        cdk.CfnOutput(self, "ServiceAccountIamRole", value=sa.role.role_arn)
        ```
        
Platform: UNKNOWN
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: JavaScript
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Typing :: Typed
Classifier: Development Status :: 4 - Beta
Classifier: License :: OSI Approved
Requires-Python: >=3.6
Description-Content-Type: text/markdown
