Setting up the Amazon EKS environment

Before you deploy Tonic Ephemeral to an Amazon EKS cluster, you must set up the Amazon EKS cluster as follows.

Note that these instructions are specific to Amazon EKS. For similar information about setting up a cluster in Azure AKS or Google Cloud Platform (GCP) Google Kubernetes Engine (GKE), contact Tonic.ai support.

The public GitHub repository ephemeral-eks-setup provides access to the scripts and configuration for these steps.

Setting up the Kubernetes cluster

The cluster must meet the following initial criteria:

  • Sufficient capacity for Ephemeral and Tonic Structural

  • Native installation of the following add-ons:

    • vpc-cni - Allows direct mapping of AWS IP addresses to pods

    • aws-ebs-csi-driver - Allows persistent volumes to be provisioned from Amazon EBS

  • An OIDC provider, to allow the correct permissions to be granted to cluster service accounts

Creating the cluster setup configuration file

You can put these requirements into a cluster setup configuration YAML file that uses the following template.

In the GitHub repository: eks-cluster-setup-template.yml

In the file:

  • Replace $CLUSTER_NAME with the name of the cluster

  • Replace $AWS_REGION with the AWS Region where the cluster is located

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: $CLUSTER_NAME
  region: $AWS_REGION

nodeGroups:
  - name: standard
    # Capacity
    # t3.2xlarge = 8 vCPU / 32 GB RAM
    # Recommended for Tonic is 8 vCPU / 32 GB RAM / 250GB disk
    instanceType: t3.2xlarge
    desiredCapacity: 2
    volumeSize: 50

iam:
  withOIDC: true # Crucial for IAM Roles for Service Accounts (IRSA)

# Install essential addons
addons:
  # coredns and kube-proxy are usually installed by default.
  # Listing them explicitly is good practice.
  - name: kube-proxy
    resolveConflicts: overwrite
  - name: coredns
    resolveConflicts: overwrite
  # vpc-cni allows AWS IP addresses to be directly mapped to pods. 
  - name: vpc-cni
    resolveConflicts: overwrite
  # aws-ebs-csi-driver allows persistent volumes to be provisioned from Amazon EBS
  - name: aws-ebs-csi-driver 
    resolveConflicts: overwrite 
    version: "v1.27.0-eksbuild.1" 

Creating the cluster

In the GitHub repository: 01-cluster-setup.sh

To use the configuration file to create the cluster:

eksctl create cluster -f <name of the configuration file>

For example:

eksctl create cluster -f eks-cluster-setup.yml
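
eksctl normally writes the new cluster's credentials to your kubeconfig. As a quick check that the nodes are ready and the cluster add-ons are running (not part of the setup scripts), you can run:

kubectl get nodes
kubectl get pods -n kube-system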

Setting up a hosted zone

When Ephemeral creates a database, it provides connection details in the form of hostname, port, username, password, and database name.

For those hostnames to resolve, Ephemeral must create DNS records that map them to reachable IP addresses.

To do this, Ephemeral uses the external-dns Kubernetes add-on. This acts as a bridge between Kubernetes and an external DNS provider. For AWS, the provider is Route53.

You create a hosted zone in Route53 that allows the creation of DNS records. The recommended approach is to make Route53 the system of record for a sub-domain of your actual domain. For example, if your domain is mydomain.net, then the sub-domain might be ephemeral.mydomain.net.

EPHEMERAL_DOMAIN=ephemeral.mydomain.net

Create the hosted zone

In the GitHub repository: 02-create-hosted-dns-zone.sh

To create the hosted zone, use the following command. The caller reference is required so that Route53 can distinguish intentional duplicate requests from unintentional ones.

aws route53 create-hosted-zone --name "${EPHEMERAL_DOMAIN}" --caller-reference "${CLUSTER_NAME}-external-dns"

Add the hosted zone nameservers to the domain

To make the DNS entries resolvable, add nameserver (NS) records for the sub-domain to your parent domain.

To obtain the nameservers for the hosted zone:

ZONE_ID=$(aws route53 list-hosted-zones-by-name --query "HostedZones[?Name=='${EPHEMERAL_DOMAIN}.'].Id | [0]" --output text)

aws route53 list-resource-record-sets --output text \
 --hosted-zone-id $ZONE_ID --query \
 "ResourceRecordSets[?Type == 'NS'].ResourceRecords[*].Value | [] | [0:2]"

You then add the nameserver records to the domain. For example:

Example nameserver records for a hosted zone
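
If the parent domain is also managed in Route53, one way to add the NS records is with a change batch. The following is a minimal sketch, not part of the setup scripts: the nameserver values and the parent zone ID are placeholders that you replace with the output of the previous command and your own parent hosted zone.

cat > ns-records.json <<EOF
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "${EPHEMERAL_DOMAIN}",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "ns-1234.awsdns-12.org." },
          { "Value": "ns-567.awsdns-34.com." }
        ]
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets --hosted-zone-id <parent zone ID> --change-batch file://ns-records.json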

Creating a service account for external DNS

In the GitHub repository: 03-create-service-account.sh

The external-dns service account must have permissions that allow it to write records to the hosted zone.

Create a policy document for the hosted zone access

Create the following policy document. In the document, replace $ZONE_ID with the identifier of your hosted zone (the value that follows /hostedzone/ in the zone Id).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::$ZONE_ID"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:GetChange",
        "route53:ListResourceRecordSets",
        "route53:ListTagsForResources"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
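
One way to produce the document, assuming you captured ZONE_ID as shown earlier and saved the template above as dns-policy-template.json (an assumed file name), is a sed substitution. DNS_POLICY matches the variable used in the next command.

# Use the bare zone identifier; strip the /hostedzone/ prefix that the Route53 API returns
ZONE_ID=${ZONE_ID#/hostedzone/}
DNS_POLICY=dns-policy.json
sed "s|\$ZONE_ID|${ZONE_ID}|" dns-policy-template.json > ${DNS_POLICY}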

Create the hosted zone access policy

Use the policy document to create the policy:

EXTERNAL_DNS_POLICY_ARN=$(aws iam create-policy --policy-name $EXTERNAL_DNS_POLICY_NAME --policy-document file://${DNS_POLICY} --query "Policy.Arn" --output text)

Create a trust policy document

Next, create a trust policy document that trusts accounts managed by the Kubernetes cluster OIDC provider, but only when the account is the external DNS service account.

To obtain the required values:

export OIDC_PROVIDER=$(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | sed -e 's|^https://||')
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)

Merge the values into the following template document:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/$OIDC_PROVIDER"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "$OIDC_PROVIDER:sub": "system:serviceaccount:${EXTERNAL_DNS_NAMESPACE}:${EXTERNAL_DNS_SERVICE_ACCOUNT}",
                    "$OIDC_PROVIDER:aud": "sts.amazonaws.com"
                }
            }
        }
    ]
}
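
One way to merge the values is with envsubst (from the gettext package). This is a minimal sketch; trust-policy-template.json is an assumed file name, and SERVICE_ACCOUNT_TRUST_POLICY matches the variable used in the next command.

export EXTERNAL_DNS_NAMESPACE EXTERNAL_DNS_SERVICE_ACCOUNT
SERVICE_ACCOUNT_TRUST_POLICY=trust-policy.json
envsubst < trust-policy-template.json > ${SERVICE_ACCOUNT_TRUST_POLICY}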

Create a role with the trust policy and hosted zone access policy

Create a role that is associated with the trust policy:

EXTERNAL_DNS_ROLE_ARN=$(aws iam create-role --role-name $EXTERNAL_DNS_ROLE --assume-role-policy-document file://${SERVICE_ACCOUNT_TRUST_POLICY} --query "Role.Arn" --output text)

Attach the hosted zone access policy:

aws iam attach-role-policy --role-name $EXTERNAL_DNS_ROLE --policy-arn $EXTERNAL_DNS_POLICY_ARN

Define the service account

Create a YAML file that defines the external-dns service account and associates the role with it.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: $EXTERNAL_DNS_SERVICE_ACCOUNT
  namespace: $EXTERNAL_DNS_NAMESPACE
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::$AWS_ACCOUNT_ID:role/$EXTERNAL_DNS_ROLE

Create the external DNS namespace and service account

Use the YAML file to create the DNS namespace and the service account:

kubectl create namespace $EXTERNAL_DNS_NAMESPACE
kubectl create -f $SERVICE_ACCOUNT_YML -n $EXTERNAL_DNS_NAMESPACE

Installing external-dns

In the GitHub repository: 04-external-dns-install.sh

helm install external-dns external-dns/external-dns \
--set serviceAccount.create=false \
--set serviceAccount.name=${EXTERNAL_DNS_SERVICE_ACCOUNT} \
--set route53.zoneIDFilters[0]=${ZONE_ID} \
--set extraArgs[0]="--domain-filter=${EPHEMERAL_DOMAIN}" \
--namespace $EXTERNAL_DNS_NAMESPACE
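
The command assumes that the external-dns chart repository has already been added under the alias external-dns. If it has not, the kubernetes-sigs chart repository is a common choice:

helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update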

Installing the CSI snapshotter

In the GitHub repository: 05-install-snapshot-class.sh

Set up the CSI snapshot class, which enables volume snapshots.

RELEASE_VERSION="release-6.2"

kubectl kustomize "<https://github.com/kubernetes-csi/external-snapshotter/client/config/crd?ref=$RELEASE_VERSION>" | kubectl apply -f -
kubectl kustomize <https://github.com/kubernetes-csi/external-snapshotter/deploy/kubernetes/snapshot-controller> | kubectl create -f -
kubectl kustomize <https://github.com/kubernetes-csi/external-snapshotter/deploy/kubernetes/csi-snapshotter> | kubectl create -f -
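
The commands above install the snapshot CRDs, the snapshot controller, and the CSI snapshotter. A VolumeSnapshotClass for the EBS CSI driver is also needed before snapshots can be taken. The following is a minimal sketch; the class name csi-aws-vsc is an assumption, and the class that the setup script creates may differ.

cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
driver: ebs.csi.aws.com
deletionPolicy: Delete
EOF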

Installing an application load balancer

To enable access to your Ephemeral instance through a public IP address, you must install the AWS Load Balancer Controller, which provisions an application load balancer for the cluster.

Tag the public subnets

In the GitHub repository: 01-tagSubnetsforLB.sh

You must tag the public subnets as follows:

# Get the VPC_ID for your cluster
VPC_ID=$(eksctl get cluster --name $CLUSTER_NAME -o json \
 | jq -r '.[0].ResourcesVpcConfig.VpcId')

# Get subnet IDs for VPC
SUBNET_IDS=($(aws ec2 describe-subnets --filter Name=vpc-id,Values=${VPC_ID} \
 --query 'Subnets[?MapPublicIpOnLaunch==`true`].SubnetId' --output json \
 | jq -r '.[]'))

# Tag subnet ids
for SUBNET_ID in "${SUBNET_IDS[@]}"                          
do
	echo "Tagging ${SUBNET_ID}"
	aws ec2 create-tags --resources "${SUBNET_ID}" \
	--tags Key=kubernetes.io/role/elb,Value=1
done
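
To confirm that the tag was applied (a quick check, not part of the setup scripts):

aws ec2 describe-subnets --subnet-ids "${SUBNET_IDS[@]}" \
 --query "Subnets[].Tags[?Key=='kubernetes.io/role/elb']" --output json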

Create a service account

In the GitHub repository: 02-create-alb-service-account.sh

You must create the service account that runs the load balancer controller and grant it the required permissions.

POLICY_FILE=lb-policy.json

echo "Getting IAM policy for AWS load balancer"
curl -o $POLICY_FILE \
https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.13.0/docs/install/iam_policy.json 2>/dev/null

echo "Creating policy $LB_POLICY_NAME"
LB_POLICY_ARN=$(aws iam create-policy --policy-name $LB_POLICY_NAME --policy-document file://$POLICY_FILE --query 'Policy.Arn' --output text)

echo "Associate IAM IdP with cluster $CLUSTER_NAME"
eksctl utils associate-iam-oidc-provider --region $AWS_REGION --cluster $CLUSTER_NAME --approve

# Create service account with required security policy
echo "Create IAM service account for load balancer"
eksctl create iamserviceaccount \
	--cluster $CLUSTER_NAME \
	--namespace kube-system \
	--name $LB_SERVICE_ACCOUNT_NAME \
	--attach-policy-arn $LB_POLICY_ARN \
	--override-existing-serviceaccounts \
	--approve

Install the load balancer

In the GitHub repository: 03-lb-install.sh

To install the load balancer:

VPC_ID=$(eksctl get cluster --name $CLUSTER_NAME -o json | jq -r '.[0].ResourcesVpcConfig.VpcId')

# Install Amazon EKS repo 
EKS_REPO_INSTALLED=$(helm repo list | grep eks)
if [[ -z $EKS_REPO_INSTALLED ]]
then
	echo "Adding Amazon EKS repo"
	helm repo add eks https://aws.github.io/eks-charts
fi

# Updating repo
echo "Updating helm repos"
helm repo update
echo

echo "Installing load balancer"
# Note: the controller image registry is region-specific; this example uses the eu-central-1 registry.
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system \
  --set clusterName=$CLUSTER_NAME \
  --set serviceAccount.create=false \
  --set serviceAccount.name=$LB_SERVICE_ACCOUNT_NAME \
  --set image.repository=602401143452.dkr.ecr.eu-central-1.amazonaws.com/amazon/aws-load-balancer-controller \
  --set region=$AWS_REGION \
  --set vpcId=$VPC_ID
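
To confirm that the controller deployment is running:

kubectl get deployment -n kube-system aws-load-balancer-controller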
