Creating an EKS Cluster with Fargate Nodes Using Terraform

January 22, 2024


Link to the GitHub code repo here

Prerequisites:

  1. AWS Account: You must have an active AWS account. If you don't have one, you can sign up for one here
  2. IAM User or Role: Create an IAM (Identity and Access Management) user or role in your AWS account with the necessary permissions to create and manage EKS clusters. At a minimum, the user or role should have permissions to create EKS clusters, EC2 instances, VPCs, and related resources.
  3. AWS CLI: Install and configure the AWS Command Line Interface (CLI) on your local machine. You'll use the AWS CLI to interact with your AWS account and to configure your AWS credentials, which the Terraform AWS provider will also use (see the sketch after this list). You can download it here
  4. Terraform Installed: Install Terraform on your local machine. You can download Terraform from the official Terraform website and follow the installation instructions for your operating system here
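
Terraform authenticates to AWS with the same credentials the AWS CLI uses. A minimal provider configuration sketch is shown below; the region and profile values are illustrative, so adjust them to your own account:

# Terraform picks up credentials from the named CLI profile
# (or from environment variables such as AWS_PROFILE / AWS_ACCESS_KEY_ID).
provider "aws" {
  region  = "us-east-1" # illustrative region
  profile = "default"   # illustrative AWS CLI profile name
}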

What are EKS Fargate nodes?

Amazon EKS with Fargate provides a serverless compute engine for containers, allowing you to run containers without having to manage the underlying EC2 instances. When you use EKS with Fargate, you don't need to provision or manage EC2 instances for your worker nodes. Instead, AWS Fargate takes care of the underlying infrastructure for you. Key features:

  • Fargate abstracts away the underlying infrastructure, providing a serverless experience for running containers. You only need to define your containers, and Fargate takes care of the rest.
  • With EKS Fargate nodes, there is no need to manually provision or manage EC2 instances as worker nodes. This simplifies the operational overhead and allows you to focus more on your applications.
  • Fargate allows you to specify the CPU and memory resources for each pod in your Kubernetes cluster. This enables you to allocate resources based on your application's specific requirements.
  • Fargate supports automatic scaling of pods based on resource utilization. This ensures that your applications have the necessary resources to meet demand while optimizing costs during periods of lower demand.
  • Each pod running on Fargate is isolated from other pods, providing strong security boundaries. The underlying infrastructure is managed by AWS, and the tasks are executed in an isolated environment.

Terraform code

As always, we will use our favourite Terraform module from here

You can find the full Terraform code in our repo; below we will discuss the main changes since our previous article.
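
The eks module below references a VPC as module.vpc. For context, here is a minimal sketch of that module call, assuming the terraform-aws-modules/vpc/aws module; the version and CIDR ranges are illustrative, and the real configuration lives in the repo and the previous article:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.0" # illustrative; pin to the version used in the repo

  name = local.cluster_name
  cidr = "10.0.0.0/16"

  azs             = var.aws_availability_zones
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
  # Intra subnets have no route to the internet and host the EKS control plane ENIs
  intra_subnets   = ["10.0.51.0/24", "10.0.52.0/24", "10.0.53.0/24"]

  enable_nat_gateway = true
  single_nat_gateway = true

  tags = local.tags
}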

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.21.0"

  cluster_name    = local.cluster_name
  cluster_version = var.cluster_version
  cluster_endpoint_public_access = true

  vpc_id                   = module.vpc.vpc_id
  subnet_ids               = module.vpc.private_subnets
  control_plane_subnet_ids = module.vpc.intra_subnets

  cluster_addons = {
    kube-proxy = {}
    vpc-cni    = {}
    coredns = {
      configuration_values = jsonencode({
        computeType = "Fargate"
      })
    }
  }

  create_cluster_security_group = false
  create_node_security_group    = false

  fargate_profile_defaults = {
    iam_role_additional_policies = {
      additional = aws_iam_policy.additional.arn
    }
  }

  fargate_profiles = merge(
    {
      nginx = {
        name = "nginx"
        selectors = [
          {
            namespace = "backend"
            labels = {
              Application = "backend"
            }
          },
          {
            namespace = "app-*"
            labels = {
              Application = "app-wildcard"
            }
          }
        ]

        # Using specific subnets instead of the subnets supplied for the cluster itself
        subnet_ids = [module.vpc.private_subnets[1]]

        tags = {
          Owner = "secondary"
        }

        timeouts = {
          create = "20m"
          delete = "20m"
        }
      }
    },
    { for i in range(3) :
      "kube-system-${element(split("-", var.aws_availability_zones[i]), 2)}" => {
        selectors = [
          { namespace = "kube-system" }
        ]
        # We want to create a profile per AZ for high availability
        subnet_ids = [element(module.vpc.private_subnets, i)]
      }
    }
  )

  tags = local.tags
}
  • cluster_addons section: this block configures the add-ons associated with our EKS cluster. 'kube-proxy' and 'vpc-cni' use their default configurations, while 'coredns' receives a JSON-encoded configuration that sets computeType to "Fargate" so that CoreDNS itself can run on Fargate.
  • fargate_profile_defaults: here we attach an additional IAM policy to all Fargate profiles; a sketch of this policy is shown below.
  • nginx Fargate Profile: a Fargate profile named "nginx" is defined. It selects pods based on namespaces and labels, and its subnet_ids override the subnets supplied for the cluster itself, so Fargate pods associated with this profile run in the specific subnet provided (module.vpc.private_subnets[1]).
  • High Availability Profiles for kube-system: a for expression creates one Fargate profile for the "kube-system" namespace per availability zone. Each profile is named after its availability zone, includes a selector for the "kube-system" namespace, and uses the private subnet associated with that availability zone.

The merge function is used to combine these Fargate profile configurations into a single map.
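
The aws_iam_policy.additional referenced in fargate_profile_defaults is not shown in the snippet above. Here is a minimal placeholder sketch of such a policy; the single ec2:Describe* statement is purely illustrative, and the actual policy in the repo may grant different permissions:

resource "aws_iam_policy" "additional" {
  name = "${local.cluster_name}-additional"

  # Illustrative statement only; attach whatever permissions your
  # Fargate pod execution role actually needs.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action   = ["ec2:Describe*"]
        Effect   = "Allow"
        Resource = "*"
      }
    ]
  })
}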

Deployment

To initialize Terraform and download the required modules, run:

terraform init

You can also check which resources Terraform plans to create by running:

terraform plan

To provision the resources, run:

terraform apply

Testing

After Terraform has finished applying, you should see the following output:

Apply complete! Resources: 66 added, 0 changed, 0 destroyed.

Outputs:

connect_to_eks = "aws eks --region <YOUR_REGION> update-kubeconfig --name <CLUSTER_NAME> --profile default"
endpoint = "<CLUSTER_ENDPOINT>"
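
These outputs can be produced by output blocks along the following lines; this is a sketch, var.aws_region is an assumed variable, and the exact definitions in the repo may differ:

output "connect_to_eks" {
  description = "Command that generates a kubeconfig for the new cluster"
  value       = "aws eks --region ${var.aws_region} update-kubeconfig --name ${module.eks.cluster_name} --profile default"
}

output "endpoint" {
  description = "EKS cluster API endpoint"
  value       = module.eks.cluster_endpoint
}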

Execute the command from the connect_to_eks output to generate a kubeconfig file:

aws eks --region <YOUR_REGION> update-kubeconfig --name <CLUSTER_NAME> --profile default

Verify connectivity to the cluster with kubectl:

kubectl get no

You should see a list of nodes:

NAME                                 STATUS   ROLES    AGE     VERSION
fargate-ip-10-0-1-102.ec2.internal   Ready    <none>   93m     v1.27.7-eks-4f4795d
fargate-ip-10-0-1-84.ec2.internal    Ready    <none>   93m     v1.27.7-eks-4f4795d

Now let's deploy an nginx image and see what happens. To trigger the Fargate scheduler instead of the default one, we need to specify the matching labels and tolerations in our deployment; otherwise, our containers would stay in a Pending state or be created on regular nodes. Deployment.yaml:

---
apiVersion: v1
kind: Namespace
metadata:
  name: app-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: app-nginx
  name: app-nginx
spec:
  selector:
    matchLabels:
      Application: app-wildcard
  replicas: 1
  template:
    metadata:
      labels:
        Application: app-wildcard
    spec:
      tolerations:
      - key: eks.amazonaws.com/compute-type
        operator: Equal
        value: fargate
        effect: NoSchedule
      containers:
      - image: nginx:latest
        imagePullPolicy: Always
        name: app-nginx
        ports:
        - containerPort: 80

As you can see, we first create a namespace so that it matches the Fargate profile we defined in the Terraform code here:

            namespace = "app-*"
            labels = {
              Application = "app-wildcard"
            }

We also need to specify the tolerations and labels accordingly. Let's apply:

kubectl apply -f https://raw.githubusercontent.com/cloudtipss/EKS-Fargate-nodes/main/app.yaml

You should see output confirming that the namespace and deployment were created. Let's check our cluster nodes again:

user$ kubectl get no
NAME                                 STATUS   ROLES    AGE     VERSION
fargate-ip-10-0-1-102.ec2.internal   Ready    <none>   107m    v1.27.7-eks-4f4795d
fargate-ip-10-0-1-84.ec2.internal    Ready    <none>   107m    v1.27.7-eks-4f4795d
fargate-ip-10-0-2-152.ec2.internal   Ready    <none>   2m35s   v1.27.7-eks-4f4795d

As you can see, one more Fargate node has been added. Let's scale our deployment to 3 replicas:

user$ kubectl scale deployment app-nginx --replicas=3 -n app-nginx
deployment.apps/app-nginx scaled

Let's check our nodes and pods now:

user$ kubectl get no
NAME                                 STATUS   ROLES    AGE     VERSION
fargate-ip-10-0-1-102.ec2.internal   Ready    <none>   111m    v1.27.7-eks-4f4795d
fargate-ip-10-0-1-84.ec2.internal    Ready    <none>   111m    v1.27.7-eks-4f4795d
fargate-ip-10-0-2-152.ec2.internal   Ready    <none>   6m44s   v1.27.7-eks-4f4795d
fargate-ip-10-0-2-194.ec2.internal   Ready    <none>   20s     v1.27.7-eks-4f4795d
fargate-ip-10-0-2-236.ec2.internal   Ready    <none>   24s     v1.27.7-eks-4f4795d
fargate-ip-10-0-2-63.ec2.internal    Ready    <none>   88m     v1.27.7-eks-4f4795d
user$ kubectl get po -n app-nginx
NAME                         READY   STATUS    RESTARTS   AGE
app-nginx-56d447f6fc-cf9z6   1/1     Running   0          84s
app-nginx-56d447f6fc-n85m6   1/1     Running   0          84s
app-nginx-56d447f6fc-nfpsk   1/1     Running   0          7m50s

As you can see, everything works as expected: our application is scaled up and running!

You can find the source code in our GitHub repo