ArgoCD for EKS using Terraform

December 18, 2023


In the ever-evolving landscape of cloud infrastructure, the demand for streamlined and automated Kubernetes cluster provisioning has become paramount. Terraform, a versatile Infrastructure as Code (IaC) tool, coupled with ArgoCD, a declarative GitOps continuous delivery tool for Kubernetes, provides a powerful synergy to simplify the deployment and management of Kubernetes clusters.

Link to the GitHub code repo here


  1. AWS Account: You must have an active AWS account. If you don't have one, you can sign up on the AWS website here
  2. IAM User or Role: Create an IAM (Identity and Access Management) user or role in your AWS account with the necessary permissions to create and manage EKS clusters. At a minimum, the user or role should have permissions to create EKS clusters, EC2 instances, VPCs, and related resources.
  3. AWS CLI: Install and configure the AWS Command Line Interface (CLI) on your local machine. You'll use the AWS CLI to interact with your AWS account and configure your AWS credentials. You can download it here
  4. Terraform Installed: Install Terraform on your local machine. You can download Terraform from the official Terraform website and follow the installation instructions for your operating system here
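Before moving on, it can help to confirm the tooling from the list above is actually installed. A minimal sketch (check_tools is a hypothetical helper, not part of the repo):

```shell
# Hypothetical helper: report whether each required CLI from the
# prerequisites list is installed and on PATH.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: found"
    else
      echo "$tool: MISSING"
    fi
  done
}

# Check the CLIs this walkthrough relies on.
check_tools aws terraform kubectl
```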


We will use the sample Terraform EKS code that we discussed in our previous article; you can find a more detailed description here

Since we will use Helm to install the ArgoCD application on the cluster created in Terraform, let's update our file by adding the following:

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.16.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.8.0"
    }
  }
}

  • This block declares the required Terraform providers and their versions. It specifies that the kubernetes provider must be at least version 2.16.1 and the helm provider at least version 2.8.0.
locals {
  eks_endpoint       = module.eks.cluster_endpoint
  eks_ca_certificate = module.eks.cluster_certificate_authority_data
}

  • The locals block defines local variables for the EKS cluster's endpoint (eks_endpoint) and the cluster's CA certificate (eks_ca_certificate). These values are sourced from the outputs of the EKS module.
provider "kubernetes" {
  host                   = local.eks_endpoint
  cluster_ca_certificate = base64decode(local.eks_ca_certificate)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", local.cluster_name]
    command     = "aws"
  }
}

  • This block configures the kubernetes provider, specifying the EKS cluster's endpoint as the host. It employs the aws CLI to authenticate, obtaining the required credentials with the eks get-token command. The cluster's CA certificate is base64-decoded for secure communication.
provider "helm" {
  kubernetes {
    host                   = local.eks_endpoint
    cluster_ca_certificate = base64decode(local.eks_ca_certificate)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      args        = ["eks", "get-token", "--cluster-name", local.cluster_name]
      command     = "aws"
    }
  }
}

  • The helm provider block configures Helm to interact with the EKS cluster. It specifies the EKS cluster's endpoint and CA certificate, just like the kubernetes provider, and uses the aws CLI with the eks get-token command for authentication.
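As an aside, exec-based authentication is not the only option. A sketch of an alternative (an assumption, not the article's approach): the aws_eks_cluster_auth data source can fetch a token and pass it to the provider directly.

```hcl
# Alternative sketch: token-based auth instead of an exec block.
data "aws_eks_cluster_auth" "this" {
  name = local.cluster_name
}

provider "kubernetes" {
  host                   = local.eks_endpoint
  cluster_ca_certificate = base64decode(local.eks_ca_certificate)
  token                  = data.aws_eks_cluster_auth.this.token
}
```

The trade-off: the exec block refreshes credentials on every provider call, while the data-source token is fetched once per run and can expire during long applies.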

To keep things clear, we will create a separate file for the ArgoCD install:

resource "helm_release" "argocd" {
  count            = var.enable_argocd_helm_release ? 1 : 0
  name             = var.argocd_helm_release_name
  namespace        = var.argocd_k8s_namespace
  repository       = var.argocd_helm_repo
  chart            = var.argocd_helm_chart
  version          = var.argocd_helm_chart_version
  timeout          = var.argocd_helm_chart_timeout_seconds
  create_namespace = true
}

  • count is conditional, creating the ArgoCD Helm release only if the variable enable_argocd_helm_release is set to true
  • name and namespace provide a customizable way to name the ArgoCD installation and define the Kubernetes namespace
  • repository, chart and version define the Helm chart details: the repository URL, the chart name, and the desired version to deploy
  • timeout specifies the maximum time (in seconds) Terraform waits for the Helm chart to be deployed before timing out
  • create_namespace tells Terraform to create the Kubernetes namespace specified in var.argocd_k8s_namespace if it does not already exist, ensuring the designated namespace is available for the ArgoCD deployment
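If you later need to customize the chart, the release block can take value overrides. A sketch only (these lines would go inside the helm_release "argocd" block above; server.replicas is an example key from the argo-cd chart, not part of the original setup):

```hcl
  # Example inline value override for the argo-cd chart (sketch, not
  # part of the original configuration): run two ArgoCD server replicas.
  set {
    name  = "server.replicas"
    value = "2"
  }
```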

Since we are using variables, let's define them in a separate variables file:

variable "enable_argocd_helm_release" {
  type        = bool
  default     = true
  description = "Enable/disable ArgoCD Helm chart deployment on EKS"
}

variable "argocd_helm_repo" {
  type        = string
  default     = "https://argoproj.github.io/argo-helm"
  description = "ArgoCD Helm chart repository URL"
}

variable "argocd_helm_chart" {
  type        = string
  default     = "argo-cd"
  description = "ArgoCD Helm chart name"
}

variable "argocd_helm_release_name" {
  type        = string
  default     = "argocd"
  description = "ArgoCD Helm release name"
}

variable "argocd_helm_chart_version" {
  type        = string
  default     = "5.16.14"
  description = "ArgoCD Helm chart version to deploy"
}

variable "argocd_helm_chart_timeout_seconds" {
  type        = number
  default     = 300
  description = "Timeout value for Helm chart install/upgrade operations"
}

variable "argocd_k8s_namespace" {
  type        = string
  default     = "argocd"
  description = "Kubernetes namespace to use for the argocd Helm release"
}
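With defaults in place, any of these can be overridden per environment in a terraform.tfvars file, for example (the values shown are illustrative):

```hcl
# Illustrative terraform.tfvars overrides for the variables above.
enable_argocd_helm_release        = true
argocd_helm_chart_version         = "5.16.14"
argocd_k8s_namespace              = "argocd"
argocd_helm_chart_timeout_seconds = 600
```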


To initialize Terraform and download the required providers and modules, run:

terraform init

You can also check which resources terraform is planning to create by running:

terraform plan

To provision resources run:

terraform apply


After Terraform applies, you should see output similar to the following:

Apply complete! Resources: 61 added, 0 changed, 0 destroyed.


connect_to_eks = "aws eks --region <YOUR_REGION> update-kubeconfig --name <CLUSTER_NAME> --profile default"
endpoint = "<CLUSTER_ENDPOINT>"
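The outputs above could be produced by an outputs block along these lines (a sketch; it assumes var.region and local.cluster_name are defined in the sample code):

```hcl
# Sketch of outputs that would yield the values shown above.
output "connect_to_eks" {
  description = "Command to generate a kubeconfig for the new cluster"
  value       = "aws eks --region ${var.region} update-kubeconfig --name ${local.cluster_name} --profile default"
}

output "endpoint" {
  description = "EKS cluster API endpoint"
  value       = module.eks.cluster_endpoint
}
```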

Execute the command from the connect_to_eks output to generate the kubeconfig file:

aws eks --region <YOUR_REGION> update-kubeconfig --name <CLUSTER_NAME> --profile default

Verify connectivity to the cluster with kubectl:

kubectl get no

You should see a list of nodes:

NAME                        STATUS   ROLES    AGE   VERSION
ip-10-0-1-9.ec2.internal    Ready    <none>   33m   v1.27.7-eks-e71965b
ip-10-0-3-76.ec2.internal   Ready    <none>   34m   v1.27.7-eks-e71965b

Now we can check if ArgoCD is up and running:

kubectl get po --namespace argocd

NAME                                                READY   STATUS    RESTARTS      AGE
argocd-application-controller-0                     1/1     Running   0             35m
argocd-applicationset-controller-7558f88dd6-rkmpj   1/1     Running   0             35m
argocd-dex-server-84bd778f5b-lqzz2                  1/1     Running   2 (34m ago)   35m
argocd-notifications-controller-7bf8475f6b-w94k8    1/1     Running   0             35m
argocd-redis-5969b95845-pjc8c                       1/1     Running   0             35m
argocd-repo-server-65d8dd55fb-g8q65                 1/1     Running   0             35m
argocd-server-5b767bb4f-mqhbv                       1/1     Running   0             35m

Now we can retrieve the ArgoCD admin password; run:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
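The secret stores the password base64-encoded, which is why the command above pipes the jsonpath output through base64 -d. A quick illustration of that decoding step with a dummy value (not a real password):

```shell
# Decode a dummy base64 value the same way the command above does.
encoded="c3VwZXItc2VjcmV0"   # base64 encoding of "super-secret" (dummy value)
echo "$encoded" | base64 -d
echo
```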

Save the password from the output, and use it to log in to the app after running the following command:

kubectl -n argocd port-forward svc/argocd-server 8080:80

Open your browser and navigate to localhost:8080. Since we didn't install a certificate, you will see a warning; click Proceed, then log in with the admin username and the password output by the previous command

Now let's deploy a test application to see if everything is working correctly

After you have logged in, do the following:

  1. Click on New App
  2. Fill in the Application Name and Project Name fields; you can leave SYNC POLICY as Manual
  3. In the SOURCE block, add the GitHub repo with the test app and set the path to testapp
  4. In the Destination block, use https://kubernetes.default.svc to install the app on the same cluster where Argo is installed; if you leave the namespace field as default, your app will be created in the default namespace
  5. Click on Create
  6. Click on Sync and then Synchronize
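The GUI steps above can also be expressed declaratively as an ArgoCD Application manifest applied with kubectl (a sketch; the repo URL is a placeholder you would replace with your own):

```yaml
# Declarative equivalent of the GUI steps above (sketch).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: testapp                # step 2: Application Name
  namespace: argocd
spec:
  project: default             # step 2: Project Name
  source:
    repoURL: https://github.com/<YOUR_ORG>/<YOUR_REPO>.git   # step 3: repo (placeholder)
    targetRevision: HEAD
    path: testapp              # step 3: path
  destination:
    server: https://kubernetes.default.svc   # step 4: same cluster as Argo
    namespace: default                       # step 4: target namespace
  # No syncPolicy block: sync stays Manual, as in step 2.
```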

You should see the status of the app as Healthy and Synced. Now let's check if it is up and running:

kubectl get po
NAME                     READY   STATUS    RESTARTS   AGE
nginx-57d84f57dc-mgzpl   1/1     Running   0          2m27s

As you can see, the nginx application is up and running!

You can find the source code in our GitHub repo