In today's article we will write Terraform code that creates an EKS cluster and reads a value from a secret in AWS Secrets Manager.
Link to the GitHub code repo here
Prerequisites:
- AWS Account: You must have an active AWS account. If you don't have one, you can sign up on the AWS website here
- IAM User or Role: Create an IAM (Identity and Access Management) user or role in your AWS account with the necessary permissions to create and manage EKS clusters. At a minimum, the user or role should have permissions to create EKS clusters, EC2 instances, VPCs, and related resources.
- AWS CLI: Install and configure the AWS Command Line Interface (CLI) on your local machine. You'll use the AWS CLI to interact with your AWS account and configure your AWS credentials. You can download it here
- Terraform Installed: Install Terraform on your local machine. You can download Terraform from the official Terraform website and follow the installation instructions for your operating system here
Provider.tf
Since we are going to create an EKS cluster, a Deployment, and an AWS secret using Terraform, we need to specify the following providers:
provider "aws" {
region = var.aws_region
profile = var.aws_profile
}
terraform {
required_version = ">= 0.13"
required_providers {
kubectl = {
source = "gavinbunney/kubectl"
version = ">= 1.7.0"
}
aws = {
source = "hashicorp/aws"
version = "5.0.0"
}
helm = {
source = "hashicorp/helm"
version = "2.3.0"
}
}
}
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
args = ["eks", "get-token", "--cluster-name", "eks-with-secrets", "--profile", "default"]
command = "aws"
}
}
provider "helm" {
kubernetes {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.default.token
}
}
provider "kubectl" {
apply_retry_count = 5
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
load_config_file = false
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", "eks-with-secrets", "--profile", "default"]
}
}
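If you prefer not to hardcode the cluster name and profile in the exec blocks, you can reference the EKS module output and the aws_profile variable instead. A minimal sketch of the kubernetes provider rewritten this way (module.eks.cluster_name is an output of the EKS module we define below in main.tf):

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # reference the module output and variable instead of literals
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name, "--profile", var.aws_profile]
  }
}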
Main.tf
In our main.tf file we will create a VPC and an EKS cluster, reusing the code from our previous articles:
locals {
  cluster_name = "${var.env}-eks-${random_string.suffix.result}"
}

resource "random_string" "suffix" {
  length  = 8
  special = false
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2"

  name = var.vpc_name
  cidr = var.cidr

  azs             = var.aws_availability_zones
  private_subnets = var.private_subnets
  public_subnets  = var.public_subnets

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = 1
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.16.0"

  cluster_name                   = "eks-with-secrets"
  cluster_endpoint_public_access = true
  enable_irsa                    = true
  cluster_version                = "1.27"

  subnet_ids = module.vpc.private_subnets
  vpc_id     = module.vpc.vpc_id

  eks_managed_node_groups = {
    on_demand_1 = {
      min_size       = 1
      max_size       = 3
      desired_size   = 1
      instance_types = ["t3.small"]
      capacity_type  = "ON_DEMAND"
    }
  }
}

data "aws_eks_cluster_auth" "default" {
  name = "eks-with-secrets"
}
IAM.tf
In your project directory create a file iam.tf:
data "aws_iam_policy_document" "assume_role_policy" {
statement {
actions = [
"sts:AssumeRoleWithWebIdentity"]
effect = "Allow"
condition {
test = "StringEquals"
variable = "${replace(module.eks.cluster_oidc_issuer_url, "https://", "")}:sub"
values = [
"system:serviceaccount:default:secrets-sa"]
}
principals {
identifiers = [
module.eks.oidc_provider_arn]
type = "Federated"
}
}
}
resource "aws_iam_role" "secret_role" {
assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
name = "secret-role"
}
resource "aws_iam_policy" "policy" {
name = "aws-secrets-manager-policy-eks-cluster"
description = "A SM policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [ {
"Effect": "Allow",
"Action": ["secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret"],
"Resource": ["arn:*:secretsmanager:*:*:secret:*"]
} ]
}
EOF
}
resource "aws_iam_role_policy_attachment" "sm-attach" {
role = aws_iam_role.secret_role.name
policy_arn = aws_iam_policy.policy.arn
}
"aws_iam_policy_document" "assume_role_policy"
- here we will allow service account of our cluster to assume a role"aws_iam_role" "secret_role"
- we are creating a role named secret-role with attached policy generated before"aws_iam_policy" "policy"
- we create a policy to allow actions in AWS Secrets Manager"aws_iam_role_policy_attachment" "sm-attach"
- we attach a policy to a created secret-role
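Optionally, you can tighten the trust policy by also checking the audience claim of the OIDC token, which AWS recommends for IRSA roles. A sketch of the extra condition block, added inside the same statement:

condition {
  test     = "StringEquals"
  variable = "${replace(module.eks.cluster_oidc_issuer_url, "https://", "")}:aud"
  values   = ["sts.amazonaws.com"]
}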
Secret.tf
Let's create a secret in AWS Secrets Manager:
resource "aws_secretsmanager_secret" "db_secret" {
name = "db-secret"
}
resource "null_resource" "set_secret_value" {
triggers = {
secret_id = aws_secretsmanager_secret.db_secret.id
}
provisioner "local-exec" {
command = <<EOT
aws secretsmanager put-secret-value --secret-id ${aws_secretsmanager_secret.db_secret.id} --secret-string '{"password": "12345"}'
EOT
}
}
data "aws_caller_identity" "current" {}
resource "aws_secretsmanager_secret_policy" "db_secret_policy" {
secret_arn = aws_secretsmanager_secret.db_secret.arn
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Sid = "SecretRoleAccess"
Effect = "Allow"
Principal = {
AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${aws_iam_role.secret_role.name}"
}
Action = [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds"
]
Resource = aws_secretsmanager_secret.db_secret.arn
}
]
})
}
aws_secretsmanager_secret
- this resource creates an empty secret in our AWS account; we call it db-secret
null_resource
- here we execute an AWS CLI command to set a key and value on the created secret
aws_secretsmanager_secret_policy
- creates a resource policy that allows secret-role to access the secret
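As a side note, Terraform also has a native aws_secretsmanager_secret_version resource that can replace the null_resource approach; a minimal sketch (be aware the secret value will then be stored in your Terraform state):

resource "aws_secretsmanager_secret_version" "db_secret_value" {
  secret_id     = aws_secretsmanager_secret.db_secret.id
  secret_string = jsonencode({ password = "12345" })
}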
secret-store-k8s.tf
Now we will create a secret-store-k8s.tf file:
resource "helm_release" "secret-store" {
name = "secrets-store"
namespace = "kube-system"
repository = "https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts"
chart = "secrets-store-csi-driver"
set {
name = "syncSecret.enabled"
value = true
}
set {
name = "enableSecretRotation"
value = true
}
}
resource "helm_release" "aws_secrets_provider" {
name = "aws-secrets-provider"
namespace = "kube-system"
repository = "https://aws.github.io/secrets-store-csi-driver-provider-aws"
chart = "secrets-store-csi-driver-provider-aws"
}
resource "kubernetes_service_account" "csi-aws" {
metadata {
name = "csi-secrets-store-provider-aws"
namespace = "kube-system"
}
}
resource "kubectl_manifest" "secret-class" {
depends_on = [module.eks]
yaml_body = <<YAML
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: secret
spec:
provider: aws
secretObjects:
- secretName: databasepassword
type: Opaque
data:
- objectName: "MySecretPassword"
key: password
parameters:
objects: |
- objectName: "${aws_secretsmanager_secret.db_secret.arn}"
jmesPath:
- path: "password"
objectAlias: "MySecretPassword"
YAML
}
"helm_release" "secret-store"
- This block deploys the Secrets Store CSI Driver into the Kubernetes cluster."helm_release" "aws_secrets_provider"
- This block deploys the AWS Secrets Provider for the Secrets Store CSI Driver into the Kubernetes cluster. It allows you to use AWS Secrets Manager as a source for your Kubernetes secrets."kubernetes_service_account" "csi-aws"
- This resource creates a Kubernetes service account. It is used to define the service account associated with pods that use the Secrets Store CSI Driver."kubectl_manifest" "secret-class"
- This resource creates a SecretProviderClass and specifies the provider as "aws." This class is used for dynamically creating Kubernetes secrets based on AWS Secrets Manager secrets.
The spec block includes configurations to map AWS Secrets Manager secrets to Kubernetes secrets. It defines the AWS secret object and how to map its fields to the Kubernetes secret.
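If you enable rotation, you can also control how often the driver polls Secrets Manager for changes via the chart's rotationPollInterval value; a sketch of an extra set block inside the secret-store release:

set {
  name  = "rotationPollInterval"
  value = "3600s"
}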
Deployment.tf
Now let's create a sample deployment to check that it can retrieve our secret value. Create a deployment.tf:
resource "kubectl_manifest" "test_deployment" {
depends_on = [module.eks, kubectl_manifest.secrets-sa, aws_secretsmanager_secret.db_secret]
yaml_body = <<YAML
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
spec:
replicas: 1
selector:
matchLabels:
app: test-app
template:
metadata:
labels:
app: test-app
spec:
containers:
- image: nginx
name: test-app
volumeMounts:
- name: secret-volume
mountPath: "/var/run/secrets/db-secret"
env:
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: databasepassword
key: password
serviceAccountName: secrets-sa
volumes:
- name: secret-volume
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
objectVersion: AWSCURRENT
secretProviderClass: secret
YAML
}
resource "kubectl_manifest" "secrets-sa" {
depends_on = [module.eks]
yaml_body = <<YAML
apiVersion: v1
kind: ServiceAccount
metadata:
name: secrets-sa
annotations:
eks.amazonaws.com/role-arn: ${aws_iam_role.secret_role.arn}
automountServiceAccountToken: true
secrets:
- name: token
YAML
}
"kubectl_manifest" "test_deployment"
- This block creates a Kubernetes Deployment named "test-deployment."
The Deployment manages a single replica of an Nginx container, which serves as a simple test application. It references a ServiceAccount named "secrets-sa," which provides the necessary permissions to access secrets securely. A Kubernetes Secret named "databasepassword" is mounted into the container as a volume, and the application accesses the DATABASE_PASSWORD environment variable from this secret.
"kubectl_manifest" "secrets-sa"
- his block defines a Kubernetes ServiceAccount named "secrets-sa." The ServiceAccount is associated with an AWS IAM role, and it has annotations specifying the AWS IAM role ARN (provided as ${aws_iam_role.secret_role.arn}). It also specifies that the ServiceAccount can use a secret named "token."
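As an aside, since the kubernetes provider is already configured, the same ServiceAccount could be managed with a native Terraform resource instead of a raw manifest; a minimal sketch equivalent to the kubectl_manifest above:

resource "kubernetes_service_account" "secrets_sa" {
  metadata {
    name      = "secrets-sa"
    namespace = "default"
    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.secret_role.arn
    }
  }
  automount_service_account_token = true
}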
Variables and Outputs
Create a variables.tf:
variable "aws_profile" {
description = "Set this variable if you use another profile besides the default awscli profile called 'default'."
type = string
default = "default"
}
variable "aws_region" {
description = "Set this variable if you use another aws region."
type = string
default = "us-east-1"
}
variable "vpc_name" {
description = "Vpc name that would be created for your cluster"
type = string
default = "EKS_vpc"
}
variable "aws_availability_zones" {
description = "AWS availability zones"
default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}
variable "cidr" {
description = "Cird block for your VPC"
type = string
default = "10.0.0.0/16"
}
variable "env" {
description = "it would be a prefix for you cluster name created, typically specified as dev or test"
type = string
default = "dev"
}
variable "private_subnets" {
description = "private subnets to create, need to have 1 for each AZ"
default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}
variable "public_subnets" {
description = "public subnets to create, need to have 1 for each AZ"
default = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
}
You can also create an outputs.tf:
output "endpoint" {
value = module.eks.cluster_endpoint
}
output "connect_to_eks" {
value = "aws eks --region us-east-1 update-kubeconfig --name ${module.eks.cluster_name} --profile default"
}
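After the apply you can print any of these values at any time with terraform output, for example:
terraform output connect_to_eks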
Deployment and testing
In order to initialize Terraform and download the modules, run:
terraform init
You can also check which resources Terraform is planning to create by running:
terraform plan
To provision resources run:
terraform apply
After the apply completes, you will see output similar to this:
Apply complete! Resources: 66 added, 0 changed, 0 destroyed.
Outputs:
connect_to_eks = "aws eks --region us-east-1 update-kubeconfig --name eks-with-secrets --profile default"
endpoint = "https://B79A2CFEAC562E393A5A77349849333C.gr7.us-east-1.eks.amazonaws.com"
First, execute the command from the connect_to_eks output to generate a kubeconfig file:
aws eks --region us-east-1 update-kubeconfig --name eks-with-secrets --profile default
Verify connectivity to the cluster with kubectl:
kubectl get po
You should see a list of pods in the default namespace:
NAME READY STATUS RESTARTS AGE
test-deployment-5f6bd5b4f-xrvzc 1/1 Running 0 4m17s
Now let's access the pod's shell to see if our secret is available. Replace test-deployment-5f6bd5b4f-xrvzc with your NAME value from the previous command:
kubectl exec -it test-deployment-5f6bd5b4f-xrvzc -- /bin/sh
In a pod shell run:
env
You should see all environment variables printed, similar to this:
KUBERNETES_PORT=tcp://172.20.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=test-deployment-5f6bd5b4f-xrvzc
HOME=/root
.......
PWD=/
DATABASE_PASSWORD=12345
As you can see, our DATABASE_PASSWORD value is set to the value we passed to Secrets Manager. Let's check whether it is also mounted to the file system. Run:
ls /var/run/secrets/db-secret
You should see two files, similar to this:
MySecretPassword arn:aws:secretsmanager:region:888888888:secret:db-secret4-igOhxM
Run:
cat /var/run/secrets/db-secret/MySecretPassword
You should see the same value that we declared. If you want JSON output, you can cat the other file in the folder.
You can find the source code in our GitHub repo.