You can find the source code in our GitHub repo here.
In this article we will create a Kubernetes cluster on DigitalOcean using Terraform.
Prerequisites:
- DigitalOcean account: you need a DigitalOcean account. If you don't have one, you can sign up at DigitalOcean.
- API token: you need a DigitalOcean API token to authenticate with the DigitalOcean API. You can generate one from the DigitalOcean control panel.
- Terraform installed - here
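To quickly confirm the tooling is in place before starting, both CLIs can print their versions (a simple sanity check; the output depends on your local install):
terraform version
doctl version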
providers.tf
As always, we will start with a providers.tf file in our project folder:
provider "digitalocean" {
token = var.pat_do_token
}
terraform {
required_providers {
digitalocean = {
source = "digitalocean/digitalocean"
version = ">= 2.25.2"
}
}
}
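One note: the random_id resource used in main.tf below comes from the hashicorp/random provider, which terraform init resolves automatically. If you prefer to pin its version explicitly, the terraform block could look like this (an optional sketch; the version constraint is illustrative):
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = ">= 2.25.2"
    }
    # Optional: pin the provider behind random_id; Terraform resolves
    # hashicorp/random automatically even without this block.
    random = {
      source  = "hashicorp/random"
      version = ">= 3.5.0"
    }
  }
}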
main.tf
We will create a cluster with an additional node pool for our example. Create a main.tf file:
resource "random_id" "cluster_name" {
byte_length = 5
}
locals {
doks_cluster_name = "${var.env}-${var.cluster_name_prefix}-${random_id.cluster_name.hex}"
}
resource "digitalocean_kubernetes_cluster" "cluster" {
name = local.doks_cluster_name
region = var.region
version = var.cluster_version
ha = true
node_pool {
name = var.default_node_pool["name"]
size = var.default_node_pool["size"]
node_count = var.default_node_pool["node_count"]
auto_scale = var.default_node_pool["auto_scale"]
min_nodes = var.default_node_pool["min_nodes"]
max_nodes = var.default_node_pool["max_nodes"]
}
}
resource "digitalocean_kubernetes_node_pool" "cluster_extra_node_pool" {
cluster_id = digitalocean_kubernetes_cluster.cluster.id
for_each = var.additional_node_pools
name = each.value.name
size = each.value.size
node_count = each.value.node_count
auto_scale = each.value.auto_scale
min_nodes = each.value.min_nodes
max_nodes = each.value.max_nodes
}
- resource "random_id" "cluster_name": generates 5 random bytes (rendered as a 10-character hex string) that are used to make the DOKS cluster name unique.
- locals: defines a local variable named doks_cluster_name. It combines the environment, the name prefix, and the random ID generated in the previous step to form a unique name for the DOKS cluster.
- resource "digitalocean_kubernetes_cluster" "cluster": the main resource for creating the DOKS cluster. It includes the following key parameters:
  - name: the name of the cluster, set to the unique name generated in the locals block.
  - region: the DigitalOcean region where the cluster will be created.
  - version: the Kubernetes version to use in the cluster.
  - ha: a boolean flag indicating whether to create a highly available control plane.
  - node_pool: the default node pool, defined by the var.default_node_pool variable.
- resource "digitalocean_kubernetes_node_pool" "cluster_extra_node_pool": allows you to define additional node pools attached to the cluster. The node pools are created with for_each based on the var.additional_node_pools variable (an optional sketch with labels and tags follows below).
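The digitalocean_kubernetes_node_pool resource also accepts optional arguments such as labels and tags, which can be useful for scheduling and cost tracking. Here is a minimal sketch of the same resource with those arguments added (the label and tag values are illustrative, not part of the original code):
resource "digitalocean_kubernetes_node_pool" "cluster_extra_node_pool" {
  cluster_id = digitalocean_kubernetes_cluster.cluster.id
  for_each   = var.additional_node_pools

  name       = each.value.name
  size       = each.value.size
  node_count = each.value.node_count
  auto_scale = each.value.auto_scale
  min_nodes  = each.value.min_nodes
  max_nodes  = each.value.max_nodes

  # Optional: Kubernetes labels applied to every node in the pool
  labels = {
    pool = each.value.name
  }

  # Optional: DigitalOcean tags for filtering and billing reports
  tags = ["doks", var.env]
}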
To check the available Kubernetes versions, run:
doctl k8s options versions
You will see output like this; we need to use the Slug value in our code:
Slug Kubernetes Version Supported Features
1.28.2-do.0 1.28.2 cluster-autoscaler, docr-integration, ha-control-plane, token-authentication
1.27.6-do.0 1.27.6 cluster-autoscaler, docr-integration, ha-control-plane, token-authentication
1.26.9-do.0 1.26.9 cluster-autoscaler, docr-integration, ha-control-plane, token-authentication
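If you'd rather not hardcode the version slug, the DigitalOcean provider also exposes a digitalocean_kubernetes_versions data source that can look it up for you (an optional sketch; the version_prefix value is illustrative):
data "digitalocean_kubernetes_versions" "current" {
  version_prefix = "1.28."
}

# The cluster resource could then reference it instead of var.cluster_version:
# version = data.digitalocean_kubernetes_versions.current.latest_version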
variables.tf
Now let's declare our variables. Create a variables.tf file:
variable "pat_do_token" {
description = "Personal Access Token to access the DigtialOcean API"
default = "<YOUR_DO_TOKEN>"
}
variable "cluster_name_prefix" {
type = string
default = "k8s"
description = "DOKS cluster name prefix value (a random suffix is appended automatically)"
}
variable "env" {
type = string
default = "test"
description = "DOKS Kubernetes environment"
}
variable "cluster_version" {
type = string
default = "1.28.2-do.0"
description = "DOKS Kubernetes version"
}
variable "region" {
type = string
default = "syd1"
description = "DO region name"
}
variable "default_node_pool" {
type = map(any)
default = {
name = "default-pool"
node_count = 1
size = "s-2vcpu-4gb"
auto_scale = true
min_nodes = 1
max_nodes = 5
}
description = "DOKS cluster default node pool configuration"
}
variable "additional_node_pools" {
type = map(object({
name = string
node_count = number
size = string
auto_scale = bool
min_nodes = number
max_nodes = number
}))
default = {
additional-pool-1 = {
name = "additional-pool-1"
node_count = 1
size = "s-2vcpu-4gb"
auto_scale = true
min_nodes = 1
max_nodes = 5
}
# additional-pool-2 = {
# name = "additional-pool-2"
# node_count = 2
# size = "s-4vcpu-8gb"
# auto_scale = true
# min_nodes = 2
# max_nodes = 10
# }
# Add more node pool configurations as needed
}
description = "DOKS cluster extra node pool configurations"
}
Please note: it is a better approach to provide your DO token via an environment variable or an input variable rather than hardcoding it in the configuration.
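For example, Terraform reads any environment variable prefixed with TF_VAR_ as an input variable value, so the token never needs to live in the .tf files (the value below is a placeholder):
export TF_VAR_pat_do_token="<YOUR_DO_TOKEN>"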
outputs.tf
Let's provide some outputs:
output "cluster_id" {
value = digitalocean_kubernetes_cluster.cluster.id
}
output "cluster_name" {
value = digitalocean_kubernetes_cluster.cluster.name
}
output "update_config" {
value = "doctl kubernetes cluster kubeconfig save ${digitalocean_kubernetes_cluster.cluster.name}"
}
update_config - gives us the command to update the kubeconfig file so we can connect to the cluster
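If you also want Terraform itself to hand you the kubeconfig, the cluster resource exposes it via its kube_config attribute. A minimal sketch of such an output (marked sensitive because it contains cluster credentials):
output "kube_config" {
  value     = digitalocean_kubernetes_cluster.cluster.kube_config[0].raw_config
  sensitive = true
}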
Deployment
To initialize Terraform and download the required providers, run:
terraform init
You can also check which resources Terraform is planning to create by running:
terraform plan
To provision the resources, run:
terraform apply
After terraform apply completes, you will see output similar to this:
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
cluster_id = "8ee4c5a9-3e11-4a9e-818c-3aafe8fa0b97"
cluster_name = "test-k8s-a8f9444b02"
update_config = "doctl kubernetes cluster kubeconfig save test-k8s-a8f9444b02"
Verify connectivity to the cluster
First, execute the command from the update_config output to generate the kubeconfig file:
doctl kubernetes cluster kubeconfig save test-k8s-a8f9444b02
Verify connectivity to the cluster with kubectl:
kubectl get no
You should see a list of nodes:
NAME STATUS ROLES AGE VERSION
additional-pool-1-xn25o Ready <none> 49s v1.28.2
default-pool-xn2pv Ready <none> 3m45s v1.28.2
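When you no longer need the cluster, everything created here can be torn down with a single command:
terraform destroy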
You can find the source code in our GitHub repo here.