I’m encountering an issue when trying to create an EKS node group using Terraform. The error I receive is:
<code>Error: waiting for EKS Node Group (my-cluster:my-node-group) create: unexpected state 'CREATE_FAILED', wanted target 'ACTIVE'. last error: i-06f4cfada8dede845, i-08057757840dbef04: NodeCreationFailure: Instances failed to join the kubernetes cluster
│
│ with aws_eks_node_group.my_node_group,
│ on main.tf line 263, in resource "aws_eks_node_group" "my_node_group":
│ 263: resource "aws_eks_node_group" "my_node_group" {
</code>
Here is my Terraform code:
<code>terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.48.0"
    }
  }
}

# Define AWS region
provider "aws" {
  region = "ap-south-1" # Replace with your desired region
}

variable "source_cidr_block" {
  description = "CIDR block for allowed inbound traffic"
  type        = string
  default     = "0.0.0.0/0" # Default value, replace with your desired CIDR block
}

variable "cluster_sg_name" {
  type    = string
  default = "uptime_cluster_sg_name"
}

# Create AWS VPC
resource "aws_vpc" "uptime_cluster_vpc" {
  cidr_block = "172.240.0.0/16"
}

# Create AWS Security Group for Kubernetes cluster nodes
resource "aws_security_group" "sg-123abc" {
  name   = var.cluster_sg_name
  vpc_id = aws_vpc.uptime_cluster_vpc.id

  ingress {
    from_port   = 6443
    to_port     = 6443
    protocol    = "tcp"
    cidr_blocks = [var.source_cidr_block]
  }
}

# Create AWS Subnets for Kubernetes cluster
resource "aws_subnet" "subnet_abc123" {
  vpc_id            = aws_vpc.uptime_cluster_vpc.id
  cidr_block        = "172.240.0.0/17"
  availability_zone = "ap-south-1a"
}

resource "aws_subnet" "subnet_def456" {
  vpc_id            = aws_vpc.uptime_cluster_vpc.id
  cidr_block        = "172.240.128.0/26"
  availability_zone = "ap-south-1b"
}

resource "aws_eks_cluster" "my_cluster" {
  name     = "my-cluster"
  role_arn = aws_iam_role.my_eks_cluster_role.arn

  vpc_config {
    subnet_ids              = [aws_subnet.subnet_abc123.id, aws_subnet.subnet_def456.id] # Specify your subnet IDs
    security_group_ids      = [aws_security_group.sg-123abc.id] # Specify your security group ID
    endpoint_public_access  = true
    endpoint_private_access = true
  }

  tags = {
    Environment = "Test"
  }
}

resource "aws_iam_role" "my_eks_cluster_role" {
  name = "my-eks-cluster-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_eks_node_group" "my_node_group" {
  cluster_name    = aws_eks_cluster.my_cluster.name
  node_group_name = "my-node-group"
  node_role_arn   = aws_iam_role.my_node_group_role.arn

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }

  instance_types = ["t2.micro"] # Specify your desired instance type
  subnet_ids     = [aws_subnet.subnet_abc123.id, aws_subnet.subnet_def456.id] # Specify your subnet IDs

  tags = {
    Environment = "Test"
  }
}

resource "aws_iam_role" "my_node_group_role" {
  name = "my-node-group-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
</code>
Here is what I have already checked:
- Verified that the subnet IDs and security group ID exist and are correct.
- Ensured that the IAM roles for the EKS cluster and node group have the necessary policies attached.
- Confirmed that the VPC and subnets are correctly configured.

Despite these checks, the node instances still fail to join the Kubernetes cluster. What could be causing this issue, and how can I resolve it? Any help would be appreciated!
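For reference, the policy attachments mentioned in the second bullet are not part of the config pasted above. They look roughly like the sketch below (the <code>aws_iam_role_policy_attachment</code> resource names here are placeholders I chose for this post, not names from my actual config; the policy ARNs are the AWS-managed worker-node policies):
<code># Sketch only: attaches the AWS-managed policies an EKS node role needs.
# Resource names ("node_worker_policy", etc.) are placeholders.
resource "aws_iam_role_policy_attachment" "node_worker_policy" {
  role       = aws_iam_role.my_node_group_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "node_cni_policy" {
  role       = aws_iam_role.my_node_group_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "node_ecr_policy" {
  role       = aws_iam_role.my_node_group_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}
</code>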