I am trying to create an ECS cluster (fib-cluster-10) with a minimum of 1 EC2 instance under “Infrastructure”. The ecsInstanceRole I selected has the following policy JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeTags",
        "ecs:CreateCluster",
        "ecs:DeregisterContainerInstance",
        "ecs:DiscoverPollEndpoint",
        "ecs:Poll",
        "ecs:RegisterContainerInstance",
        "ecs:StartTelemetrySession",
        "ecs:UpdateContainerInstancesState",
        "ecs:Submit*",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ecs:TagResource",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ecs:CreateAction": [
            "CreateCluster",
            "RegisterContainerInstance"
          ]
        }
      }
    }
  ]
}
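(As an aside, this looks identical to the AWS managed policy AmazonEC2ContainerServiceforEC2Role; if that's the case, attaching the managed policy saves maintaining an inline copy. A minimal boto3 sketch, using the role name from above:

import boto3

iam = boto3.client("iam")
# Attach the AWS managed instance policy instead of an inline duplicate.
iam.attach_role_policy(
    RoleName="ecsInstanceRole",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role",
)
)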
I’ve created a VPC with 2 private subnets and 2 public subnets
(plus a security group that allows all TCP traffic on the private subnets).
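For reference, a minimal boto3 sketch of that security group rule; the VPC CIDR and the group ID are placeholder assumptions:

import boto3

ec2 = boto3.client("ec2")
# Allow all TCP traffic within the VPC (CIDR assumed to be 10.0.0.0/16).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 0,
        "ToPort": 65535,
        "IpRanges": [{"CidrIp": "10.0.0.0/16"}],
    }],
)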
When creating the cluster, I select the private subnets and leave the other optional settings alone.
When I hit create, I get 0 instances associated with the cluster. When I then try to deploy a TaskDefinition via a service, I get an error saying my cluster has no instances.
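A quick way to confirm whether any container instances actually registered, sketched with boto3 (cluster name from above):

import boto3

ecs = boto3.client("ecs")
# An empty list here means the ECS agent on the EC2 instances never
# managed to reach the ECS endpoint and register with the cluster.
resp = ecs.list_container_instances(cluster="fib-cluster-10")
print(resp["containerInstanceArns"])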
Thanks a lot Mark B, that was the issue – I didn't have Route Table entries linking the private subnets to the public ones via the NAT gateway. This isn't very obvious (I wish there were an AWS article walking through this, and also how to set up an ALB with an ASG and target groups, as it's a complex setup).
Can see the instances now!
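For anyone hitting the same wall: each private subnet's route table needs a default route through a NAT gateway (which itself sits in a public subnet) so the ECS agent on the instances can reach the ECS, ECR, and CloudWatch endpoints and register with the cluster. A minimal boto3 sketch of the missing route; both IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")
# Send all non-local traffic from a private subnet through the NAT gateway
# so instances without public IPs can reach the ECS control plane.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",   # route table of a private subnet
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0123456789abcdef0",   # NAT gateway in a public subnet
)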