I’m using ECS with an Auto Scaling group as a capacity provider to deploy tasks on EC2, with the task placement strategy set to binpack on memory. However, ECS scales out a new EC2 instance for every new task, even when an existing instance has enough CPU and memory available, so each instance ends up running exactly one task. With binpack I expect all tasks to be packed onto a single EC2 instance as long as it has sufficient memory.
Here are the possible causes I’ve already ruled out:
- Port conflict: The network mode is set to awsvpc in the ECS task definition, so each task gets its own ENI, which prevents port conflicts.
- EC2 storage: Each EC2 instance has 30 GB of storage (EBS gp3). My container is an nginx-based web app serving about 1 MB of static files, so storage is more than sufficient for running multiple containers.
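For completeness, this is a sketch of how I inspected the capacity the ECS scheduler sees on the existing instances (`my-cluster` and the instance ARN are placeholders for my actual values):

```shell
# List the container instances registered to the cluster.
aws ecs list-container-instances --cluster my-cluster

# Show each instance's remaining CPU/memory as tracked by the ECS scheduler.
aws ecs describe-container-instances \
  --cluster my-cluster \
  --container-instances <instance-arn-from-previous-command> \
  --query 'containerInstances[].{arn: containerInstanceArn, remaining: remainingResources}'
```

The `remainingResources` figures suggest the existing instance has headroom for additional tasks.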
The following configurations might be related to this issue:
**Capacity provider configuration**

```yaml
capacity provider: autoscaling group
base: 0
weight: 100
target capacity: 100%
managed instance draining: true
managed instance scaling: true
scale in protection: false
```
**ECS service configuration**

```yaml
desired count: 1
placement strategy:
  type: binpack
  field: memory
scheduling strategy: REPLICA
service connect:
  enabled: true
  namespace: my_namespace
  services:
    - port name: web
      discovery name: hello
      client aliases:
        - port: 80
          dns name: hello
```
**ECS task definition**

```yaml
network mode: awsvpc
container definitions:
  - name: web
    image: nginx
    port mappings:
      - name: web
        container port: 80
        protocol: tcp
```
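To rule out a misapplied deployment, I also verified that the binpack strategy is actually recorded on the running service (cluster and service names are placeholders):

```shell
# Confirm the placement strategy the service is deployed with.
aws ecs describe-services \
  --cluster my-cluster \
  --services my-service \
  --query 'services[].placementStrategy'
```

It reports `type: binpack` on `field: memory`, matching the configuration above.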
Any insights or suggestions?