I am trying to create a sample .NET Core Linux application on Elastic Beanstalk with a Load Balancer and a max of 1 instance (I only need the load balancer for the SSL certificate and Route 53). But when the Auto Scaling group is created, the EC2 instance attached to it has a red health check, and the environment build fails with the following errors:
Stack named 'awseb-xxxxx-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBInstanceLaunchWaitCondition].
The EC2 instances failed to communicate with AWS Elastic Beanstalk, either because of configuration problems with the VPC or a failed EC2 instance. Check your VPC configuration and try launching the environment again.
This stack is a default stack with the default Linux x64 AMI chosen for a t3.micro instance (I am using the AWS free tier). The VPC is also the default one with 3 subnets (all 3 of them were selected for both the application and the load balancer).
I have tried playing around with the security groups for both the EC2 instance and the Load Balancer, but to no avail. I think the issue is hiding in the health check of the EC2 instance, which is weird, since I am using a Sample Application.
I have been googling and ChatGPTing my way through this for 2 days now, but I cannot find a definitive answer. The IAM role associated with this stack has admin access on almost everything related to Elastic Beanstalk, EC2, ECS, RDS, and the other services I am using. I am working entirely in the browser AWS Console, no PowerShell/command prompt or bash.
As you can see from the screenshot, the Target Group shows that a health check has failed for the registered EC2 instance.
The problem was that I did not enable "Assign Public IP Address" when creating the environment in the EB Console. Without it, the EC2 instances were unable to communicate with Elastic Beanstalk, which resulted in the red health status in the Target Group.
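For anyone who wants this setting captured in source control rather than as a console checkbox, the same option can be set through an `.ebextensions` config file in the application bundle. A minimal sketch (the filename `public-ip.config` is arbitrary; the `aws:ec2:vpc` namespace and `AssociatePublicIpAddress` option are standard Elastic Beanstalk option settings):

```yaml
# .ebextensions/public-ip.config
# Assign a public IP to instances launched in the VPC so they can
# reach the Elastic Beanstalk service endpoints (the fix described above).
option_settings:
  aws:ec2:vpc:
    AssociatePublicIpAddress: true
```

Note that this only works when the instances are launched into public subnets; for private subnets the alternative is a NAT gateway so instances can still reach the Elastic Beanstalk endpoints.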