Summary: I cannot get my Azure Container Apps to communicate with each other internally without exposing the app ingress to the public.
I get the blue “Error 404 – This container app is stopped or does not exist” response to HTTP requests. That is expected for external requests, but it shouldn’t occur for internal requests within the container app environment.
My real question is: how do I debug this? There is very little information available.
I have a .NET Aspire project successfully deploying to Azure Container Apps.
The first container app is a YARP reverse proxy (accepting public requests).
The second container app is an ASP.NET web app (accepting only internal requests).
I use the default ACA networking, that is, I have not configured my own VNet integration.
If the internal app’s ingress is enabled with ‘Accepting traffic from anywhere’, then everything works exactly as expected: I can connect to the backend both directly and through the proxy.
If I restrict the internal app with an IP whitelist, then connecting externally works exactly as expected. That is, depending on whether my IP is whitelisted, I get either (a) an RBAC error message with HTTP status 403 or (b) a successful connection.
I cannot, however, find any IP address to whitelist that allows the proxy to connect to the backend.
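For reference, this is roughly how I am applying the restriction; the app, resource group, rule name, and IP are placeholders, not my real values:

```shell
# Placeholder names; substitute your own backend app, resource group, and IP.
# Add an Allow rule to the backend's ingress IP restrictions:
az containerapp ingress access-restriction set \
  --name my-backend \
  --resource-group my-rg \
  --rule-name allow-my-ip \
  --ip-address 203.0.113.10/32 \
  --action Allow
```

As soon as any Allow rule exists, everything not matching a rule is denied, which is why the proxy stops getting through.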
Most importantly, what I get instead is the blue “Error 404 – This container app is stopped or does not exist” page.
I have played around with (and I believe I fully understand) the service discovery going on. The service names correctly resolve to the .internal. address. I tried random things like hard-coding the IP address or using the external address, to no avail.
I’ve tried many combinations of HTTP and HTTPS, disabling HTTPS redirection, etc. I can make different errors occur (like SSL handshake failures) but I cannot get past this one.
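To illustrate the kind of probing I have been doing: opening a shell inside a proxy replica and curling the backend’s internal FQDN directly, to see whether the 404 comes from the environment’s edge or from the backend. All names and the environment’s default domain below are placeholders:

```shell
# Placeholder names; replace with your own apps, resource group, and
# your environment's default domain.
# Open a shell inside a running replica of the proxy app:
az containerapp exec --name my-proxy --resource-group my-rg --command sh

# From inside the replica, probe the backend's internal FQDN directly.
# -v shows the resolved IP and the response headers, which hint at
# whether the environment's edge proxy or the backend itself answered:
curl -v http://my-backend.internal.happyhill-1234abcd.westeurope.azurecontainerapps.io
```

Even from inside the environment, the IP-restricted backend gives me the same blue 404 page.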
I am not using any integrated authentication.
I can see from the YARP logs that the service names are being resolved to the correct DNS names.
Both container apps are definitely in the same Azure Container Apps environment.
I’m starting to think that there are bugs in the Envoy proxy that Microsoft provides at the edge of the container environment. I’m guessing it uses the hostname in the HTTP request to infer which container app to forward to, but none of this is documented and I don’t know how to debug it.
I am surprised that if I resolve the public and internal container app endpoint addresses, they both resolve to the same IP address, although that might be because I’m not using the internal DNS servers to do that resolution.
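For what it’s worth, this is how I compared the two resolutions from my own machine (the default domain is a placeholder); from outside the environment both lookups go through public DNS, which may explain the identical answers:

```shell
# Placeholder FQDNs; substitute your environment's default domain.
# Public endpoint of the proxy:
nslookup my-proxy.happyhill-1234abcd.westeurope.azurecontainerapps.io
# Internal endpoint of the backend (note the ".internal." label):
nslookup my-backend.internal.happyhill-1234abcd.westeurope.azurecontainerapps.io
```

Both return the same address for me. Running the same lookups from inside a replica (where the environment’s internal DNS is used) might give a different answer, but I have not been able to confirm that.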