I have created an API Management instance configured in internal (VNet) mode.
According to the documentation, an internal load balancer must be used in this scenario so that API Management can reach the AKS services.
The problem is that when I call the API Management endpoint, I get the following error (I am testing from a pod).
The steps I used to configure API Management:
- I configured API Management in internal mode and selected a subnet of the VNet where the AKS cluster is deployed.
- In the custom domains section I added a custom domain called apim-gw.
- I created an NSG, associated it with the APIM subnet, and added the inbound and outbound rules according to the documentation.
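For reference, the NSG rules I added follow the documented requirements for API Management in internal mode. A minimal sketch with the az CLI (the resource group `my-rg` and NSG name `apim-nsg` are placeholders, not my real names):

```shell
# Allow the Azure management plane to reach the APIM instance (required).
az network nsg rule create -g my-rg --nsg-name apim-nsg \
  -n AllowApimManagement --priority 100 --direction Inbound \
  --protocol Tcp --access Allow \
  --source-address-prefixes ApiManagement --source-port-ranges '*' \
  --destination-address-prefixes VirtualNetwork --destination-port-ranges 3443

# Allow Azure infrastructure load balancer health probes (required).
az network nsg rule create -g my-rg --nsg-name apim-nsg \
  -n AllowAzureLB --priority 110 --direction Inbound \
  --protocol Tcp --access Allow \
  --source-address-prefixes AzureLoadBalancer --source-port-ranges '*' \
  --destination-address-prefixes VirtualNetwork --destination-port-ranges 6390
```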
In the Private DNS Zone:
- I created a virtual network link that points to the VNet where the AKS subnet is located.
- I added an A record for apim-gw that resolves to the private IP of the API Management instance.
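These two DNS steps can be sketched with the az CLI as follows (zone name `example.internal`, resource group `my-rg`, and the IP `10.0.1.5` are placeholders for my actual values):

```shell
# Link the private DNS zone to the VNet that hosts the AKS subnet,
# so pods in that VNet can resolve records in the zone.
az network private-dns link vnet create -g my-rg -z example.internal \
  -n aks-vnet-link -v my-aks-vnet -e false

# A record "apim-gw" pointing at the APIM private IP.
az network private-dns record-set a add-record \
  -g my-rg -z example.internal -n apim-gw -a 10.0.1.5
```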
I created the internal load balancer by installing ingress-nginx with the internal-LB annotation (note that the dots inside the annotation key must be escaped for `--set`):

```shell
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace controller \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"="true"
```
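After the install, I confirmed the service got a private IP. A quick check, assuming the default service name that the chart derives from the release name:

```shell
# EXTERNAL-IP should be a private address from the VNet, not a public one.
kubectl get service nginx-ingress-ingress-nginx-controller -n controller
```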
In the API that I imported into API Management, I set the backend URL to the private IP of the internal load balancer.
Checking the ingress logs while running the curl, the request from API Management never shows up, so it appears to be blocked somewhere before it reaches the load balancer.
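To narrow down where the request is dropped, I can also curl the internal load balancer directly from a throwaway pod, bypassing API Management entirely (the IP `10.0.2.4` is a placeholder for the LB's private IP):

```shell
# If this reaches the ingress (and shows up in its logs) while the APIM
# call does not, the problem lies between APIM and the LB, not in AKS.
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -v http://10.0.2.4/
```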
To rule out the NSG, I temporarily added allow-any rules, but I still get the same error.
Additionally, to validate the API Management configuration itself, I created an ingress with a public IP, set it as the backend URL of the same API, and it works that way.
Do I need to configure something additional?