I am working on an app with multiple pieces hosted on Kubernetes: a static web app and a backend service. The static web app calls the backend service, and each is hosted in its own Deployment behind a ClusterIP Service. My team is the first in the organization to try a Kubernetes deployment, and we do not have access to full-time Kubernetes IT staff. My role is developer.
The problem I am having is with path access: when the app is opened at https://serviceurl/*subpath*, any sub-path serves the static web application's index.html on that route, rather than from the root https://serviceurl. This breaks a basic redirect the app uses to obtain user credentials. After the redirect, the app shows "Sorry, there's nothing at this address", which is what Blazor does by default for an unknown route. The redirect address looks like https://serviceurl/authentication/authenticate?auth_token=authtokenbase64, for example. Instead of the app's index.html being served from https://serviceurl/index.html, it is served from https://serviceurl/authentication/index.html, and the app then looks for a route named "authenticate". The same thing happens with a URL such as https://serviceurl/route1 when route1 is navigated to within the app and the page is then refreshed.
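For reference, the "nothing at this address" message comes from Blazor's default `<NotFound>` content. The stock App.razor router in the project template looks roughly like this (ours is based on the template):

```razor
<Router AppAssembly="@typeof(App).Assembly">
    <Found Context="routeData">
        <RouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" />
    </Found>
    <NotFound>
        <LayoutView Layout="@typeof(MainLayout)">
            <p role="alert">Sorry, there's nothing at this address.</p>
        </LayoutView>
    </NotFound>
</Router>
```

So the client-side router is running, but it cannot match the "authenticate" segment as a route.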
Here is an example of the pod’s nginx.conf:
```nginx
events { }

http {
    include mime.types;

    server {
        listen 80;
        server_name $hostname;

        location / {
            root /usr/share/nginx/html;
            try_files $uri$args $uri$args/ /index.html;
            add_header Cache-Control 'max-age=86400';
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }

        index index.html;
    }
}
```
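For comparison, the SPA fallback pattern in most nginx examples I have found separates the `try_files` arguments with spaces, whereas mine concatenates `$uri$args`; I am not sure whether that difference matters here:

```nginx
location / {
    root /usr/share/nginx/html;
    # Serve the requested file if it exists, then the directory,
    # otherwise fall back to the SPA entry point.
    try_files $uri $uri/ /index.html;
}
```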
The Ingress manifest looks like this:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ns-svc-ingress
  namespace: svc-app-dev
spec:
  ingressClassName: nginx
  rules:
    - host: ns.service.int
      http:
        paths:
          - backend:
              service:
                name: svc
                port:
                  number: 80
            path: /
            pathType: Exact
  tls:
    - hosts:
        - int
      secretName: svc-secret
status:
  loadBalancer:
    ingress:
      - ip: 10.#.##.##
```
I believe the issue is with the Kubernetes Ingress rules or with the nginx configuration in the pod. I have been searching for other nginx configurations and swapping out ingress annotations.
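One variant I have been experimenting with, based on SPA examples I found, drops the `rewrite-target` annotation and uses a `Prefix` path instead of `Exact`. I do not know whether this is correct, which is part of why I am asking:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ns-svc-ingress
  namespace: svc-app-dev
spec:
  ingressClassName: nginx
  rules:
    - host: ns.service.int
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: svc
                port:
                  number: 80
  tls:
    - hosts:
        - int
      secretName: svc-secret
```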