I have an app where I need two different deployments:
- User Requests
- Caching
I set up separate Deployments and Services named myapp-user-deployment / myapp-user-service and myapp-cache-deployment / myapp-cache-service (a sketch of one of these Services is included after the Ingress below), and configured this Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/description: Routing ${APP_NAME} port ${APP_PORT} to ${K8S_INGRESS_HOST} port 80
    nginx.ingress.kubernetes.io/proxy-connect-timeout: '300'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '300'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '300'
    nginx.ingress.kubernetes.io/server-snippet: keepalive_timeout 300;client_body_timeout 300;client_header_timeout 300;
  name: ${K8S_NAMESPACE}-ingress
  namespace: ${K8S_NAMESPACE}
spec:
  ingressClassName: nginx
  rules:
    - host: ${K8S_INGRESS_HOST}
      http:
        paths:
          - path: /cache
            pathType: Prefix
            backend:
              service:
                name: ${K8S_NAMESPACE}-cache-service
                port:
                  number: ${APP_PORT}
          - path: /user
            pathType: Prefix
            backend:
              service:
                name: ${K8S_NAMESPACE}-user-service
                port:
                  number: ${APP_PORT}
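For reference, the two per-role Services are not shown here; a minimal sketch of what the user one looks like (the selector labels are illustrative assumptions, only the names and the port variable come from the manifests above):

apiVersion: v1
kind: Service
metadata:
  name: ${K8S_NAMESPACE}-user-service
  namespace: ${K8S_NAMESPACE}
spec:
  selector:
    app: ${APP_NAME}
    type: user   # assumed label; the cache Service selects its own pods analogously
  ports:
    - name: ${APP_NAME}-tcp
      protocol: TCP
      port: ${APP_PORT}
      targetPort: ${APP_PORT}
  type: ClusterIP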
This routing works perfectly. However, I also have requests for which I need both services, so I added a “combined” Service in front of an nginx proxy Deployment whose ConfigMap load-balances requests for the prefix /.
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/description: Routing ${APP_NAME} port ${APP_PORT} to ${K8S_INGRESS_HOST} port 80
    nginx.ingress.kubernetes.io/proxy-connect-timeout: '300'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '300'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '300'
    nginx.ingress.kubernetes.io/server-snippet: keepalive_timeout 300;client_body_timeout 300;client_header_timeout 300;
  name: ${K8S_NAMESPACE}-ingress
  namespace: ${K8S_NAMESPACE}
spec:
  ingressClassName: nginx
  rules:
    - host: ${K8S_INGRESS_HOST}
      http:
        paths:
          - path: /cache
            pathType: Prefix
            backend:
              service:
                name: ${K8S_NAMESPACE}-cache-service
                port:
                  number: ${APP_PORT}
          - path: /user
            pathType: Prefix
            backend:
              service:
                name: ${K8S_NAMESPACE}-user-service
                port:
                  number: ${APP_PORT}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ${K8S_NAMESPACE}-service
                port:
                  number: ${APP_PORT}
Service
apiVersion: v1
kind: Service
metadata:
  name: ${K8S_NAMESPACE}-service
  namespace: ${K8S_NAMESPACE}
spec:
  selector:
    app: ${APP_NAME}
    type: combined
  ports:
    - name: ${APP_NAME}-tcp
      protocol: TCP
      port: ${APP_PORT}
      targetPort: ${APP_PORT}
  type: ClusterIP
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${K8S_NAMESPACE}-deployment
  namespace: ${K8S_NAMESPACE}
spec:
  strategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  selector:
    matchLabels:
      app: ${APP_NAME}
      type: combined
  replicas: ${K8S_REPLICAS_MIN}
  template:
    metadata:
      labels:
        app: ${APP_NAME}
        environment: ${NODE_ENV}
        type: combined
    spec:
      containers:
        - name: ${APP_NAME}-combined-proxy
          image: nginx:1.26
          resources:
            limits:
              cpu: "500m"
              memory: "512Mi"
            requests:
              cpu: "250m"
              memory: "128Mi"
          ports:
            - containerPort: ${APP_PORT}
          env:
            - name: K8S_NAMESPACE
              value: ${K8S_NAMESPACE}
            - name: APP_PORT
              value: "${APP_PORT}"
          startupProbe:
            httpGet:
              path: /services/about
              port: ${APP_PORT}
            failureThreshold: 30
            initialDelaySeconds: 2
            periodSeconds: 1
            successThreshold: 1
            timeoutSeconds: 5
          livenessProbe:
            httpGet:
              path: /services/about
              port: ${APP_PORT}
            failureThreshold: 3
            initialDelaySeconds: 1
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 10
          volumeMounts:
            - mountPath: /etc/nginx/nginx.conf
              name: nginx-config
              subPath: nginx.conf
      nodeSelector:
        purpose: "default"
      volumes:
        - name: nginx-config
          configMap:
            name: ${K8S_NAMESPACE}-combined-proxy-config
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: ${K8S_NAMESPACE}-combined-proxy-config
  namespace: ${K8S_NAMESPACE}
data:
  nginx.conf: |
    events {
      worker_connections 1024;
    }
    http {
      upstream backend {
        server $K8S_NAMESPACE-cache-service.$K8S_NAMESPACE.svc.cluster.local:80;
        server $K8S_NAMESPACE-user-service.$K8S_NAMESPACE.svc.cluster.local:80;
      }
      server {
        listen $APP_PORT;
        location / {
          proxy_pass http://backend;
        }
      }
    }
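To make the proxy behaviour concrete: with the placeholders substituted (hypothetical example values, K8S_NAMESPACE=myapp and APP_PORT=8080), the nginx.conf the combined proxy loads boils down to this:

events {
  worker_connections 1024;
}
http {
  # round-robin between the two per-role Services
  upstream backend {
    server myapp-cache-service.myapp.svc.cluster.local:80;
    server myapp-user-service.myapp.svc.cluster.local:80;
  }
  server {
    # the port the combined Service and the probes target
    listen 8080;
    location / {
      # every path, including /services/about, is proxied to one of the upstreams
      proxy_pass http://backend;
    }
  }
}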
HorizontalPodAutoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ${K8S_NAMESPACE}-autoscaler
  namespace: ${K8S_NAMESPACE}
spec:
  maxReplicas: 5
  metrics:
    - resource:
        name: cpu
        target:
          averageUtilization: 250
          type: Utilization
      type: Resource
    - resource:
        name: memory
        target:
          averageUtilization: 80
          type: Utilization
      type: Resource
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ${K8S_NAMESPACE}-deployment
My new issue is that the health check on /services/about now fails with a 401. When I remove the health check and open /services/about myself, a login prompt pops up. The app itself doesn’t have any authentication implemented. How can I fix this, or is there an easier way to do this?