I deployed a single-node minikube cluster with minikube start --vm-driver=none,
meaning Kubernetes runs directly on the host (which in my case is a VM) rather than inside a nested VM. I then deployed my own custom scheduler, and after running it I am seeing this error:
k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://172.2.X.X:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": x509: certificate signed by unknown authority
I looked into this issue, which describes roughly the same problem, but its solution did not work for me. I assume the error means the kube API server is not able to verify/authorize the request coming from my scheduler (?), but I barely changed the default kube-scheduler.yaml
file — essentially I just swapped in my image — yet the error only appears when I run my scheduler. The kube-scheduler.yaml is:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=false
    - --master=https://172.22.174.151:8443
    - --config=/etc/kubernetes/pid-config.yaml
    - --v=4
    image: localhost:5000/scheduler-plugins/kube-scheduler:latest
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/kubernetes/pid-config.yaml
      name: pid-config
      readOnly: true
    env:
    - name: KUBERNETES_MASTER
      value: "https://172.2.X.X:8443"
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /etc/kubernetes/pid-config.yaml
      type: File
    name: pid-config
status: {}
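One thing I can check in a --vm-driver=none setup is whether the CA embedded in /etc/kubernetes/scheduler.conf actually matches the cluster CA that minikube generated (/var/lib/minikube/certs/ca.crt): if the scheduler image ships its own kubeconfig, or the conf was regenerated, the client-go informers would reject the server certificate exactly as in the error above. A minimal sketch, assuming the default minikube file layout:

```shell
# Pull the base64-encoded CA out of the scheduler kubeconfig and compare it
# with the CA file minikube generated for the cluster.
grep 'certificate-authority-data' /etc/kubernetes/scheduler.conf \
  | awk '{print $2}' | base64 -d > /tmp/scheduler-ca.crt

# Identical files mean the kubeconfig trusts the right CA; a diff here
# would explain "x509: certificate signed by unknown authority".
diff /tmp/scheduler-ca.crt /var/lib/minikube/certs/ca.crt \
  && echo "scheduler.conf trusts the cluster CA"
```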
But when I curl the API server directly with the cluster CA, the connection does work:
curl --cacert /var/lib/minikube/certs/ca.crt -H "Authorization: Bearer $TOKEN" https://172.2.X.X:8443/api/v1/nodes -v
* Trying 172.2.X.X:8443...
* Connected to 172.2.X.X (172.2.X.X) port 8443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* CAfile: /var/lib/minikube/certs/ca.crt
* CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=system:masters; CN=minikube
* start date: Apr 28 10:05:25 2024 GMT
* expire date: Apr 29 10:05:25 2027 GMT
* subjectAltName: host "172.22.174.151" matched cert's IP address!
* issuer: CN=minikubeCA
* SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* Using Stream ID: 1 (easy handle 0x60b3af5afeb0)
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET /api/v1/nodes HTTP/2
> Host: 172.2.X.X:8443
> user-agent: curl/7.81.0
> accept: */*
> authorization: Bearer
>
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
< HTTP/2 403
< audit-id: a5478c3b-3e6d-4139-9f2a-0219b9834622
< cache-control: no-cache, private
< content-type: application/json
< x-content-type-options: nosniff
< x-kubernetes-pf-flowschema-uid: 12d224eb-22db-4478-8338-bbda45f8da9f
< x-kubernetes-pf-prioritylevel-uid: 3c8f01f4-c7e2-4dbe-bf72-29ad82e99dfc
< content-length: 297
< date: Wed, 01 May 2024 14:23:42 GMT
<
* TLSv1.2 (IN), TLS header, Supplemental data (23):
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "nodes is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope",
"reason": "Forbidden",
"details": {
"kind": "nodes"
},
"code": 403
}
* Connection #0 to host 172.2.X.X left intact
Has anyone faced this error before, or does anyone have a clue about the right direction to look in?
Thank you.