It seems that Prometheus cannot scrape Linkerd proxy data. When I performed a linkerd viz check of the proxies, I found the following:
linkerd-viz-data-plane
----------------------
√ data plane namespace exists
√ prometheus is authorized to scrape data plane pods
‼ data plane proxy metrics are present in Prometheus
Data plane metrics not found for ...
I do not know how to debug and correct it. I’d appreciate your suggestions.
BACKGROUND:
Our cluster has Linkerd2 deployed, and proxy injection is working for all pods in a set of namespaces. I have deployed Bitnami's Grafana Operator, Kube Prometheus, and Thanos to our k8s cluster; they have not been added to Linkerd's mesh, and all of them are working properly.
My next step is to configure Prometheus to scrape Linkerd. When I deployed linkerd viz, I used the following command:
linkerd viz install \
  --set prometheusUrl="http://prometheus-kube-prometheus-prometheus.monitoring.svc.cluster.local:9090",prometheus.enabled=false \
  | kubectl apply -f -
Although I do not find any errors in the logs, Prometheus does not seem to be scraping data from Linkerd.
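For reference, here is how I checked whether any proxy metrics are reaching Prometheus at all (a sketch: the service name and port are the ones from my Kube Prometheus install and the prometheusUrl above; `request_total` is one of the standard linkerd-proxy metrics):

```shell
# Port-forward the Kube Prometheus service locally
# (service name/port assumed from the prometheusUrl above).
kubectl -n monitoring port-forward \
  svc/prometheus-kube-prometheus-prometheus 9090:9090 &

# Query for any Linkerd proxy metric; an empty result list
# confirms that nothing is being scraped from the proxies.
curl -s 'http://localhost:9090/api/v1/query?query=request_total'
```

The query returns an empty result set, which matches what `linkerd viz stat` shows.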
linkerd viz stat deployments -n linkerd
NAME MESHED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 TCP_CONN
linkerd-destination 1/1 - - - - - -
linkerd-identity 1/1 - - - - - -
linkerd-proxy-injector 1/1 - - - - - -
When I performed a linkerd viz check of the proxies, I found a warning, but do not know how to correct it.
linkerd viz check --proxy
linkerd-viz
-----------
√ linkerd-viz Namespace exists
√ can initialize the client
√ linkerd-viz ClusterRoles exist
√ linkerd-viz ClusterRoleBindings exist
√ tap API server has valid cert
√ tap API server cert is valid for at least 60 days
√ tap API service is running
√ linkerd-viz pods are injected
√ viz extension pods are running
√ viz extension proxies are healthy
√ viz extension proxies are up-to-date
√ viz extension proxies and cli versions match
√ viz extension self-check
linkerd-viz-data-plane
----------------------
√ data plane namespace exists
√ prometheus is authorized to scrape data plane pods
‼ data plane proxy metrics are present in Prometheus
Data plane metrics not found for
emissary/emissary-ingress-545dfbfc5c-kngtb, abc/queue-service-6dbfd886c9-fhwps, linkerd-viz/metrics-api-7f577b5599-588kg, linkerd/linkerd-identity-86c4d9488c-6zwr2, emissary/emissary-ingress-agent-5846fb5848-ckhvs, emissary/emissary-ingress-545dfbfc5c-wkszb, linkerd/linkerd-destination-68bb4d99c8-hnvbw, linkerd-viz/tap-87b5df489-hzz68, linkerd-viz/web-667b9f46b8-t85hf, emissary/emissary-ingress-545dfbfc5c-zrqp6.
see https://linkerd.io/2/checks/#l5d-data-plane-prom for hints
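To separate "the proxies are not exposing metrics" from "Prometheus is not scraping them", I also pulled metrics straight from one of the proxies listed above (the pod name is taken from the check output; 4191 is the proxy's admin port):

```shell
# Ask one proxy for its metrics via the linkerd CLI
# (pod name from the check output above).
linkerd diagnostics proxy-metrics -n linkerd \
  po/linkerd-identity-86c4d9488c-6zwr2 | head

# Or hit the proxy admin port directly:
kubectl -n linkerd port-forward \
  linkerd-identity-86c4d9488c-6zwr2 4191:4191 &
curl -s http://localhost:4191/metrics | head
```

Both return metrics, so the proxies themselves look healthy; the gap appears to be on the Prometheus side.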
Finally, here is the scraping configuration that I applied to Prometheus via its Helm installation. I copied this from the Prometheus UI under Status -> Configuration:
- job_name: linkerd
  honor_labels: true
  honor_timestamps: true
  track_timestamps_staleness: false
  params:
    match[]:
    - '{job="linkerd-proxy"}'
    - '{job="linkerd-controller"}'
  scrape_interval: 30s
  scrape_timeout: 10s
  scrape_protocols:
  - OpenMetricsText1.0.0
  - OpenMetricsText0.0.1
  - PrometheusText0.0.4
  metrics_path: /federate
  scheme: http
  enable_compression: true
  follow_redirects: true
  enable_http2: true
  http_headers: null
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_container_name]
    separator: ;
    regex: ^prometheus$
    replacement: $1
    action: keep
  kubernetes_sd_configs:
  - role: pod
    kubeconfig_file: ""
    follow_redirects: true
    enable_http2: true
    http_headers: null
    namespaces:
      own_namespace: false
      names:
      - linkerd-viz
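I notice that this job scrapes /federate and keeps only pods whose container is named prometheus in the linkerd-viz namespace, i.e. it federates from a viz-local Prometheus, which I disabled with prometheus.enabled=false. I wonder whether I instead need a job that scrapes the proxy pods directly, something like the following sketch (adapted from Linkerd's "bring your own Prometheus" docs; I have not verified it against my cluster):

```yaml
- job_name: 'linkerd-proxy'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Keep only linkerd-proxy containers, on their linkerd-admin
  # port, injected by the control plane in the linkerd namespace.
  - source_labels:
    - __meta_kubernetes_pod_container_name
    - __meta_kubernetes_pod_container_port_name
    - __meta_kubernetes_pod_label_linkerd_io_control_plane_ns
    action: keep
    regex: ^linkerd-proxy;linkerd-admin;linkerd$
  # Carry the namespace and pod name into the metric labels.
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod
```

Is that the right direction, or is there a way to make the federation job above work without the viz Prometheus?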