I am using Azure Managed Prometheus to scrape metrics from the ingress-nginx application in my AKS cluster. Below is the scrape config I have defined:
prometheusConfig:
  global:
    scrape_interval: 30s
    scrape_timeout: 10s
  scrapeConfig:
    - job_name: 'ingress-nginx-endpoints'
      kubernetes_sd_configs:
        - role: pod
          namespaces:
            names:
              - ingress-nginx
      relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
        - source_labels: [__meta_kubernetes_service_name]
          regex: prometheus-server
          action: drop
        - action: labelmap
          regex: __meta_kubernetes_pod_annotation_prometheus_io_param_(.+)
          replacement: __param_$1
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: pod
        - source_labels: [__meta_kubernetes_pod_phase]
          regex: Pending|Succeeded|Failed|Completed
          action: drop
        - source_labels: [__meta_kubernetes_pod_node_name]
          action: replace
          target_label: node
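For reference, the keep and replace rules above rely on the standard prometheus.io pod annotations, so the controller pod is expected to carry something along these lines (a sketch; the actual port and path values depend on how ingress-nginx is deployed):

metadata:
  annotations:
    # matched by the keep rule on ..._prometheus_io_scrape
    prometheus.io/scrape: "true"
    # used to rewrite __scheme__, __address__ and __metrics_path__
    prometheus.io/scheme: "http"
    prometheus.io/port: "10254"
    prometheus.io/path: "/metrics"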
I am querying Prometheus continuously with the query sum(rate(nginx_ingress_controller_requests{service="${appName}"}[2m])), but sometimes I get the response below instead of any data:
{
"status": "success",
"data": {
"resultType": "vector",
"result": []
}
}
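For context, the query is sent to the standard Prometheus /api/v1/query HTTP API of the Azure Monitor workspace, roughly like this (a sketch; the endpoint and bearer token are placeholders for my actual workspace values):

# substitute the real workspace query endpoint and a valid token
curl -G \
  -H "Authorization: Bearer $TOKEN" \
  --data-urlencode 'query=sum(rate(nginx_ingress_controller_requests{service="my-app"}[2m]))' \
  "https://<workspace>.<region>.prometheus.monitor.azure.com/api/v1/query"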
Sometimes I do get results, but it is not clear to me why data is not returned consistently.
I tried reducing the scrape_interval and increasing the scrape_timeout, but with no luck. I want to understand clearly what the problem is here.