I have a Java service deployed in Kubernetes with an OpenTelemetry Collector attached as a sidecar to export the metrics from the application.
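The collector receives the metrics over OTLP from the Java agent and exposes them through the Prometheus exporter. Simplified, the relevant part of my collector config looks roughly like this (ports and the batch processor are illustrative, not my exact values):

```yaml
# Simplified sketch of the collector pipeline; endpoints/ports are illustrative.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```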
I have configured prometheus to scrape the pod via a ServiceMonitor in the same namespace.
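The ServiceMonitor itself is nothing special; roughly like this (selector labels and port name are illustrative):

```yaml
# Simplified sketch of the ServiceMonitor; selector labels and port name are illustrative.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: events-collector
  namespace: platform
spec:
  selector:
    matchLabels:
      app: events-collector
  endpoints:
    - port: metrics
      interval: 30s
```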
Everything seems to work, but when I look at the collector logs I see the following error:
“metric XXX was collected before with the same name and label values”
2024-05-30T14:42:12.390Z error [email protected]/log.go:23 error gathering metrics: collected metric "http_client_duration" { label:{name:"container_id" value:"9fc9d223e2e87e1ac60e3c02fe8430f865fbb479adb0f8540cbd1aafaf5fd721"} label:{name:"host_arch" value:"amd64"} label:{name:"host_name" value:"events-collector-57b85cc576-vh7br"} label:{name:"http_method" value:"GET"} label:{name:"http_status_code" value:"200"} label:{name:"job" value:"events-collector"} label:{name:"k8s_container_name" value:"events-collector"} label:{name:"k8s_deployment_name" value:"events-collector"} label:{name:"k8s_namespace_name" value:"platform"} label:{name:"k8s_node_name" value:"aks-aerasvc3-35271952-vmss000004"} label:{name:"k8s_pod_name" value:"events-collector-57b85cc576-vh7br"} label:{name:"k8s_replicaset_name" value:"events-collector-57b85cc576"} label:{name:"net_peer_name" value:"config-server-svc.central-services"} label:{name:"net_protocol_name" value:"http"} label:{name:"net_protocol_version" value:"1.1"} label:{name:"os_description" value:"Linux 5.15.0-1051-azure"} label:{name:"os_type" value:"linux"} label:{name:"service_name" value:"events-collector"} label:{name:"service_version" value:"2.6.0-main-b141"} label:{name:"telemetry_auto_version" value:"1.32.1"} label:{name:"telemetry_sdk_language" value:"java"} label:{name:"telemetry_sdk_name" value:"opentelemetry"} label:{name:"telemetry_sdk_version" value:"1.34.1"} histogram:{sample_count:1 sample_sum:117.409334 bucket:{cumulative_count:0 upper_bound:0} bucket:{cumulative_count:0 upper_bound:5} bucket:{cumulative_count:0 upper_bound:10} bucket:{cumulative_count:0 upper_bound:25} bucket:{cumulative_count:0 upper_bound:50} bucket:{cumulative_count:0 upper_bound:75} bucket:{cumulative_count:0 upper_bound:100} bucket:{cumulative_count:1 upper_bound:250} bucket:{cumulative_count:1 upper_bound:500} bucket:{cumulative_count:1 upper_bound:750} bucket:{cumulative_count:1 upper_bound:1000} bucket:{cumulative_count:1 upper_bound:2500} bucket:{cumulative_count:1 upper_bound:5000} bucket:{cumulative_count:1 upper_bound:7500} bucket:{cumulative_count:1 upper_bound:10000}}} was collected before with the same name and label values
{"kind": "exporter", "data_type": "metrics", "name": "prometheus"}
github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter.(*promLogger).Println
github.com/open-telemetry/opentelemetry-collector-contrib/exporter/[email protected]/log.go:23
github.com/prometheus/client_golang/prometheus/promhttp.HandlerForTransactional.func1
github.com/prometheus/[email protected]/prometheus/promhttp/http.go:144
net/http.HandlerFunc.ServeHTTP
net/http/server.go:2136
net/http.(*ServeMux).ServeHTTP
net/http/server.go:2514
go.opentelemetry.io/collector/config/confighttp.(*decompressor).ServeHTTP
go.opentelemetry.io/collector/config/[email protected]/compression.go:160
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*middleware).serveHTTP
go.opentelemetry.io/contrib/instrumentation/net/http/[email protected]/handler.go:225
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.NewMiddleware.func1.1
go.opentelemetry.io/contrib/instrumentation/net/http/[email protected]/handler.go:83
net/http.HandlerFunc.ServeHTTP
net/http/server.go:2136
go.opentelemetry.io/collector/config/confighttp.(*clientInfoHandler).ServeHTTP
go.opentelemetry.io/collector/config/[email protected]/clientinfohandler.go:26
net/http.serverHandler.ServeHTTP
net/http/server.go:2938
net/http.(*conn).serve
net/http/server.go:2009
I have tried almost everything, but nothing seems to fix it. From the error it looks like a metric with the exact same name and label values was already collected in the same scrape, and because of that the exporter fails to collect it again.
Any idea where I should look? Maybe somewhere on the Prometheus side? Or is there a way to instruct Prometheus to ignore the duplicate and just proceed with the scrape?
Thanks for the help
Mircea