I observed the following cAdvisor memory metrics for a container deployed on Azure Kubernetes Service (AKS) and am struggling to come up with an explanation. The container runs a single .NET 6 program, and it is the only container in the pod.
Metric | Size |
---|---|
container_memory_usage_bytes | 3247980544 (3097 MB) |
container_memory_working_set_bytes | 3234648064 (3084 MB) |
container_memory_rss | 336674816 (321 MB) |
container_memory_cache | 14274560 (13 MB) |
Basically the total memory usage and WSS are both very high (~3 GB), but RSS and page cache usage are both quite low (~300 MB and ~13 MB respectively). These stats were more or less constant for a long time (i.e. they are not short-lived spikes).
My very crude understanding of these metrics and Linux's memory management is that RSS is basically the amount of memory actually allocated by the program (e.g. stacks, heaps and the other stuff typical app developers are familiar with). WSS should basically be RSS + active page cache (file cache that Linux considers to be in active use). And container_memory_cache is basically active page cache + inactive page cache (the inactive part being file cache that Linux considers inactive and would evict first under memory pressure).
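
For reference, here is my understanding of how cAdvisor derives these numbers from the cgroup memory files, written as a small sketch that someone can correct (it assumes cgroup v1 paths, which I haven't actually verified on the AKS nodes):

```python
#!/usr/bin/env python3
"""Rough sketch of how I believe cAdvisor computes the metrics above,
reading the cgroup v1 memory files from inside the container."""

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

def read_stat(path):
    stats = {}
    with open(path) as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    return stats

# container_memory_usage_bytes
usage = read_int("/sys/fs/cgroup/memory/memory.usage_in_bytes")
stat = read_stat("/sys/fs/cgroup/memory/memory.stat")

# As far as I can tell, cAdvisor reports the working set as usage minus
# inactive file pages, rather than as RSS + active cache.
working_set = usage - stat.get("total_inactive_file", 0)

print("usage        :", usage)
print("working_set  :", working_set)
print("rss          :", stat.get("total_rss"))      # container_memory_rss
print("cache        :", stat.get("total_cache"))    # container_memory_cache
print("inactive_file:", stat.get("total_inactive_file"))
```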
So I can't see how the WSS (~3 GB) can be so much larger than RSS + page cache (~321 MB + ~13 MB). What is likely taking up all the memory reported by the WSS metric? I read somewhere that WSS also includes kernel memory, but that surely can't account for ~2.7 GB? My program is a regular ASP.NET Core WebAPI, and both the container_memory_mapped_file and container_memory_swap metrics were zero throughout the period.
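
In case it matters, I was planning to check kernel memory directly from inside the container with something along these lines (again assuming cgroup v1; I don't know whether kmem accounting is even enabled on the nodes, so the file may be missing or read 0):

```python
#!/usr/bin/env python3
"""Sketch for checking how much kernel memory is charged to the container's
cgroup (cgroup v1 assumed; on cgroup v2 the slab/kernel figures live in
memory.stat instead)."""

def read_int(path):
    try:
        with open(path) as f:
            return int(f.read().strip())
    except FileNotFoundError:
        return None  # file absent, e.g. kmem accounting disabled or cgroup v2

kmem = read_int("/sys/fs/cgroup/memory/memory.kmem.usage_in_bytes")
print("kernel memory charged to the cgroup:", kmem)
```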
I'd appreciate it if someone could point out where my understanding is wrong and how these metrics might be explained. Thanks a lot!