For example, to determine an amount of memory that is safe to allocate when processing a file or device with this type of I/O loop:
HANDLE hFile /* = file opened with GENERIC_READ */;
LARGE_INTEGER liSize;
SIZE_T cbAlloc;
BYTE *pBuffer = NULL;

GetFileSizeEx(hFile, &liSize);
while (liSize.QuadPart)
{
    cbAlloc = min(ALLOC_MAX, liSize.QuadPart);
    /* Allocate pBuffer with cbAlloc bytes,
       use pBuffer, then free pBuffer... */
    liSize.QuadPart -= cbAlloc;
}
Right now my ALLOC_MAX is a macro constant: 1 GB on 64-bit systems and 128 MB on 32-bit systems. However, while researching a way to determine a safe amount to allocate dynamically, e.g. with GlobalMemoryStatusEx, my next question is: how does this information (including the total physical memory available) even help me determine the right value for ALLOC_MAX? It would not be wise to allocate the exact amount of available memory, would it? What is a reasonable amount to try to allocate: three-quarters of the available amount? Half of it?
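For reference, the sketch below is roughly what I had in mind with GlobalMemoryStatusEx; the one-quarter fraction, the 1 MB floor and the 64 MB fallback are just guesses on my part, which is exactly what I am unsure about:

#include <windows.h>

/* Hypothetical helper: derive a chunk-size cap from the physical memory
   that is currently available. The fraction, floor and fallback values
   are arbitrary guesses, not recommendations. */
static SIZE_T ChooseAllocMax(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);

    if (!GlobalMemoryStatusEx(&ms))
        return 64 * 1024 * 1024;                /* fallback if the query fails */

    ULONGLONG cap = ms.ullAvailPhys / 4;        /* never target all available memory */

    if (cap < 1024 * 1024)
        cap = 1024 * 1024;                      /* floor: 1 MB */
#ifndef _WIN64
    if (cap > 128ULL * 1024 * 1024)
        cap = 128ULL * 1024 * 1024;             /* 32-bit: leave address-space headroom */
#endif
    return (SIZE_T)cap;
}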
You should consider

- which kind of target system(s) you are aiming for, and how much real memory they will have at minimum;
- what other kinds of programs and processes will run on the same machine at the same time, or whether your program runs on a box dedicated exclusively to your process;
- what you are hoping to gain from allocating 1 GB in one go: will there be any significant performance difference with a much smaller buffer, let's say 1 KB, 10 KB, maybe 100 KB? (And don't guess, measure!)
- why ALLOC_MAX is a constant. Depending on the context this choice might be fine, but maybe you want the memory consumption of the program to be configurable, for example to adapt it to the specific machine or environment where the process will be executed.

The ideal choice for ALLOC_MAX is the smallest one that gives you acceptable performance. If you make it configurable, it will be easier to run benchmarks in different environments without having to recompile the program; a sketch of one way to do that follows.
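A minimal sketch of making the chunk size configurable via an environment variable; the variable name MYAPP_ALLOC_MAX and the default value are made up for illustration:

#include <windows.h>
#include <stdlib.h>

#define ALLOC_MAX_DEFAULT (64 * 1024 * 1024)    /* assumed default, tune as needed */

/* Read the chunk size from an environment variable so it can be changed
   per machine or per run without recompiling; fall back to the default. */
static SIZE_T GetConfiguredAllocMax(void)
{
    char buf[32];
    DWORD n = GetEnvironmentVariableA("MYAPP_ALLOC_MAX", buf, sizeof(buf));

    if (n == 0 || n >= sizeof(buf))
        return ALLOC_MAX_DEFAULT;               /* unset, empty, or too long */

    unsigned __int64 v = _strtoui64(buf, NULL, 10);
    return v != 0 ? (SIZE_T)v : ALLOC_MAX_DEFAULT;
}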
You measure. You hope that one 1 GB read is faster than 1000 1 MB reads. But you are missing that you are using memory that might be needed elsewhere (so another app on your computer starts swapping), and that your data no longer fits into the L2 cache, which can make a massive difference. So there are plenty of reasons why a big size might be slower.
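For instance, one rough way to measure this (a sketch only; HeapAlloc, ReadFile and QueryPerformanceCounter are one possible choice of tools, not something prescribed by the comment above) is to time a full sequential read of the file with different chunk sizes and compare:

#include <windows.h>

/* Time one sequential pass over an already-open file using cbChunk-sized
   reads; returns elapsed seconds, or -1.0 on allocation failure.
   Call it with e.g. 64 KB, 1 MB and 64 MB and compare the results.
   The (DWORD) cast assumes cbChunk is below 4 GB. */
static double TimeSequentialRead(HANDLE hFile, SIZE_T cbChunk)
{
    LARGE_INTEGER freq, t0, t1, liZero;
    BYTE *pBuffer = (BYTE *)HeapAlloc(GetProcessHeap(), 0, cbChunk);
    DWORD cbRead;

    if (pBuffer == NULL)
        return -1.0;

    liZero.QuadPart = 0;
    SetFilePointerEx(hFile, liZero, NULL, FILE_BEGIN);  /* rewind to the start */
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);

    while (ReadFile(hFile, pBuffer, (DWORD)cbChunk, &cbRead, NULL) && cbRead != 0)
        ;                                               /* read until EOF */

    QueryPerformanceCounter(&t1);
    HeapFree(GetProcessHeap(), 0, pBuffer);

    return (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
}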