From what I've found online, the two most common policies used for cache access are:
- Cache aside
- Read through + Write through
But there are also other policies, like:
- For the write-hit case:
  - Write through
  - Write back (write behind)
- For the write-miss case:
  - Write allocate
  - No write allocate (write around)
My questions are:
- Why are cache aside and read through + write through the most commonly used?
- Does cache aside refer only to the cache read policy (i.e. only the read case, comparable to read through)? Is it possible to combine cache aside with another write policy, e.g. cache aside + write through? I.e. when the application has a write miss, it writes to both the cache and the data store.
Reference:
- Read-through/Write-through vs Cache-Aside
- Write policies
The policies you’re citing are primarily concerned with which component has the responsibility for maintaining the cache and for deciding when and how to read data from main memory.
In Cache Aside, the application assumes that responsibility. The application checks the cache to see if it holds the data it needs. If the cache doesn’t contain the required data, the application reads the data from main memory and copies it into the cache.
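A minimal sketch of cache-aside reads, assuming plain dicts stand in for the cache and the backing store (`data_store` and `cache` are illustrative names, not a real API):

```python
data_store = {"user:1": "Alice"}   # stand-in for the backing store
cache = {}                         # stand-in for the cache (e.g. Redis)

def get(key):
    value = cache.get(key)
    if value is None:              # cache miss:
        value = data_store[key]    #   the *application* reads the store
        cache[key] = value         #   and populates the cache itself
    return value

get("user:1")   # miss: reads the store, fills the cache
get("user:1")   # hit: served from the cache
```

The point is that the cache is passive here; all the miss-handling logic lives in the application.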
In Read-through/Write-through caching, the application communicates with the cache directly. It is the cache’s responsibility to determine whether or not it already contains the data. If the cache doesn’t contain the data, it will get the data from main memory, retain a copy of the data, and return the data to the application.
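By contrast, a read-through cache can be sketched as a wrapper that fetches on miss by itself; `loader` is a hypothetical callable standing in for main-memory or database access:

```python
class ReadThroughCache:
    """The application calls get(); the cache handles misses itself."""

    def __init__(self, loader):
        self.loader = loader   # how the cache reaches the backing store
        self.entries = {}

    def get(self, key):
        if key not in self.entries:              # miss: the *cache*
            self.entries[key] = self.loader(key) # fetches and retains a copy
        return self.entries[key]

backing = {"user:1": "Alice"}
cache = ReadThroughCache(backing.__getitem__)
cache.get("user:1")   # the cache loads from the backing store itself
```

The application never touches the backing store directly, which is exactly what lets the cache see every access.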
There are two ways you can write data: write-through and write-behind. Write-through means that the data is written to main memory immediately. It doesn’t provide any speed improvement, but it is safer. Write-behind means that data is written to the cache, but only written back to main memory after a certain period of time, or when the block is evicted from the cache.
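The two write policies can be contrasted in one sketch, again with dicts standing in for cache and main memory; `flush()` is a hypothetical method modeling the delayed write-back (periodic sync or eviction):

```python
class WriteCache:
    def __init__(self, store, write_through=True):
        self.store = store           # stand-in for main memory
        self.write_through = write_through
        self.cache = {}
        self.dirty = set()           # keys not yet written back

    def put(self, key, value):
        self.cache[key] = value
        if self.write_through:
            self.store[key] = value  # write-through: store updated immediately
        else:
            self.dirty.add(key)      # write-behind: defer the store write

    def flush(self):
        # write-behind catches up here, e.g. on a timer or on eviction
        for key in self.dirty:
            self.store[key] = self.cache[key]
        self.dirty.clear()
```

With write-through the store is never stale; with write-behind the store lags until `flush()` runs, which is the safety trade-off mentioned above.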
If you reflect on this a bit, you will see that there are advantages to dealing directly with the cache, and letting it have the responsibility of reading and writing main memory. If all memory access goes through the cache, the cache can guarantee data integrity, because it has knowledge of every main memory access. In other words, there is no possibility of getting stale data.
As far as I can tell, there is no issue of popularity. Each approach to caching has its advantages and disadvantages. If there is a strategy that is observed more often than the others, it is because that strategy more commonly satisfies the programmer’s objectives.