I am new to caching and trying to understand how it works in general. Below is a code snippet from the ServiceStack website.
public object Get(CachedCustomers request)
{
    //Manually create the Uniform Resource Name "urn:customers".
    return base.RequestContext.ToOptimizedResultUsingCache(base.Cache, "urn:customers", () =>
    {
        //Resolve the service in order to get the customers.
        using (var service = this.ResolveService<CustomersService>())
            return service.Get(new Customers());
    });
}

public object Get(CachedCustomerDetails request)
{
    //Create the Uniform Resource Name "urn:customerdetails:{id}".
    var cacheKey = UrnId.Create<CustomerDetails>(request.Id);
    return base.RequestContext.ToOptimizedResultUsingCache(base.Cache, cacheKey, () =>
    {
        using (var service = this.ResolveService<CustomerDetailsService>())
        {
            return service.Get(new CustomerDetails { Id = request.Id });
        }
    });
}
My doubts:
- I've read that cached data is stored in RAM on the same server or on a distributed server. How much data can it handle? For example, in the first method, if the customer count is more than 1 million, doesn't the cache occupy too much memory?
- In general, do we apply caching only to GET operations and invalidate the cache when the data is updated?
- Please suggest a tool to check the memory consumption of the cache.
Please read ServiceStack’s wiki page on Caching.
How much memory gets used is entirely determined by the caching provider you use. For example, if you use a distributed cache like Redis or Memcached, the memory of that daemon/service's process will grow to retain the cached data. If you use ServiceStack's MemoryCacheClient (the default), the caches are just kept in memory, i.e. in the ASP.NET w3wp.exe process.
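To make the provider choice concrete, here is a minimal sketch of how a cache provider is registered in a ServiceStack AppHost. The Redis address `"localhost:6379"` is a placeholder; this assumes the classic ServiceStack/ServiceStack.Redis registration API:

```csharp
public override void Configure(Funq.Container container)
{
    // In-process cache: entries live in the ASP.NET worker process (w3wp.exe).
    // This is what you get by default if no ICacheClient is registered.
    container.Register<ICacheClient>(new MemoryCacheClient());

    // Alternatively, a distributed Redis cache: entries live in the
    // redis-server process instead, so your web process memory stays flat.
    //container.Register<IRedisClientsManager>(
    //    new PooledRedisClientManager("localhost:6379"));
    //container.Register<ICacheClient>(c =>
    //    (ICacheClient)c.Resolve<IRedisClientsManager>().GetCacheClient());
}
```

Either way, the service code in your snippet is unchanged — `base.Cache` resolves to whichever `ICacheClient` is registered.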
How big the cache is, is directly affected by what you cache.
The caching strategy you choose is independent of the caching provider. The example provided uses a caching pattern that lazily creates the cache entry when it is first fetched, and invalidates it when the underlying data changes.