Primer: We have a mobile app served by an API written in PHP. The main point of the app is to display products from a large items table in the database, in a multitude of different configurations: categories, suggested items, user-compiled feeds, “similar to” feeds. Most, if not all, of the display data comes from that single table; the filtering and sorting is really what the app does.
Some feeds are fairly simple, such as “get all items that are in one of these categories” via an item-to-category link table. Others are more complex, using fulltext search to filter alongside category and stock filters, and sometimes we even require an additional filter such as “exclude items from any category that doesn’t have at least 10 matching items in that category” (a sort of self-regulating category filter invalidation).
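For reference, that last rule is roughly what I mean. A simplified sketch, assuming `items` and `item_category` tables (the names are guesses) and ignoring the feed’s other filters for brevity:

```php
// $pdo is an existing PDO connection to the database.
// Keep only items whose category has at least 10 members overall;
// in reality the subquery would repeat the feed's other filters too.
$sql = "
    SELECT DISTINCT i.*
    FROM items i
    JOIN item_category ic ON ic.item_id = i.id
    WHERE ic.category_id IN (
        SELECT category_id
        FROM item_category
        GROUP BY category_id
        HAVING COUNT(*) >= 10
    )
";
$items = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
```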
We are now in the process of trying to speed up the API and looking to put the tech in place to allow us to scale. We’ve already decided on certain things, like separate search engines (i.e. Elasticsearch as a fulltext replacement, etc.). However, one thing I’m a little “new” to is caching the API feeds.
My only caching experience has been either caching full webpage output in things like Magento/WordPress, or caching complex objects built from several queries and processes.
I’ve decided object caching probably isn’t suitable here, as most of the final object data comes from a single table, so object caching would not provide much of a speed improvement.
I’m currently figuring I need to cache the final output collection of objects, as it’s really the complex filtering and sorting that slows everything down. But that’s where I get stuck…
First of all, should we cache the entire collection of objects, or just store a collection of the object IDs and still go to the database for the final data lookup? (My theory being that, with all the different combinations of feeds, memory usage would grow quite quickly if we cached the objects themselves.)
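To illustrate the ID-only option, something like this is what I have in mind. A rough sketch assuming a phpredis client and PDO; the key name, TTL and the `$buildIds` callback are all placeholders:

```php
// Fetch a feed, caching only the ordered list of item IDs.
// $buildIds runs the expensive filter/sort query and returns int[].
function getFeedItems(Redis $redis, PDO $db, string $feedKey, callable $buildIds): array
{
    $cached = $redis->get($feedKey);
    if ($cached !== false) {
        $ids = json_decode($cached, true);
    } else {
        $ids = $buildIds();                              // expensive work happens once
        $redis->setex($feedKey, 300, json_encode($ids)); // 5-minute TTL, tune as needed
    }

    if ($ids === []) {
        return [];
    }

    // Cheap primary-key lookup for the actual row data.
    $placeholders = implode(',', array_fill(0, count($ids), '?'));
    $stmt = $db->prepare("SELECT * FROM items WHERE id IN ($placeholders)");
    $stmt->execute($ids);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    // Restore the cached ordering, which IN() does not preserve.
    $byId = array_column($rows, null, 'id');
    return array_values(array_filter(array_map(fn($id) => $byId[$id] ?? null, $ids)));
}
```

The upside, as I see it, is that each cache entry is just a few kilobytes of integers, and the row data itself is always fresh from the database.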
Secondly, do we cache the very final output of the feed, or do we try to find a common point at which we can create a cached copy of the feed that isn’t quite the final result? For example, rather than caching “similar products to X where the similar products are in categories 1, 2, 3”, we would instead cache “similar products to X”, then on each request get that collection from cache and do the additional filtering manually after retrieving the “base collection”.
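In other words, something like this. Again just a sketch: `computeSimilarProducts` and the key scheme are made up, and it assumes each cached item carries its category IDs:

```php
// Cache the broad "similar products to X" collection once, then narrow it
// per request in PHP according to the caller's category filter.
function getSimilarProducts(Redis $redis, int $productId, array $categoryIds): array
{
    $key = "similar:$productId";
    $cached = $redis->get($key);

    if ($cached !== false) {
        $base = json_decode($cached, true);
    } else {
        $base = computeSimilarProducts($productId); // the expensive similarity logic
        $redis->setex($key, 600, json_encode($base));
    }

    if ($categoryIds === []) {
        return $base;
    }

    // Cheap in-memory narrowing; each item includes a 'category_ids' array.
    return array_values(array_filter(
        $base,
        fn(array $item) => array_intersect($item['category_ids'], $categoryIds) !== []
    ));
}
```

The appeal is that one cache entry serves many filter combinations, at the cost of filtering work on every request.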
Hopefully that all makes some sort of sense, and apologies if it’s a little rambly. I’m a little out of my depth with this one, but I’m keen to learn. No doubt whatever solution I implement first will need many, many iterations before we deem it “correct”, but at the moment I just don’t have the experience to give myself a suitable starting point for development.
Things like this tend to depend heavily on both data structures and application design. As with performance optimization in general, there is no ultimate answer that fits all cases. The best advice one can probably give you is to find out what the exact bottlenecks are. Analyze that data and decide on a caching strategy that is most likely to avoid those problems. Implement it as a proof of concept, then measure again. Repeat this until you are satisfied with the results.
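Even crude timers around each stage of the feed build will tell you whether the database query or the PHP-side processing is the real cost. A minimal sketch, where `buildBaseCollection` and `applyFilters` stand in for your own code:

```php
// Time each stage of building a feed so the slow part is visible
// before committing to any particular caching strategy.
$timings = [];

$start = microtime(true);
$items = buildBaseCollection($feedParams);
$timings['base_query'] = microtime(true) - $start;

$start = microtime(true);
$items = applyFilters($items, $feedParams);
$timings['filtering'] = microtime(true) - $start;

error_log('feed timings: ' . json_encode($timings));
```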
And most important: try it with a fairly representative amount of data, close to what you expect later in production. Things usually run fast when only 10 database records are involved, no matter what you code. That is no longer true with 10,000 (or 10 million) records.
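A throwaway seeding script is usually enough to get there. A sketch assuming an existing PDO connection and guessed column names:

```php
// Insert a production-like number of rows for testing; adjust the
// columns to your actual items schema.
$stmt = $db->prepare("INSERT INTO items (name, in_stock, price) VALUES (?, ?, ?)");

$db->beginTransaction();
for ($i = 1; $i <= 100000; $i++) {
    $stmt->execute(["Item $i", rand(0, 1), rand(100, 99999) / 100]);
    if ($i % 1000 === 0) {   // commit in batches so the import stays fast
        $db->commit();
        $db->beginTransaction();
    }
}
$db->commit();
```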