Which is the better approach for performance and optimal resource utilization: accessing the database multiple times through AJAX to get only the exact information needed, when it is needed, or performing one access that retrieves an object holding all the information that might be needed, with a high probability that not all of it will actually be used?
I know how to benchmark the individual queries, but I don’t know how to evaluate which option is best for database performance when thousands of users are accessing the database simultaneously, or how connection pooling comes into play.
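For concreteness, here is a minimal sketch of the two approaches being compared, using the Fetch API; the endpoint paths and query parameters are made up for illustration and are not part of the question:

```typescript
// Approach A: several small requests, each fetching only what is needed right now.
async function loadProfileSummary(userId: number): Promise<unknown> {
  const res = await fetch(`/api/users/${userId}/summary`);
  return res.json();
}

async function loadOrderHistory(userId: number): Promise<unknown> {
  const res = await fetch(`/api/users/${userId}/orders`);
  return res.json();
}

// Approach B: one request that returns everything the page might ever need,
// even though much of it may go unused.
async function loadFullUser(userId: number): Promise<unknown> {
  const res = await fetch(`/api/users/${userId}?include=orders,preferences,activity`);
  return res.json();
}
```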
There is no one correct answer to this; like any optimization it depends heavily on context / usage.
However, consider the following as a rule of thumb:
x (volatility):
  +: data is stable / static
  -: data is dynamic / volatile

y (usage):
  +: data is frequently used
  -: data is infrequently used

Combining them (first symbol is x, second is y):

++: fetch large chunks in the fewest number of fetches, and persist the data
    as long as possible within your tolerance for staleness (see the caching
    sketch after this list).
+-: do what is expedient for the logic and usage; if it is convenient to
    fetch / calculate as needed, do so; if it is convenient to pre-fetch and
    persist, do so. Seek to optimize only if absolutely necessary.
-+: fetch / calculate as needed, but if optimization is required, consider
    pre-fetching or pre-calculating where possible, or negotiate a tolerance
    for less-than-real-time accuracy to reduce volatility.
--: fetch / calculate as needed, and don't worry about it further unless a
    specific case is unacceptably expensive; if so, see -+.
Remember the first rule of optimization: measure, don’t guess. Try both, instrument them with some sort of stopwatch code, and see what takes longer.
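One possible shape for that stopwatch code, as a sketch: a small wrapper around the standard `performance.now()` timer, where the labels and the functions passed in are whichever two variants you are comparing.

```typescript
// Time an async operation and log how long it took.
async function timed<T>(label: string, work: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await work();
  } finally {
    console.log(`${label} took ${(performance.now() - start).toFixed(1)} ms`);
  }
}

// e.g. (loadPiecemeal and loadEverything are hypothetical):
//   await timed("many small fetches", loadPiecemeal);
//   await timed("one big fetch", loadEverything);
```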
And also bear in mind the old joke that “there are only two difficult problems in computer science: cache invalidation and naming things well.” If you pull everything from the DB at once and hold it in memory, you have a cache. And now you have a new problem: any time anything changes anywhere in the system, the same change has to be made in two places: the database and the cache. If you have more than one server talking to the DB, or multiple APIs that let the server modify data, this can become very tricky very quickly.
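To make the “two places” problem concrete, here is a sketch of a write that must touch both the database and the in-memory cache; `db.updateUser` is a placeholder for whatever persistence API is actually in use.

```typescript
interface User { id: number; name: string; }

// Placeholder for the real persistence layer.
declare const db: { updateUser(u: User): Promise<void> };

// In-memory cache on this server.
const userCache = new Map<number, User>();

async function renameUser(user: User, newName: string): Promise<void> {
  const updated = { ...user, name: newName };
  await db.updateUser(updated);        // place 1: the database
  userCache.set(updated.id, updated);  // place 2: this server's cache; any other
                                       // server's cache is now stale until invalidated
}
```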
There is NO silver-bullet solution to this question. You need to TRY the possible trade-offs and tune your server(s) to get the best out of them.
First point: before making any improvements, you need to ESTABLISH your current performance benchmark, measure it, and treat it as the baseline against which you compare possible improvements.
Second point: track how the application is actually used by end users. Cutting down on returned data that end users do not need can save a lot of precious server resources. For example, there is no point in returning 5,000 records when users are only interested in the first 50.
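A minimal sketch of that idea: pass paging parameters through to the query instead of returning every row. The endpoint and query shape here are assumptions for illustration.

```typescript
// Only fetch the page of results the user is actually looking at.
async function getOrders(page = 1, pageSize = 50): Promise<unknown[]> {
  const offset = (page - 1) * pageSize;
  // Server-side this would translate to something like:
  //   SELECT ... ORDER BY created_at DESC LIMIT ? OFFSET ?
  const res = await fetch(`/api/orders?limit=${pageSize}&offset=${offset}`);
  return res.json();
}
```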
Third point: you have to understand the frequency of calls and their implications. For example, if most calls are lookup-value-table queries, you would probably want to build an infrastructure to cache those calls. In other words, if your data does not change frequently, consider caching. And of course, minimizing the number of calls should always help boost performance.
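As a sketch of caching such rarely changing lookup-value tables: load them once, keep them in memory, and re-fetch only on an explicit signal. The `/api/lookups` endpoint is an assumption for the example.

```typescript
// Lazily loaded, long-lived cache of lookup-value tables.
let lookups: Record<string, string[]> | null = null;

async function getLookups(): Promise<Record<string, string[]>> {
  if (lookups !== null) return lookups;               // served from memory
  const res = await fetch("/api/lookups");
  lookups = (await res.json()) as Record<string, string[]>;
  return lookups;
}

function invalidateLookups(): void {
  lookups = null;  // next call re-fetches, e.g. after an admin edits the tables
}
```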
Getting everything at once will give you better performance, unless “everything” includes things like BLOBs or similarly large data objects. The overhead of serializing data, moving it over the wire, and deserializing it on the other end is significant, with network latency being a big chunk of it. Memory is cheaper than network bandwidth, and will probably remain so for a while yet. Your only real answer will come from a benchmark, but if you’re just trying to gauge one approach against the other, that’s the way I’d lean.
If you’re making an architectural decision, then REST is one option. With REST, you request each resource separately, i.e. you don’t send one request to get two objects, because each object has its own URL.
The performance concern with doing it this way will probably be addressed when HTTP/2.0 comes out. Until then, you just optimise to make it as fast as possible. A lot of companies are doing it this way.
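A sketch of that REST style: each object has its own URL, so fetching two objects means two requests, which you can at least issue in parallel (and which HTTP/2 can multiplex over a single connection). The URLs here are hypothetical.

```typescript
// Fetch a single resource by its own URL, REST-style.
async function getResource<T>(url: string): Promise<T> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`GET ${url} failed: ${res.status}`);
  return res.json() as Promise<T>;
}

// Two objects, two requests, issued concurrently.
const [customer, invoice] = await Promise.all([
  getResource("/api/customers/42"),
  getResource("/api/invoices/1001"),
]);
```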