I am unsure about the scalability of the following model. I have little experience with large systems or high request volumes, but I’m trying to build these features with scalability in mind.
In my scenario there is a user page which contains data for:
- User’s details (name, location, workplace …)
- User’s activity (blog posts, comments…)
- User statistics (rating, number of friends…)
In order to show all of this on the same page, a single request requires at least three different database queries on the back-end. In some cases I imagine those queries will run for quite a while, so the user experience may suffer while waiting for the response.
This is why I decided to run only step 1 (user details) as a normal request. After that response is received, two AJAX requests are sent for steps 2 and 3. When those responses arrive, I simply place the data into their designated wrappers.
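Roughly, the client side I have in mind would look something like this (the endpoint paths, wrapper ids, and the `data-user-id` attribute are just placeholders, not an existing API):

```typescript
// Sketch of the split-loading idea: the initial page already contains the
// user's details; these two calls fill in the remaining sections later.
async function loadSection(url: string, wrapperId: string): Promise<void> {
  const wrapper = document.getElementById(wrapperId);
  if (!wrapper) return;
  try {
    const response = await fetch(url);
    const items: string[] = await response.json(); // assume each section returns a list of strings
    const list = document.createElement("ul");
    for (const item of items) {
      const li = document.createElement("li");
      li.textContent = item;
      list.appendChild(li);
    }
    wrapper.replaceChildren(list);
  } catch {
    wrapper.textContent = "Could not load this section.";
  }
}

// Once the details page has rendered, fetch the two remaining sections in parallel.
document.addEventListener("DOMContentLoaded", () => {
  const userId = document.body.dataset.userId; // assumed to be set by the server
  void loadSection(`/users/${userId}/activity`, "activity-wrapper");
  void loadSection(`/users/${userId}/stats`, "stats-wrapper");
});
```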
For me, at least, this makes more sense. However, it means three requests instead of one for every user page view.
Will this affect the system in the long term? I assume this kind of approach requires more resources, but is trading those resources for better UX a good deal, or should I stick to one plain big request?
Basically, the overall speed is even worse, so no, that does not really work out. You still have the same three queries, plus the overhead of three HTTP requests. The page might start rendering sooner, though, since it does not have to wait for everything to complete.
If you don’t like the speed, don’t lean on this AJAX technique, because it won’t change your underlying issue.
Instead, cache the data and serve the cache. You could even show an older cached version and then load the most recent data with AJAX if there is anything new. The user gets a complete page right away, and the most recent data is added after the page is shown.
That same request also updates your cache, so you get the best of both worlds.
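As a rough sketch of that “serve stale, refresh in the background” idea (the in-memory `Map` stands in for whatever cache store you actually use, and the `loadFresh` callback for whatever slow query produces the data):

```typescript
// Sketch of a "serve the cache, refresh in the background" helper.
type CacheEntry<T> = { value: T; storedAt: number };

const cache = new Map<string, CacheEntry<unknown>>();

async function getCached<T>(
  key: string,
  loadFresh: () => Promise<T>,
  maxAgeMs: number
): Promise<T> {
  const entry = cache.get(key) as CacheEntry<T> | undefined;
  const isStale = !entry || Date.now() - entry.storedAt > maxAgeMs;

  if (entry && isStale) {
    // Serve the old value now, refresh the cache in the background.
    void loadFresh().then(value => cache.set(key, { value, storedAt: Date.now() }));
    return entry.value;
  }
  if (entry) return entry.value;

  // Nothing cached yet: pay the full query cost once, then it's cheap afterwards.
  const value = await loadFresh();
  cache.set(key, { value, storedAt: Date.now() });
  return value;
}

// Usage (queryUserStats is a hypothetical slow query):
// const stats = await getCached(`stats:${userId}`, () => queryUserStats(userId), 60_000);
```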
In absolute speed, this solution would be slower. However, in perceived speed it can be faster.
Users don’t read the whole page all at once; they start by reading what they’re most interested in and may then switch to other parts of the page. This means that, in most cases, the user won’t even realize some parts of the screen aren’t fully loaded yet.
In your case, presumably:
- User details will be read first if the user is unknown; activity will be read first if the user is known
- Statistics will be read last (if at all)
So you can improve perceived speed by showing user details, activity, or both first, and loading statistics afterwards.
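For instance, something like this could defer the statistics request until the rest of the page is on screen (the endpoint path and element id are placeholders):

```typescript
// Sketch: statistics are requested only after the rest of the page has loaded.
window.addEventListener("load", async () => {
  const statsBox = document.getElementById("user-stats");
  if (!statsBox) return;
  const response = await fetch(`/users/${statsBox.dataset.userId}/stats`);
  const stats: Record<string, number> = await response.json();
  statsBox.textContent = Object.entries(stats)
    .map(([name, value]) => `${name}: ${value}`)
    .join(" · ");
});
```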
My company is heading in the direction of serving up anonymous pages with JavaScript that retrieves the user-specific information and then updates the page.
If your site goes through a heavy bootstrap process for the page and again for each of the sections, you are roughly quadrupling the server load. If you can bypass that bootstrapping waste, you’ll be fine. Caching is a good way to do that, but you can do it with or without breaking your page up into little bits.
Regardless, if you have slow queries (which seem to be your impetus), consider query optimization, caching, and spinners. If a page load is expected to be slow, show a spinner with a message like “Loading your data…”. If you can load a header and footer along with that friendly message, and then show the rest all at once or as sections come in, users don’t mind as much. It can even make them feel important.
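A placeholder for a slow section can be as simple as this (the markup, endpoint, and helper names are only examples, not a particular library):

```typescript
// Sketch: show a friendly "loading" message until the section's data arrives.
function showSpinner(section: HTMLElement): void {
  section.innerHTML = `<p class="spinner">Loading your data…</p>`;
}

async function fillSection(section: HTMLElement, url: string): Promise<void> {
  showSpinner(section);
  const response = await fetch(url);
  const html = await response.text(); // assume the server returns a rendered HTML fragment
  section.innerHTML = html;
}
```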
If you don’t have caching or query optimization, I would start there. Once you have a good caching mechanism, you’ll be able to use it anywhere, and that will help a lot. Breaking a page up into multiple requests is a similar amount of work, but it only helps that one page, and the user still has to wait as long as before.