This is probably a noob question, but I haven’t been able to find much by googling about streaming a big array of items from the server to the browser.
Let’s say I have a list of 10k (or even just 100) bank transactions that I must get into the browser for some reason. Naturally, I want to paginate this, maybe loading 10 items at a time, so that if the client’s network is bad the user doesn’t stare at a loading spinner for too long.
Which option is better?
-
Traditional Pagination: send a request, wait for 10 items to load, render them, send the next request. This is the easiest and most common approach. But we have to wait for all 10 items, and 10 is kind of a magic number: if the network is really good, we might want to load 1k at a time, and if it’s really bad, perhaps 5 items per page is better.
In the context of reactive programming this also seems like a waste: we stream the rows from the DB, but right before writing the response we stop and buffer them up into a page? (Rough client-side sketch below.)
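Roughly what I mean by #1 on the client; the /api/transactions endpoint, its offset/limit parameters, and the field names are all made up for illustration:

```typescript
// Option #1: classic page-by-page fetching.
// Hypothetical REST endpoint /api/transactions?offset=..&limit=.. returning a JSON array.
type Transaction = { id: string; amount: number; description: string };

async function loadAllPaginated(render: (tx: Transaction) => void, pageSize = 10) {
  let offset = 0;
  while (true) {
    const res = await fetch(`/api/transactions?offset=${offset}&limit=${pageSize}`);
    const page: Transaction[] = await res.json();
    page.forEach(render); // nothing renders until the whole page has arrived and been parsed
    if (page.length < pageSize) break; // last page reached
    offset += pageSize;
  }
}
```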
-
Streaming Pagination: establish a connection, send a request message, parse and render 10 items one-by-one as they come in, send the next request, … , and at the end close the connection. Same as above, but now we don’t wait for all 10 items before rendering! If the connection is good, constant re-rendering might be overkill, but we can throttle that on the client, so it shouldn’t be an issue. (Sketch below.)
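What I picture for #2 is a made-up “give me N more” protocol over a WebSocket; the endpoint and the message shapes are pure assumptions on my part:

```typescript
// Option #2: request a page, render items one-by-one as they stream in,
// then request the next page. The { next }, { item }, { endOfPage }, { done }
// message shapes are invented for this sketch, not a real protocol.
type Transaction = { id: string; amount: number; description: string };

function streamPaginated(render: (tx: Transaction) => void, pageSize = 10) {
  const ws = new WebSocket("wss://example.com/transactions");
  ws.onopen = () => ws.send(JSON.stringify({ next: pageSize }));
  ws.onmessage = (event) => {
    const msg = JSON.parse(event.data);
    if (msg.item) {
      render(msg.item as Transaction); // render each item as soon as it arrives
    } else if (msg.endOfPage) {
      ws.send(JSON.stringify({ next: pageSize })); // only ask for more once this page is handled
    } else if (msg.done) {
      ws.close();
    }
  };
}
```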
-
Streaming: establish a connection, send an unbounded request, parse and render all of the items one-by-one as they arrive, then close the connection. Same as above, but we don’t need to keep sending requests for “10 more”. (Sketch below.)
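And what I picture for #3 is one long response parsed incrementally, e.g. as newline-delimited JSON; the endpoint and the wire format are assumptions:

```typescript
// Option #3: a single unbounded response, parsed and rendered incrementally.
// Assumes a hypothetical endpoint streaming newline-delimited JSON.
type Transaction = { id: string; amount: number; description: string };

async function streamAll(render: (tx: Transaction) => void) {
  const res = await fetch("/api/transactions/stream");
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the trailing partial line for the next chunk
    for (const line of lines) {
      if (line.trim()) render(JSON.parse(line) as Transaction);
    }
  }
}
```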
I left the protocols out of this as I don’t know much about them, but I am aware that plain HTTP is a good fit for #1; long-lived HTTP responses are a good fit for #2 (though I don’t think any of my tools support it: Apollo GraphQL client, Spring GraphQL WebFlux – I know that’s how Next.js streams the webpage within the same request as it loads); WebSockets are good for #2 and maybe #3; and RSocket is good for all three options.
I see a problem with #2 and #3 where the client can get overwhelmed with processing items while the connection is still open – is that a big deal? In the case of #2, it won’t request the next page until it is done processing the current one. In the case of #3, it will need to tell the server to stop explicitly: with WebSockets that would have to be custom logic, and with RSocket it would happen naturally via backpressure (and if it’s done via backpressure, will the server pause, or will it keep reading the DB and buffering until it runs out of memory?).
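For the client-overwhelmed part, my (possibly wrong) understanding is that with a pull-based reader the client can simply not ask for the next chunk until it has processed the current one, and the transport’s flow control does the rest; whether the server then pauses the DB query or buffers in memory is up to the server stack, which is part of my question. Something like:

```typescript
// Pull-based consumption as a crude form of backpressure: by awaiting the
// processing of each chunk before calling read() again, the client stops
// draining the socket. Whether the server pauses its DB read or buffers
// in memory depends entirely on the server framework.
async function consumeWithBackpressure(
  body: ReadableStream<Uint8Array>,
  process: (chunk: Uint8Array) => Promise<void>, // e.g. parse + render + yield to the UI
) {
  const reader = body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    await process(value); // don't request the next chunk until this one is handled
  }
}
```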
Let me know if there’s anything else to consider, but it seems to me none of these options is ideal, though #1 is the easiest. The perfect solution would be lightweight end-to-end, with items smoothly popping up in the UI as they come straight from the backend DB.