I am currently trying to improve performance of concurrent requests on my localhost server. My current setup is as follows:
OS: Windows 11
Server: Apache/2.4.58 (Win64)
PHP: 8.2.12
I noticed that I could only send 6 concurrent requests from my browser. As this seems to be a client-side limitation of HTTP/1.1, I've enabled HTTPS and HTTP/2 for my local Apache server. Accessing `https://localhost/` works fine now, and the browser dev tools show Protocol: `h2`.
In order to benchmark my setup, I've created a small PHP script (`sleep.php`) that sleeps for 1000 ms and then simply returns status "200 OK". For my benchmark I've called this script exactly 100 times, using multiple different testing methods:
1. Browser `fetch()` using HTTP/1.1 (Baseline Test):
When running `fetch('http://localhost/sleep.php')` in Chrome or Firefox, the request waterfall shows many blocked requests that get unblocked in groups of 6. This is expected with HTTP/1.1, as stated above.
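In case it helps, the browser-side test loop is essentially the following sketch (the helper names and the bucketing step are mine for illustration, not verbatim from my code):

```javascript
// Time a single request from dispatch to response.
async function timedFetch(url) {
  const start = performance.now();
  await fetch(url);
  return performance.now() - start;
}

// Fire all n requests at once and wait for every response.
async function runBenchmark(url, n = 100) {
  return Promise.all(Array.from({ length: n }, () => timedFetch(url)));
}

// Bucket durations into ~1 s groups to make the grouping visible,
// e.g. { 1: 6, 2: 12, 3: 24, ... } for the HTTP/2 case described below.
function bucketBySecond(durations) {
  const buckets = {};
  for (const d of durations) {
    const key = Math.round(d / 1000); // 1, 2, 3, ... seconds
    buckets[key] = (buckets[key] ?? 0) + 1;
  }
  return buckets;
}

// Usage in the browser console:
// runBenchmark('http://localhost/sleep.php').then(d => console.log(bucketBySecond(d)));
```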
2. Browser `fetch()` using HTTP/2:
This is where things get weird: when running `fetch('https://localhost/sleep.php')` in Chrome or Firefox, the requests don't get blocked like before, but they appear to get "stuck" during execution. When I call `fetch()` 100 times, the first 6 requests take ~1000 ms, the next 12 take ~2000 ms, the next 24 take ~3000 ms, the next 48 take ~4000 ms, and the remaining requests take ~5000 ms (6 + 12 + 24 + 48 = 90, so only 10 are left for that last group). The doubling pattern 6 → 12 → 24 → 48 → … seems especially weird to me.
3. Browser `axios` using HTTP/2:
I did the same test again, but this time using `axios.get()` instead (which uses `XMLHttpRequest` under the hood). Again, same results as with `fetch()`.
4. Node.js `fetch()`:
A completely different result is produced when using Node.js: all 100 requests finish in ~1000–1100 ms. No delayed responses at all.
5. Apache Benchmarking Tool:
This time I've executed `abs.exe -n 100 -c 100 https://localhost/sleep.php` from the command line. This tool's results confuse me: "Time per request" says ~2000 ms, but the min/avg/max connection times are all in the ~1000–1100 ms range as well.
Conclusion
| Benchmarking method | Total time for 100 requests |
|---|---|
| Browser, fetch, HTTP/1.1 | 17.13 s |
| Browser, fetch, HTTP/2 | 5.06 s |
| Browser, axios, HTTP/2 | 5.08 s |
| Node.js, fetch, HTTP/2 | 1.13 s |
| abs.exe, HTTP/2 | 2.04 s (?) |
Question
What is going on here? All I've found is that there should be no theoretical limit on concurrent HTTP/2 requests. The server could throttle incoming requests, but mine apparently doesn't (as the Node.js test shows). And while e.g. Firefox has established a practical limit of 100 concurrent requests (`network.http.http2.default-concurrent`), that still does not explain the 6 → 12 → 24 → 48 → … grouping, which, judging by the request waterfall, seems to be happening on the server side for HTTP/2.
Is there a way to speed things up here, even when requests are coming from a browser?