I’m trying to load test an API whose responses expose some additional custom metrics and attributes that aren’t fully controllable from the request side but are known to correlate with performance: things like an internal cache hit/miss flag (boolean), intermediate values retrieved during processing that affect the overall servicing time (scalar), etc.
What would the recommended pattern(s) be for capturing this additional request-level information so that I can:
- Filter or compare latency charts/metrics by the other metric values
- View the distribution of the other non-latency metrics over the duration of the test
It looks like the `context` parameter on the request event might be a way for my custom User class to emit this data? But I’m not sure from the docs whether that would actually allow slicing the results.
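This is roughly what I have in mind (based on the Locust 2.x request event signature; the response header names are just placeholders for my API's actual fields):

```python
from locust import HttpUser, task, events

@events.request.add_listener
def on_request(request_type, name, response_time, response_length,
               response, context, exception, **kwargs):
    if exception or response is None:
        return
    # `context` holds whatever the User/request attached before the call;
    # the response-derived metrics have to be read from `response` here.
    cache_hit = response.headers.get("X-Cache-Hit")            # placeholder header
    upstream_ms = response.headers.get("X-Upstream-Time-Ms")   # placeholder header
    print(name, response_time, cache_hit, upstream_ms, context)

class ApiUser(HttpUser):
    def context(self):
        # Merged into the context of every request this user issues.
        return {"scenario": "baseline"}

    @task
    def get_item(self):
        # Per-request context works for things known before the call,
        # but response-side values only show up in the listener above.
        self.client.get("/items/1", context={"item_id": 1})
```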
If analysis functionality in the Locust UI itself is limited, maybe there’s a way to dump the data to a structured file (e.g. Parquet) that we could analyze offline, but in a way that minimizes the impact on achievable throughput per worker process?
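To make that concrete, this is the sort of thing I'm picturing: buffer rows in each worker process and flush to a per-worker Parquet file in batches and at test stop, so the per-request cost is just an append (assumes pyarrow is available; header names are placeholders as above):

```python
import os
import pyarrow as pa
import pyarrow.parquet as pq
from locust import events

_SCHEMA = pa.schema([
    ("start_time", pa.float64()),
    ("name", pa.string()),
    ("response_time_ms", pa.float64()),
    ("cache_hit", pa.string()),
    ("upstream_ms", pa.string()),
])
_rows = []       # buffered in this worker process
_writer = None   # one Parquet file per worker to avoid write contention

def _flush():
    global _writer, _rows
    if not _rows:
        return
    if _writer is None:
        _writer = pq.ParquetWriter(f"requests_{os.getpid()}.parquet", _SCHEMA)
    _writer.write_table(pa.Table.from_pylist(_rows, schema=_SCHEMA))
    _rows = []

@events.request.add_listener
def record(request_type, name, response_time, response_length,
           response, context, exception, start_time=None, **kwargs):
    if exception or response is None:
        return
    _rows.append({
        "start_time": start_time,
        "name": name,
        "response_time_ms": float(response_time),
        "cache_hit": response.headers.get("X-Cache-Hit"),            # placeholder header
        "upstream_ms": response.headers.get("X-Upstream-Time-Ms"),   # placeholder header
    })
    if len(_rows) >= 5000:  # flush in batches so the hot path stays cheap
        _flush()

@events.test_stop.add_listener
def on_test_stop(environment, **kwargs):
    _flush()
    if _writer is not None:
        _writer.close()
```

The idea being that offline, something like pandas (`pd.read_parquet(...).groupby("cache_hit")["response_time_ms"].describe()`) would cover the slicing and distribution questions above.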