I’m stuck on what the approach should be for caching serialized objects as part of search queries. Right now, the serializers are performing extra queries and even with prefetch_related, a typical query is taking around 40 seconds to complete.
import time

search_start = time.time()
trips = (
    Trip.objects.filter(filter_query)
    .prefetch_related(
        "trip_values",
        "trip_values__operator_trip_field",
        "trip_values__operator_trip_field__trip_field",
        "operator_service_day",
        "trip_occurrences",
        "trip_locations",
        "assigned_vehicle_formation",
    )
    .all()
)
# note: QuerySets are lazy, so unless the queryset has already been
# evaluated (e.g. via list(trips)), the database query actually runs
# during serialization below, not here
query_time = time.time() - search_start
print("Query time: ", query_time)
serialization_start = time.time()
serialized = TripListSerializer(trips, many=True)
serialization_time = time.time() - serialization_start
print("Serialization time: ", serialization_time)
return Response(serialized.data)
So, if I wanted to cache each serialized object instead, I would still have to run the same query as above, then loop over each id, fetch its cached value, and combine the results. With 200+ search results, that's 200+ cache lookups.
Is that really the approach I'm supposed to take, or is there a better alternative?
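For context, the per-object approach I'm describing wouldn't necessarily mean 200+ round trips, since Django's cache API has `cache.get_many` / `cache.set_many` for batching. A minimal self-contained sketch of the pattern (the dict-based cache, the key format, and the `serialize` callable are stand-ins for illustration; real code would use `django.core.cache.cache` and `TripListSerializer`):

```python
# In-memory dict standing in for Django's cache backend; the real
# cache.get_many / cache.set_many have the same shape.
_CACHE = {}

def cache_get_many(keys):
    return {k: _CACHE[k] for k in keys if k in _CACHE}

def cache_set_many(mapping):
    _CACHE.update(mapping)

def serialize_with_cache(trips, serialize):
    """Return serialized payloads for trips, batching cache lookups.

    `serialize` is a callable, e.g. lambda t: TripListSerializer(t).data.
    The key format "trip:serialized:<id>" is made up for illustration.
    """
    keyed = {f"trip:serialized:{t['id']}": t for t in trips}
    cached = cache_get_many(keyed)       # one round trip, not len(trips)

    results, misses = [], {}
    for key, trip in keyed.items():
        if key in cached:
            results.append(cached[key])
        else:
            data = serialize(trip)       # serialize only the cache misses
            misses[key] = data
            results.append(data)

    if misses:
        cache_set_many(misses)           # store all misses in one round trip
    return results
```

The open question would then be invalidation: each trip's cache entry has to be deleted or refreshed whenever the trip (or any related row the serializer touches) changes.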