We are using FPM (FastCGI Process Manager) and Nginx in our setup, along with an in-memory repository layer to reduce the number of hits to Redis. However, we are encountering an issue where some foreach loops process many items sharing the same piece of data, such as airline IATA codes on flights. For instance, if there are 100 items with the same IATA code, our system currently performs 100 separate lookups to retrieve the same data.
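For context, here is a minimal sketch of the per-request memory-repository pattern described above. The class and method names (`AirlineRepository`, `findByIata`, the `airline:{iata}` key format, and the `Airline` model) are illustrative assumptions, not our actual code; only the Laravel `Cache` facade calls are real framework APIs.

```php
<?php

namespace App\Repositories;

use Illuminate\Support\Facades\Cache;

class AirlineRepository
{
    /** @var array<string, mixed> In-process map, lives as long as the PHP process/request. */
    private array $memory = [];

    public function findByIata(string $iata): mixed
    {
        // 1. Check the in-process memory first: 100 rows with the same IATA
        //    code now cause a single Redis hit instead of 100.
        if (array_key_exists($iata, $this->memory)) {
            return $this->memory[$iata];
        }

        // 2. Fall back to Redis via the cache store, then remember the result in memory.
        $airline = Cache::store('redis')->remember(
            "airline:{$iata}",
            now()->addHours(12),
            fn () => \App\Models\Airline::where('iata', $iata)->first() // assumed model
        );

        return $this->memory[$iata] = $airline;
    }

    // Called by the observer/listener when the underlying row changes.
    public function forget(string $iata): void
    {
        unset($this->memory[$iata]);
        Cache::store('redis')->forget("airline:{$iata}");
    }
}
```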
This approach works fine under FPM, but it presents challenges in the Supervisor worker container. We have observers set up to delete the cache and memory keys after updating the database. The issue is that the worker runs in a different container from the application, so the worker's in-memory keys are not deleted until the ForgetMemoryCache listener is actually dispatched to and handled by that worker; until then the worker keeps serving stale data from its memory map. A sketch of that invalidation path follows below.
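Here is a hedged sketch of that invalidation path, under the assumption that ForgetMemoryCache is an event carrying the IATA code and that the repository above is resolved from the container. The listener name (`ClearAirlineMemory`) and the event property are assumptions for illustration.

```php
<?php

namespace App\Listeners;

use App\Events\ForgetMemoryCache;       // assumed event carrying the IATA code
use App\Repositories\AirlineRepository; // the repository sketched earlier

class ClearAirlineMemory
{
    public function __construct(private AirlineRepository $airlines)
    {
    }

    public function handle(ForgetMemoryCache $event): void
    {
        // This clears the in-process copy of whichever container runs the
        // listener. Under FPM the next request starts with an empty memory
        // map anyway, but a long-running Supervisor worker keeps its map
        // until this handler actually executes in that worker process.
        $this->airlines->forget($event->iata);
    }
}
```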
Given this setup, do you think it’s wise to proceed with this approach? What are your thoughts?