I've reached a design/architectural issue in the large application I'm working on, to the point of writing possible solutions on a chalkboard. Our product consists of a client-server information system whose implementation is built on REST principles.
A particular use case requires multiple resources (most of them available only server-side) in order to process raw data into actual information before it is sent to the client. You can think of it as massive processing of queried resources to present them in a certain order; it's almost like a huge sorting algorithm.
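To make the boundary concrete, here is a minimal sketch of what this looks like from the client's point of view; the endpoint name (`/api/sorted-results`) and the `SortedItem` shape are placeholders I made up for illustration, not our actual API:

```typescript
// Hypothetical REST call: the server does the heavy processing and the
// client only receives the final, already-sorted collection.
interface SortedItem {
  id: string;
  rank: number;      // position assigned by the server-side algorithm
  payload: unknown;  // the actual resource data shown to the user
}

async function fetchSortedResults(query: string): Promise<SortedItem[]> {
  // `/api/sorted-results` is a made-up endpoint name for illustration.
  const response = await fetch(`/api/sorted-results?q=${encodeURIComponent(query)}`);
  if (!response.ok) {
    throw new Error(`Server returned ${response.status}`);
  }
  return response.json() as Promise<SortedItem[]>;
}
```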
Because most of the resources used by the algorithm are available only server-side (they serve no other purpose on the client), and because significant processing power is needed to run it, it's not practical to reproduce this client-side. Keeping it server-side also helps prevent possible reverse engineering.
To fulfill a requirement, we need to implement local storage in the client-side application, which means we would need to present this sorted information even when no network is available. To make that possible, we are considering duplicating the database table for this resource into a table used only to store the already-sorted results.
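This is roughly the duplication we have in mind, sketched for a browser client using IndexedDB; the database and store names, the `rank` field used to preserve the server-computed order, and the replace-on-fetch strategy are all assumptions on my part, not a settled design:

```typescript
// Minimal offline cache for the already-sorted results, using IndexedDB.
// Names are placeholders; SortedItem mirrors the shape sketched earlier.
interface SortedItem {
  id: string;
  rank: number;      // server-assigned position in the sorted output
  payload: unknown;
}

const DB_NAME = 'offline-cache';
const STORE = 'sortedResults';

function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(DB_NAME, 1);
    req.onupgradeneeded = () => {
      // One record per item, keyed by id; `rank` preserves the server order.
      req.result.createObjectStore(STORE, { keyPath: 'id' });
    };
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Replace the cached copy whenever fresh results arrive from the server.
async function cacheSortedResults(items: SortedItem[]): Promise<void> {
  const db = await openDb();
  const tx = db.transaction(STORE, 'readwrite');
  tx.objectStore(STORE).clear();
  for (const item of items) {
    tx.objectStore(STORE).put(item);
  }
  return new Promise((resolve, reject) => {
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

// When offline, read the cached rows and restore the server-computed order.
async function readCachedResults(): Promise<SortedItem[]> {
  const db = await openDb();
  const req = db.transaction(STORE, 'readonly').objectStore(STORE).getAll();
  return new Promise((resolve, reject) => {
    req.onsuccess = () =>
      resolve((req.result as SortedItem[]).sort((a, b) => a.rank - b.rank));
    req.onerror = () => reject(req.error);
  });
}
```

The point is that the client never re-runs the algorithm: it only stores and replays the order the server already computed.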
Given all this, what are the advantages and disadvantages of this approach, and what best practices should be taken into consideration? I'm also open to any design pattern I'm not aware of that is used to solve this kind of problem.