I am currently looking for ways to optimize an application that is consuming unacceptable amounts of memory.
The process basically takes input and parses it into a tree. The tree is currently stored in memory. For an average input, it may consume 100 MB to 2 GB of memory. For larger inputs, it may consume up to 6 GB, which is more than what most of our machines can handle if we wanted to test it locally. We are afraid there will be even larger inputs in the future, and so it’s time to find a more scalable solution.
Rather than storing everything in memory, I would like to move the tree, or parts of it, into external storage. Benchmarking showed that not keeping certain parts of the tree in memory results in a 90% reduction in memory consumption.
We have a framework (well, just our own library) that lets us work with the tree through a set of standard getter and setter methods. We use standardized methods for creating, reading, updating, and deleting nodes.
Our parsers typically interact directly with nodes. For example, a typical method might look like this:

// Looks up a person node by ID and updates some of its attributes
private void managePerson(String id) {
    // returns the node with the given ID, or creates one
    // (and returns a reference to it) if it doesn't exist yet
    NodePerson node = getPerson(id);

    // add or update some attributes
    node.age.set(25);
    node.name.set("John");

    // add some points to the existing value
    int points = node.points.get();
    node.points.set(points + 5);
}
Given this information, I am thinking of using a database. However, I would also like to be able to implement the changes transparently so as not to affect a decade’s worth of code that relies on the tree.
What can I use that will allow me to move the data out into a database, while still letting developers get a reference to an object (e.g. a person node) and treat it like any other object, without any knowledge of where and how the tree is stored internally?
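One way to get that kind of transparency is interception: hand callers a proxy that looks like a plain node but routes every read and write to the storage layer. Below is a minimal sketch using Java's built-in dynamic proxies; the `Person` interface, the `StoreHandler` class, and the `Map` standing in for the database are all hypothetical names of my own, not part of any existing library.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class ProxyDemo {
    // Hypothetical node interface; callers see an ordinary object
    interface Person {
        int getAge();
        void setAge(int age);
    }

    // Routes getX/setX calls to a backing store
    // (a Map here, standing in for the real database)
    static class StoreHandler implements InvocationHandler {
        private final Map<String, Object> store = new HashMap<>();

        @Override
        public Object invoke(Object proxy, Method m, Object[] args) {
            String name = m.getName();
            if (name.startsWith("set")) {
                store.put(name.substring(3), args[0]);
                return null;
            }
            if (name.startsWith("get")) {
                return store.get(name.substring(3));
            }
            throw new UnsupportedOperationException(name);
        }
    }

    static Person createPerson() {
        return (Person) Proxy.newProxyInstance(
                Person.class.getClassLoader(),
                new Class<?>[] { Person.class },
                new StoreHandler());
    }

    public static void main(String[] args) {
        Person p = createPerson();
        p.setAge(25);
        System.out.println(p.getAge()); // prints 25
    }
}
```

The calling code never learns whether the value came from memory or a database; swapping the `Map` for real queries would not change a single caller.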
It would also be nice if the solution tried to optimize the amount of queries that are made. For example, instead of inserting 10000 person nodes one at a time, it would somehow figure out a way to batch insert it.
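Batching like that is usually done with a write-behind buffer: queue mutations and flush them in groups instead of issuing one statement per node. A rough sketch under assumed names (`BatchWriter` and `PersonRow` are illustrative, and the list of flushed batches stands in for executed batch INSERTs):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchWriter {
    // Hypothetical row to be persisted for a person node
    record PersonRow(String id, String name, int age) {}

    private final List<PersonRow> buffer = new ArrayList<>();
    private final int batchSize;
    // Stands in for batches actually sent to the database
    final List<List<PersonRow>> flushedBatches = new ArrayList<>();

    BatchWriter(int batchSize) {
        this.batchSize = batchSize;
    }

    // Queue a row; flush automatically once a full batch accumulates
    void insert(PersonRow row) {
        buffer.add(row);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    // A real implementation would issue one multi-row INSERT here
    // (e.g. JDBC addBatch/executeBatch) instead of N single inserts
    void flush() {
        if (buffer.isEmpty()) return;
        flushedBatches.add(new ArrayList<>(buffer));
        buffer.clear();
    }
}
```

With a batch size of 500, inserting 10000 person rows triggers 20 flushes rather than 10000 round-trips; a final explicit flush picks up any partial batch.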
This is meant to be temporary storage: once parsing is complete, all of the data can be discarded, since we serialize the result to a file (again, a practice established years ago; it could be changed, but we have no plans for that yet).
You basically have the problem most ORMs try to solve: you have an object model defined and you want to persist it in a non-object form. I know NHibernate solves it by subclassing each class in the model and adding functionality that tracks changes to instances of that class. Then, when the session closes, those instances are queried for the changes made, and the changes are converted into SQL updates. This is called the Unit of Work pattern.
You could do the same: define the model as a set of classes, then subclass those in a “persistence” module and add the persistence logic there. But this kind of logic is usually extremely hard and tedious to write by hand, and it has to be updated every time the model itself changes.
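To make the idea concrete, here is a minimal hand-written sketch of such a persistence subclass (an ORM like NHibernate generates the equivalent for you; the class, field, and column names here are all made up for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UnitOfWorkDemo {
    // Plain model class, as it might exist in the parser today
    static class PersonNode {
        protected int age;
        protected String name;
        void setAge(int age) { this.age = age; }
        void setName(String name) { this.name = name; }
    }

    // Persistence subclass: overrides each setter to record
    // which fields changed, in addition to mutating memory
    static class TrackedPersonNode extends PersonNode {
        final Map<String, Object> dirty = new LinkedHashMap<>();

        @Override
        void setAge(int age) {
            super.setAge(age);
            dirty.put("age", age);
        }

        @Override
        void setName(String name) {
            super.setName(name);
            dirty.put("name", name);
        }

        // At session close, turn the recorded changes into one update
        String toUpdateSql(String id) {
            StringBuilder sb = new StringBuilder("UPDATE person SET ");
            String sep = "";
            for (String col : dirty.keySet()) {
                sb.append(sep).append(col).append(" = ?");
                sep = ", ";
            }
            return sb.append(" WHERE id = '").append(id).append("'").toString();
        }
    }
}
```

The tedium the answer mentions is visible even in this toy: every field needs a mirrored override, and adding a field to `PersonNode` silently stops being tracked unless the subclass is updated too.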
Like I said in my comment, it would be best if you somehow managed to parse the whole thing into a database and work with it directly there. It doesn’t even have to be relational; a NoSQL solution could work too. Many of those allow embedded, in-memory access, so there is no need to worry about running a separate application or about performance. To me, this would be the most reasonable solution.