I’m working on updating some code from a colleague that uses OpenSearch (the fork of Elasticsearch), and I want to make sure my understanding of the performance tradeoffs is accurate.
We have a list of names, with hundreds of thousands of entries, which we index in OpenSearch. Then, when given a new string, we want to see if that name is in the list, OR if any “similar names” are in the list. Similar names are things like “Cat Store”, “Cats Store”, “The Cats Store”, “Kitten Store”, etc. We already have code to produce all those variations, and there could be dozens to hundreds of them per name.
Currently, what the code does is index just the existing names, and then on each lookup, generate all the similar names and query the index to see if any of them exist. (Basically the equivalent of a “WHERE IN (long list)” query in SQL.) That’s a huge number of terms, so sometimes we even run up against OpenSearch’s limit on the number of terms allowed in a single query.
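To make the current approach concrete, here’s a sketch of the kind of query body the lookup effectively builds (the index name `names` and the field `name.keyword` are assumptions, not our actual schema; the `terms` query itself is standard OpenSearch DSL, and the number of terms it accepts is capped by the index-level `index.max_terms_count` setting, which defaults to 65,536):

```python
def build_terms_lookup(variants, field="name.keyword"):
    """Query-time expansion: one `terms` query listing every generated
    variant, the OpenSearch equivalent of SQL's WHERE name IN (...)."""
    return {
        "size": 1,  # we only need to know whether ANY variant matches
        "query": {"terms": {field: variants}},
    }

# Hypothetical variants produced by our existing expansion code:
variants = ["Cat Store", "Cats Store", "The Cats Store", "Kitten Store"]
query = build_terms_lookup(variants)
# client.search(index="names", body=query)  # a hit means some variant exists
```

The whole cost of the expansion is paid on every lookup: the query body grows linearly with the number of variants, which is what runs into the terms limit.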
This strikes me as a performance issue. It seems to me like it would be substantially faster to generate the list of similar names at index time, and save all 500 or whatever variations of “Cat Store” in the index; then when looking for “Cats Store” we can just search for that one term. Since they’ve been preloaded, OpenSearch can index/stem/whatever in advance, so the query should be quite fast. However, I am concerned about the amount of data that will result in, as hundreds of thousands of names * potentially hundreds of variations = many millions of possible names. (I don’t have hard numbers at the moment, unfortunately; these are approximations.)
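For comparison, the index-time version I’m imagining would look roughly like this (again a sketch under assumptions: a `variants` field mapped as `keyword`, and field/index names invented for illustration). Each document carries its precomputed variants, so a lookup collapses to a single exact `term` match:

```python
def expand_for_indexing(canonical, variants):
    """Index-time expansion: store the canonical name plus all of its
    precomputed variants in one document. The expansion cost is paid
    once, when the name list is loaded."""
    return {
        "name": canonical,
        # De-duplicate and include the canonical form itself:
        "variants": sorted(set(variants) | {canonical}),
    }

def build_single_term_lookup(name, field="variants"):
    """Lookup becomes one exact-term query against the variant list."""
    return {"size": 1, "query": {"term": {field: name}}}

doc = expand_for_indexing("Cat Store", ["Cats Store", "The Cats Store"])
# client.index(index="names", body=doc)
query = build_single_term_lookup("Cats Store")
# client.search(index="names", body=query)
```

The tradeoff moves from query size to index size: every variant becomes a term in the inverted index, which is where my millions-of-entries storage concern comes from.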
Has anyone done something along these lines? Which performance trade-off is more likely to suck less? Any particular metrics by which to inform that decision?