I recently came across an algorithm design problem that I’d like some help with:
Given some letters, for example
a,e,o,g,z,k,l,j,w,n
and a dictionary of words, find the word in the dictionary that contains the most of the given letters.
My first attempt:
Let us assume that the dictionary is stored in a tree (a trie). Start by generating the permutations of the given letters recursively; we can prune the recursion tree by checking each prefix against the dictionary, and we maintain a variable that holds the longest matching string found so far.
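A rough sketch of this idea in Python (the set-based dictionary, the prefix set, and the helper names are mine, added for illustration):

    def longest_word(letters, words):
        # every prefix of every dictionary word, used to prune dead branches
        prefixes = {w[:i] for w in words for i in range(len(w) + 1)}
        best = ""

        def extend(prefix, remaining):
            nonlocal best
            if prefix in words and len(prefix) > len(best):
                best = prefix
            for i, ch in enumerate(remaining):
                candidate = prefix + ch
                if candidate in prefixes:      # prune: no word starts this way
                    extend(candidate, remaining[:i] + remaining[i + 1:])

        extend("", letters)
        return best

    print(longest_word("aeogzkljwn", {"gaze", "jowl", "angle"}))   # -> "angle"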
How is this solution? Is there a better one?
The best you can do is reduce the number of comparisons to the total number of letters in the dictionary.
sought: a,e,o,g,z,k,l,j,w,n

- Make an index of the alphabet in which the sought letters have value 1 and the rest 0:

      index = {a:1, b:0, c:0, d:0, e:1, f:0, g:1, ...}

- Iterate over each word of the dictionary, adding the index value of each letter to that word's sum. Remember the word's position and sum whenever the sum is greater than the best so far:

      max = 0; max_index = 0;
      foreach (dictionary as position => word) {
          sum = 0;
          foreach (word as letter) {
              sum += index[letter];
          }
          if (sum > max) {
              max = sum;
              max_index = position;
          }
      }

max_index now points to the word with the maximum number of the sought letters.
Some optimizations may be skipping words shorter than the current max (a shorter word cannot contain more matches), or starting with the dictionary sorted by word length in descending order and stopping once the word length drops to the current maximum found.
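A runnable sketch of this scan with the sorted-by-length early exit, in Python (function and variable names are mine):

    def best_word(sought, dictionary):
        index = {ch: 1 for ch in sought}            # missing letters count as 0
        best_sum, best = 0, None
        # longest words first, so we can stop as soon as length cannot win
        for word in sorted(dictionary, key=len, reverse=True):
            if len(word) <= best_sum:
                break
            s = sum(index.get(ch, 0) for ch in word)
            if s > best_sum:
                best_sum, best = s, word
        return best

    print(best_word("aeogzkljwn", ["banana", "gazelle", "kneel"]))  # -> "gazelle"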
This is assuming letters from the list are allowed to repeat any number of times. If they are not, make the index hold the count of each given letter; on each find of a non-zero index value, increment the sum by 1 and decrement that index entry (resetting the index for each word).

In this case the optimizations could be, on top of the previous ones: abort checking a word if fewer than max - sum letters remain, and abort the whole operation if a word containing all the letters is found.
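A sketch of this counted variant, assuming Python's collections.Counter as the index (the early-abort checks mirror the optimizations above; names are mine):

    from collections import Counter

    def best_word_no_repeats(sought, dictionary):
        base = Counter(sought)                     # how many of each letter we have
        best_sum, best = 0, None
        for word in dictionary:
            counts = base.copy()                   # "reset index on each word"
            s = 0
            for i, ch in enumerate(word):
                if counts[ch] > 0:
                    counts[ch] -= 1
                    s += 1
                if s + (len(word) - i - 1) <= best_sum:
                    break                          # cannot beat the best any more
            if s > best_sum:
                best_sum, best = s, word
                if best_sum == len(sought):        # every given letter matched
                    break
        return best

    print(best_word_no_repeats("aeogzkljwn", ["gazelle", "jowl"]))  # -> "gazelle"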
I’d like to add to SF’s solution a bit. Not sure if I am correct in my analysis, but anyway:
If your preprocessing is free (assuming the search will be run enough times), you can preprocess each word in the dict by producing for it a bit vector like the index SF mentioned. Each word becomes a number like

    01000100011101...   (one bit for each letter of the alphabet)

where each 1 indicates whether the word contains that letter (let's skip the case where it has a letter twice or more, for simplicity).

You can further arrange this transformed dict by the number of distinct letters each word has, so you start searching with the ones that contain the most letters and can cut off the search early once you enter the range that can't possibly have more matches than you already have.

When iterating over each word of the dictionary (now reduced to numbers), you simply AND this number with the number produced from the set of letters you are looking for and calculate the Hamming weight of the result (the number of 1 bits), which is exactly the count of matching letters. (XOR would instead count differing bits, penalizing letters the word has outside the set.)
https://en.wikipedia.org/wiki/Hamming_weight
As the size of the input is always constant (the size of the alphabet instead of the variable word size), this can be done efficiently with publicly available algorithms. This way you won't have to compare letters one by one. Dealing with double/triple letters could be handled by augmenting the structures, I suppose.
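A minimal sketch of this in Python, assuming lowercase a-z words and using plain integers as the bit vectors (names are mine):

    def best_word_bitmask(sought, dictionary):
        def mask(word):
            m = 0
            for ch in word:
                m |= 1 << (ord(ch) - ord('a'))     # one bit per letter a-z
            return m

        target = mask(sought)
        # precompute masks, most distinct letters first, for the early cutoff
        prepared = sorted(((mask(w), w) for w in dictionary),
                          key=lambda mw: bin(mw[0]).count("1"),
                          reverse=True)
        best_count, best = -1, None
        for m, w in prepared:
            if bin(m).count("1") <= best_count:
                break                              # no later word can do better
            matched = bin(m & target).count("1")   # Hamming weight of the AND
            if matched > best_count:
                best_count, best = matched, w
        return best

    print(best_word_bitmask("aeogzkljwn", ["banana", "gazelle", "kneel"]))  # -> "gazelle"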
Just want to provide an alternative solution. If the number of given letters is small, you can generate all permutations of every non-empty subset of the letters and check whether the dictionary contains each one.

For example, if you only have the 3 letters "a", "b" and "c", all such permutations are "abc", "acb", "bac", "bca", "cab", "cba", "ab", "ba", "ac", "ca", "bc", "cb", "a", "b" and "c" (15 in total). You can go from the longest ones to the shortest and check whether the dict contains each word.
It is more efficient when the dictionary is large and the number of random letters is small.
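A sketch with itertools, assuming the dictionary supports fast membership tests (the function name is mine):

    from itertools import combinations, permutations

    def longest_formable_word(letters, dictionary):
        words = set(dictionary)                    # O(1) membership tests
        # try every ordering of every subset, longest first
        for size in range(len(letters), 0, -1):
            for combo in combinations(letters, size):
                for perm in permutations(combo):
                    candidate = "".join(perm)
                    if candidate in words:
                        return candidate
        return None

    print(longest_formable_word("abc", {"cab", "ba", "a"}))   # -> "cab"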