I am trying to develop a JavaScript spellchecker that doesn’t use a dictionary and, given a single word, can detect whether that word is spelled correctly or not. Right now, I just have a list of substrings that never occur within words, and if a word contains one of those substrings, I count it as misspelled. For example, I would have the substring “lll”, and if a word contains “lll” it would be counted as misspelled (such as “I’lll”).
However, I’m finding this doesn’t work as well as expected. Most misspelled words seem to involve letters in the wrong order, or words that don’t follow common spelling rules. The above approach doesn’t catch either of these cases. For example, there is no good substring that catches the misspelling “accidant”.
I’m looking for a more effective method of determining whether a word is probably misspelled, ideally one that handles letters in the wrong order and accidental presses of keys adjacent to the correct letter (though solutions to other common causes of misspellings are welcome too).
This is English-only, so it doesn’t need to work with other languages.
Also, false positives are a much larger problem for me than false negatives, so I would prefer to err on the side of saying words are spelled correctly when in fact they are not.
Since I’m working on a similar problem myself, I can provide some guidance.
The quickest way I’ve found to find errors (though not necessarily correct them) is to use n-gram searches. You can store these in an array whose size is the alphabet size raised to the nth power. Given an array @words
that contains every word in a corpus of texts in your language, and using trigrams (n-grams of three letters):
my %ngrams;
my $ngramSize = 3;
for my $word (@words) {
    # Skip words too short to contain a full n-gram.
    next if length($word) < $ngramSize;
    # Count every n-gram (here, trigram) that occurs in the word.
    $ngrams{substr($word, $_, $ngramSize)}++ for (0 .. length($word) - $ngramSize);
}
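Since you’re working in JavaScript, the same counting step might look roughly like this (a sketch; countNgrams and words are just my names, and words is assumed to be an array of lowercase corpus words):

// Count how often each n-gram (default: trigram) occurs across a word list.
function countNgrams(words, n = 3) {
  const counts = {};
  for (const word of words) {
    if (word.length < n) continue;              // too short to contain an n-gram
    for (let i = 0; i + n <= word.length; i++) {
      const gram = word.slice(i, i + n);
      counts[gram] = (counts[gram] || 0) + 1;
    }
  }
  return counts;
}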
You would probably want to normalize the data in some way so you can store it more efficiently. For example, you could take the median occurrence count and map it to 255, clipping any higher values and scaling anything lower proportionally. That would let you store, for instance, a rough English spell checker using trigrams in about 17K, or even as little as 2K if you’re willing to settle for a black-or-white good/bad flag per trigram (and since most possible trigrams will never occur, you can probably compress even further).
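As a rough sketch of that normalization (the scaling choices here are just one possibility, and the names are mine):

// Squash raw trigram counts into one byte each: the median count maps to 255,
// anything above is clipped, anything below is scaled proportionally.
function normalizeCounts(counts) {
  const values = Object.values(counts).sort((a, b) => a - b);
  const median = values[Math.floor(values.length / 2)] || 1;
  const scaled = {};
  for (const [gram, c] of Object.entries(counts)) {
    scaled[gram] = Math.min(255, Math.round((c / median) * 255));
  }
  return scaled;
}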
Because that would load very quickly, you could use it to generate candidates with roughly 90% accuracy, and then, once a full and proper spell checker has downloaded, use that instead, prioritizing the likely-misspelled words before checking the likely-correct ones. If you expect the user to use your site regularly, you can also save the dictionary to local storage for virtually instant recall, rather than having them download it every single time.
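Something along these lines, as a sketch (looksMisspelled, normalizedGrams and wordListUrl are placeholder names, not anything standard):

// Stage one: flag words containing a trigram we've never seen in the corpus.
function looksMisspelled(word, normalizedGrams, n = 3) {
  for (let i = 0; i + n <= word.length; i++) {
    if (!(word.slice(i, i + n) in normalizedGrams)) return true;  // unseen trigram
  }
  return false;
}

// Stage two: fetch the full word list once, then cache it in local storage
// so repeat visitors skip the download entirely.
async function loadDictionary(wordListUrl) {
  const cached = localStorage.getItem('dictionary');
  if (cached) return new Set(JSON.parse(cached));
  const words = await (await fetch(wordListUrl)).json();
  localStorage.setItem('dictionary', JSON.stringify(words));
  return new Set(words);
}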
But English, with its incredibly irregular spelling and constant importing of words without adaptation, absolutely requires a dictionary (although, because English is primarily an analytic language, you actually can keep the whole dictionary in memory, unlike with highly inflected or polysynthetic languages).
Simple heuristics like “no triples” and similar should be safe, and there may be others like that which you should add, but they’ll catch only a few of the possible errors. For the rest, you won’t get far without having a store of as many words as possible.
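For example, the “no triples” rule is a one-liner (one way to phrase it, at least):

// Flag any letter repeated three or more times in a row, e.g. "I'lll".
const hasTripledLetter = (word) => /([a-z])\1\1/i.test(word);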
The best way I know of to compress a dictionary losslessly while still having fast lookup is a DAWG/MA-FSA; see http://stevehanov.ca/blog/index.php?id=115 (and the introductions to tries on that same site) for a friendly introduction. The main attraction of the DAWG is that you can very quickly check whether something is misspelt, while typically using less memory than a hash table would.
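The lookup side is tiny; building and minimizing the automaton is what the linked article walks through. A sketch, assuming each node is a plain object whose keys are child letters and whose end flag marks a complete word (my own assumed shape, not a fixed format):

// Membership test against a trie or DAWG built with the node shape above.
function inDawg(root, word) {
  let node = root;
  for (const ch of word) {
    node = node[ch];
    if (!node) return false;   // no outgoing edge for this letter
  }
  return !!node.end;           // true only if we landed on a word-final node
}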
You can also store these on-disk (or send them over the network) using 4 bytes per labelled edge. Taking only the 63001 words matching ^[a-z]+$ from the English aspell dictionary /usr/share/dict/words, I get 49829 edges, which should be possible to store in ~194K – not much larger than angular.js 😉 But on the other hand, if you have mod_deflate on your server, gzip handles the a-z’s in 163K (you might also find http://pieroxy.net/blog/pages/lz-string/index.html interesting).
Read also http://norvig.com/spell-correct.html if you haven’t already.
But then you say you want to err on the side of false negatives; that’s going to be harder if you don’t want to include a big dictionary. You can of course skip checking any word matching [^a-z] if you know that your dictionary only contains a-z words. And you can stick with single-edit-distance errors, since flagging words at higher edit distances is more likely to produce false positives.
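One reading of that in code (a sketch, essentially Norvig’s edits1 ported to JavaScript; edits1, probablyMisspelled and dictionary are my names, and dictionary is assumed to be a Set of lowercase words): only flag a word when it is not in the dictionary but is one edit away from a word that is, i.e. it looks like a typo of something you know. Everything else passes, erring toward false negatives.

const LETTERS = 'abcdefghijklmnopqrstuvwxyz';

// Every string one edit away from `word`: deletions, transpositions,
// replacements and insertions.
function edits1(word) {
  const results = new Set();
  for (let i = 0; i <= word.length; i++) {
    const left = word.slice(0, i), right = word.slice(i);
    if (right) results.add(left + right.slice(1));                                   // deletion
    if (right.length > 1) results.add(left + right[1] + right[0] + right.slice(2));  // transposition
    for (const c of LETTERS) {
      if (right) results.add(left + c + right.slice(1));                             // replacement
      results.add(left + c + right);                                                 // insertion
    }
  }
  return results;
}

function probablyMisspelled(word, dictionary) {
  if (/[^a-z]/.test(word)) return false;         // outside a-z: skip, the dictionary can't judge it
  if (dictionary.has(word)) return false;
  for (const candidate of edits1(word)) {
    if (dictionary.has(candidate)) return true;  // one edit away from a known word: likely a typo
  }
  return false;                                  // unknown but far from everything: let it pass
}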
Another alternative, if you want to go all the way on “false-negatives-only”, is to find a big list of common spelling errors and simply use that instead of a list of correct words. That’ll be very “undynamic”, but if you work from frequency (prioritizing the most frequent errors), it might end up being a bit useful. You can also pre-spell your expected input if you already know a lot about what kind of text will be written; e.g. if you want to make a speller for a forum where people talk about a specific subject, you can first scrape all the text in that forum, run it through aspell, catch all the errors (and suggestions), and only store the top N of those.
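In code that variant is almost trivial (the list contents below are just illustrative examples, not a real dataset):

// Flag only words on a known-misspellings list, e.g. the top N errors
// scraped from your own forum's text and run through aspell.
const commonMisspellings = new Set(['accidant', 'recieve', 'teh']);

const isKnownMisspelling = (word) => commonMisspellings.has(word.toLowerCase());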