Why do we use LSTMs over other architectures for character-based language identification (LID) from short strings of text, when the LSTM’s power comes from its long-range dependency memory?
For example, Apple published a blog post stating that they use bi-LSTMs for language identification: https://machinelearning.apple.com/research/language-identification-from-very-short-strings
And then this paper tried to replicate it: https://aclanthology.org/2021.eacl-srw.6/
I was reading Karpathy’s famous article on RNNs while trying to train a small language identification model for practice. I first tried a simple, intuitive (for me) method: a Naive Bayes classifier over TF-IDF-weighted character bi- and trigram counts from the training data. My dataset has 13 languages across different language families. While my simple classifier performs well overall, it makes mistakes on closely related languages: Spanish is often classified as Portuguese, for example.
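For reference, here is roughly what my baseline looks like (a minimal sketch with scikit-learn; the toy `texts`/`labels` are placeholders for my actual 13-language dataset):

```python
# Baseline: character bi-/trigram TF-IDF features + Multinomial Naive Bayes.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Placeholder data; in my experiment this is the 13-language training set.
texts = ["el gato duerme", "o gato dorme", "the cat sleeps"]
labels = ["es", "pt", "en"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 3)),  # char bi- and trigrams
    MultinomialNB(),
)
clf.fit(texts, labels)
print(clf.predict(["un perro pequeño"]))  # predicted language code for a new string
```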
I was looking into neural network architectures and found that LSTMs are often used for language identification. After reading about RNNs and LSTMs, I can’t fully understand why LSTMs are preferred for LID, especially on short strings of text. Isn’t this counter-intuitive? LSTMs are strong at remembering long-range dependencies, which vanilla RNNs are not, so for short strings of text I would have suggested a vanilla RNN.
That Apple blog does say:

> In this article, we explore how we can improve LID accuracy by treating it as a sequence labeling problem at the character level, and using bi-directional long short-term memory (bi-LSTM) neural networks trained on short character sequences.
I feel like I’m not understanding something fundamental here.
Is the learning objective of their LSTM then to correctly classify a given character n-gram? Is that what they mean by a “sequence labeling” problem? Isn’t a sequence labeling task just a classification task at its root (“label a given input from the test set with one of N predefined labels”)?
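To make my (possibly wrong) reading concrete: I understand “sequence labeling at the character level” as the network emitting a language prediction at every character position, rather than one prediction for the whole string. Here is a sketch of what I imagine such a model looks like (PyTorch; all names, sizes, and hyperparameters are made up by me, not taken from the Apple post):

```python
import torch
import torch.nn as nn

class CharBiLSTMTagger(nn.Module):
    """Hypothetical per-character LID tagger: one language score per position."""
    def __init__(self, vocab_size, n_langs, emb_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_langs)  # forward + backward states

    def forward(self, char_ids):                   # char_ids: (batch, seq_len)
        h, _ = self.lstm(self.embed(char_ids))     # (batch, seq_len, 2 * hidden)
        return self.out(h)                         # (batch, seq_len, n_langs)

model = CharBiLSTMTagger(vocab_size=100, n_langs=13)
logits = model(torch.randint(0, 100, (1, 12)))     # a 12-character string
print(logits.shape)                                # torch.Size([1, 12, 13])
```

If that reading is right, a string-level label would then come from somehow pooling the per-character predictions, which is exactly where it starts to look like plain classification to me.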
What’s the point of training an LSTM on short character sequences when the architecture is expressly designed to handle long sequences?