For an assignment in my master's course we were asked to build bigram and trigram language models and apply different forms of smoothing to see how each method affects the perplexity of the model. The smoothing methods are Good-Turing smoothing and Kneser-Ney smoothing, and I am stuck on the Good-Turing part.
My current issue: when I train the model on a training set and then calculate the perplexity on that same set, I get a certain value, but if I increase the size of the training set and train again, the perplexity I calculate is higher than with the smaller set. As I understand it, perplexity reflects the capability of the model, and a larger training set should predict continuations more accurately, which should result in a lower perplexity, not the higher one I am seeing.
I've tried looking around for more details on Good-Turing smoothing, but I was unable to find a formula that would correct my problem. Below are the formulas I have been using for my calculations.
For the smoothing I used my professor's formula, which is the Good-Turing smoothed count:

c* = (c + 1) * N_{c+1} / N_c

where N_c is the number of bigrams that appear exactly c times, so for unseen bigrams (c = 0) I am calculating (1 * N_1) / N_0 (N_1 and N_0 being the number of bigrams that appear once and zero times, respectively). For the probability of an individual unseen bigram (one that doesn't appear in the training set) I used this formula:

P(unseen bigram) = (N_1 / N_0) / N

where N is the total number of bigrams in the training set. As for N_0, I used the approximation formula:

N_0 ≈ V^2 - N_b

where V is the size of the vocabulary and N_b is the number of unique bigrams in the model.

As for the perplexity calculation, I used this formula:

PP = 2^( -(1/m) * sum_{i=1..m} log2 P(w_i | w_{i-1}) )

where m in this case is the number of unique bigrams in the set.
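To double-check my reading of the smoothed-count formula, here is a tiny standalone snippet that computes c* for a single count, using a made-up count-of-counts table (the values 100, 40 and 15 are invented purely for illustration, they are not from my data):

#include <iostream>
#include <unordered_map>

int main() {
    // Made-up count-of-counts table: N_1 = 100, N_2 = 40, N_3 = 15.
    std::unordered_map<int, int> countOfCounts = { { 1, 100 }, { 2, 40 }, { 3, 15 } };

    int c = 1; // an observed bigram count
    // c* = (c + 1) * N_{c+1} / N_c
    double smoothedCount = (c + 1) * (double)countOfCounts[c + 1] / countOfCounts[c];
    std::cout << smoothedCount << std::endl; // prints 0.8
    return 0;
}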
With all of that noted, here is my current code. I have a bigram model defined as:
#include <cmath>
#include <iostream>
#include <set>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

using namespace std;

typedef unordered_map<string, int> UnigramCounts;
typedef unordered_map<string, UnigramCounts> BigramCounts;

class BigramModel {
private:
    UnigramCounts unigrams;
    BigramCounts bigrams;
    double modelUnseenBigramProbability;

public:
    void train(const vector<string>& sentences);
    void smoothGoodTuring();
    void serialize(const string& filename);
    void deserialize(const string& filename);
    double calculatePerplexity(const vector<string>& words);
};
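For context, this is roughly how I drive the model from main() (it sits in the same file, below the class and the function definitions; the token vector here is just a toy stand-in for my real preprocessed training set):

int main() {
    // Toy stand-in for the preprocessed training tokens.
    vector<string> words = { "the", "cat", "sat", "on", "the", "mat" };

    BigramModel model;
    model.train(words);
    model.smoothGoodTuring();

    // I evaluate on the same tokens I trained on, which is where the unexpected perplexity shows up.
    cout << model.calculatePerplexity(words) << endl;
    return 0;
}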
My training function is a simple loop over the preprocessed words of my training set that builds the unigram and bigram counts like so:
void BigramModel::train(const vector<string>& words) {
    for (size_t i = 0; i < words.size(); ++i) {
        // Count the current word as a unigram.
        unigrams[words[i]]++;
        // Count the (current word, next word) pair as a bigram.
        if (i < words.size() - 1) {
            bigrams[words[i]][words[i + 1]]++;
        }
    }
}
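For example, after training on the toy sequence from above, the two maps hold the counts below (written out by hand purely to show the layout of UnigramCounts and BigramCounts, this is not code from my project):

// Contents of unigrams and bigrams after train() on
// { "the", "cat", "sat", "on", "the", "mat" }, written out by hand.
UnigramCounts toyUnigrams = {
    { "the", 2 }, { "cat", 1 }, { "sat", 1 }, { "on", 1 }, { "mat", 1 }
};
BigramCounts toyBigrams = {
    { "the", { { "cat", 1 }, { "mat", 1 } } },
    { "cat", { { "sat", 1 } } },
    { "sat", { { "on", 1 } } },
    { "on",  { { "the", 1 } } }
};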
Then I use the smoothing function to go over the model, smooth the bigram counts, and calculate the unseen-bigram probability (which Good-Turing is supposedly good at), storing that in the model class like so:
void BigramModel::smoothGoodTuring() {
    unordered_map<int, int> countOfCounts;
    int totalBigrams = 0;
    int uniqueBigrams = 0;
    int unseenBigrams = 0;

    // Build the count-of-counts table N_c and gather totals.
    for (const auto& prevWord : bigrams) {
        for (const auto& entry : prevWord.second) {
            countOfCounts[entry.second]++;
            totalBigrams += entry.second;
            uniqueBigrams++;
            if (entry.second == 1) {
                unseenBigrams++; // bigrams that appear exactly once (N_1)
            }
        }
    }

    // N_0 estimate: V^2 possible bigrams minus the observed unique bigrams.
    int vocabulary = unigrams.size();
    double zeroProbBigramEstimate = vocabulary * vocabulary - uniqueBigrams;
    double unseenBigramProb = ((double)unseenBigrams / zeroProbBigramEstimate) / totalBigrams;
    modelUnseenBigramProbability = unseenBigramProb;

    // Apply Good-Turing smoothing to bigram probabilities
    for (auto& prevWord : bigrams) {
        for (auto& entry : prevWord.second) {
            int count = entry.second;
            if (countOfCounts.find(count + 1) != countOfCounts.end() && countOfCounts.find(count) != countOfCounts.end()) {
                double smoothedCount = ((count + 1) * ((double)countOfCounts[count + 1] / countOfCounts[count])) / totalBigrams;
                entry.second = round(smoothedCount * totalBigrams);
            }
            else {
                entry.second = round(unseenBigramProb * totalBigrams);
            }
        }
    }
}
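As a sanity check on the first loop, here is a tiny standalone program that runs just the count-of-counts step on hand-made bigram counts (the numbers are invented purely for illustration):

#include <iostream>
#include <string>
#include <unordered_map>
using namespace std;

int main() {
    // Hand-made bigram counts, just to see what the N_c table looks like.
    unordered_map<string, unordered_map<string, int>> toyBigrams = {
        { "the", { { "cat", 2 }, { "mat", 1 } } },
        { "cat", { { "sat", 1 } } }
    };

    unordered_map<int, int> countOfCounts;
    for (const auto& prevWord : toyBigrams) {
        for (const auto& entry : prevWord.second) {
            countOfCounts[entry.second]++;
        }
    }

    for (const auto& nc : countOfCounts) {
        cout << "N_" << nc.first << " = " << nc.second << endl; // prints N_1 = 2 and N_2 = 1
    }
    return 0;
}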
Finally, I calculate the perplexity like so:
double BigramModel::calculatePerplexity(const vector<string>& words) {
    double logProb = 0;
    int uniqueBigrams = 0;
    double prob = 0;
    set<pair<string, string>> uniqueBigramsSet;

    // Collect the set of unique bigrams in the evaluation text.
    for (int i = 1; i < words.size(); ++i) {
        if (uniqueBigramsSet.find({ words[i - 1], words[i] }) == uniqueBigramsSet.end()) {
            // Increment unique bigrams count and add to the set
            uniqueBigrams++;
            uniqueBigramsSet.insert({ words[i - 1], words[i] });
        }
    }

    // Accumulate log2 probabilities.
    for (int i = 1; i < uniqueBigrams; ++i) {
        double probability;
        if (uniqueBigramsSet.find({ words[i - 1], words[i] }) != uniqueBigramsSet.end() && bigrams[words[i - 1]][words[i]] != 0) {
            // Seen bigram: smoothed bigram count divided by the unigram count of the previous word.
            probability = (double)bigrams[words[i - 1]][words[i]] / unigrams[words[i - 1]];
        }
        else {
            probability = modelUnseenBigramProbability;
        }
        logProb += log2(probability);
    }

    cout << logProb << endl;
    double perplexity = logProb / -uniqueBigrams;
    return perplexity;
}
I've tried changing the formulas many times and this is as close as I get to the "correct" answer, but I still get higher perplexities with larger training sets. I'm sure my mistake is something minimal, but I don't see it after staring at this for a few hours straight.
Let me know if anyone needs any other functions for testing, and thanks for any and all help.