So, I’m trying to evaluate classification models using only my train dataset. I tried using cross-validation and splitting train dataset into train-test datasets but these two approaches are giving me very different results.
My restriction is using SVM, Perceptron, Linear Regression and Naive-Bayes for my classification problem, measure needs to be micro f1 measure and I have to get micro f1 score above 0.70 on test dataset that I don’t have (means I won’t be able to try my solution until it’s completely finished).
This is just part of my training dataset given to me:
[
{
"strofa": "Ne znam sta misli devojka ta kako je mogla biti tako zla necu da cujem za njega i nju moju bivsu drugaricu",
"zanr": "pop"
},
{
"strofa": "Mala sala ali dobar klub stojim, gledam naslonjen na stub u sali lom gore s plafona kaplje voda pravo na mikrofon u sali lom a mene gadja svaki ton",
"zanr": "rock"
},
{
"strofa": "Sinoc zvezda s neba pade jedna ljubav s njom nestade nesta zvezde divnog sjaja nesta toplih zagrljaja",
"zanr": "folk"
}
]
In whole dataset, I have only three classes: pop, rock and folk. I didn’t want to change original keys and values so I don’t break something, so “strofa” means stanza and “zanr” means genre.
I need to teach model using training dataset so I can predict genre based on stanza in test dataset. In training dataset I have 1600 entries for each class.
This is how I performed cross validation:
import pandas as pd
from sklearn.svm import SVC
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_validate
df = pd.read_json("data/train.json")
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(df["strofa"])
estimator = SVC()
scores = cross_validate(estimator=estimator,
X=X,
y=df["zanr"],
scoring="f1_micro",
return_train_score=True,
cv=30)
Average value of test_score
in scores
is around 0.54.
This is how I performed train-test validation:
import pandas as pd
from sklearn.svm import SVC
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import f1_score, classification_report
# Idea is to get random 100 samples from each class (each class has 1600 entries) and make them test dataset and rest to use as training dataset
test = df.groupby('zanr').apply(lambda x: x.sample(100)).reset_index(drop=True)
train = df.drop(test.index)
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train["strofa"])
X_test = vectorizer.transform(test["strofa"])
svc = SVC()
svc.fit(X_train, train["zanr"])
predictions = svc.predict(X_test)
score = f1_score(test["zanr"], predictions, average="micro")
report = classification_report(test["zanr"], predictions)
Score is varying around 0.90. I will also provide classification report if it means something.
precision recall f1-score support
folk 0.93 0.94 0.94 100
pop 0.92 0.85 0.89 100
rock 0.89 0.95 0.92 100
accuracy 0.91 300
macro avg 0.91 0.91 0.91 300
weighted avg 0.91 0.91 0.91 300
That’s it. I’m not really sure if I’m doing everything as I should.
PS
This solution is using basic models without preprocessing. In my original solution, I’m removing stopwords. Since it’s written in Serbian, I can’t do stemming or lemmatization, so if you have some advice as to what I can use to preprocess data so it performs better, I will be thankful. I also tried using all models from my first paragraph (although didn’t tune hyper-parameters) and TF-IDF vectorization but all of it gives the same results.