I trained tf-idf on a pre-tokenized (unigram tokenizer) dataset that I converted from list[list(token1, token2, token3, ...)] to an RDD, using pyspark's HashingTF and IDF implementations. I tried to save the RDD with the tf-idf values, but when I saved the output to a file and then loaded it back, the loaded RDD contained the same SparseVectors as the original, only with their order randomized.
All the parts of my code that matter:
from pyspark.mllib.feature import HashingTF, IDF
from pyspark.sql import SparkSession
from datasets import load_from_disk # Hugging Face datasets
from tokenizers import Tokenizer # Hugging Face tokenizers
spark = SparkSession.builder.appName('tf-idf').getOrCreate()
sc = spark.sparkContext
data = load_from_disk("pre cleaned data")
tokenizer = Tokenizer.from_file("pre trained tokenizer")
tokenized_data = tokenizer.encode_batch(data["content"])
tokenized_data = [doc.tokens for doc in tokenized_data] #converting tokenized data to ONLY list of each document tokenized
rdd_data = sc.parallelize(tokenized_data) #converting to RDD so it works with IDF
hashingTF = HashingTF(numFeatures=1 << 21)
htf_data = hashingTF.transform(rdd_data)
idf = IDF().fit(htf_data)
tfidf_data = idf.transform(htf_data)
tfidf_data.saveAsPickleFile("some/path")
print(tfidf_data.collect()) # Outputs a list of sparse vectors containing numFeatures and a dictionary of hash and tf-idf values, looks like this: list[SparseVector(NumFeatures, {hash_value: tf-idf_value, ...}), ...]
# ----- pretend like you are in a new function or file now -----
spark = SparkSession.builder.appName('tf-idf').getOrCreate()
sc = spark.sparkContext
ti = sc.pickleFile("some/path")
print(ti.collect()) # Outputs the same kind of list of SparseVectors as above, HOWEVER this time the order of the SparseVectors does not match the order at save time; every SparseVector still exists somewhere in the RDD (I checked this), loading just seemingly randomizes the order
I also printed the types: print(type(tfidf_data)) gives <class 'pyspark.rdd.RDD'> and print(type(ti)) likewise gives <class 'pyspark.rdd.RDD'>, yet the two RDDs don't hold their elements in the same order even though I am using the basic saving and loading functions.