Is it possible to read a CSV file in PySpark with a larger schema?
I'm migrating CSV files to Parquet, and the schema of my CSV files has changed over time.
So I'm trying to read some CSV files that contain 12 columns using a schema that contains 20 columns.