Below is my attempt at writing a PySpark DataFrame as a Delta table.

It fails with an AnalysisException saying that I am trying to write column names containing special characters. I assumed column mapping would handle that, but it does not, so I suspect this code fails to actually enable column mapping.

The documentation only shows how to do this in SQL (I have included that snippet after my code for reference); there is no PySpark version. How do I enable column mapping using PySpark?
(
    df.write.format("delta")
    .option("delta.columnMapping.mode", "name")
    .option("mergeSchema", "true")
    .mode("overwrite")
    .save("/mnt/some/path/table.delta")
)
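For reference, this is the SQL form from the Delta documentation that I am trying to translate to PySpark (the table name here is a placeholder):

ALTER TABLE <table_name> SET TBLPROPERTIES (
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5',
  'delta.columnMapping.mode' = 'name'
)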