I wish to create a dataframe from a map such that the keys of the map are the column titles and the values of the map are the data itself. In Python and PySpark this can be done easily in numerous ways, typically with a single line of code, but in Scala I’m having serious trouble.
Below is the equivalent in python of what I’m trying to accomplish in Scala:
#In Python:
Example_Map_aka_Dictionary = {"Key 1": ["Value 1"],
                              "Key 2": [111111.1111],
                              "Key 3": [["Value_n"]]}
#Method 1:
import pandas as pd
pd.DataFrame(Example_Map_aka_Dictionary)
#Method 2:
import pyspark.pandas as ps
ps.DataFrame(Example_Map_aka_Dictionary)
#Method 3:
spark.createDataFrame(data=[[Value[0] for Value in Example_Map_aka_Dictionary.values()]], #Note the extra brackets []
schema=list(Example_Map_aka_Dictionary.keys()))
Note the extra brackets in Method 3: the outer brackets wrap what is really a single row. I think what could be happening is that I’m unable to recreate this double-bracket effect in Scala.
According to Tutorial… Step 2: Create a DataFrame, the format of the code below works, except I cannot figure out how to recreate Seq((...)) with that extra pair of parentheses (my reading of it follows the snippet):
//From https://docs.databricks.com/en/getting-started/dataframes.html#language-scala
val data = Seq((2021, "test", "Albany", "M", 42)) //notice the extra parentheses ().
val columns = Seq("Year", "First_Name", "County", "Sex", "Count")
val df1 = data.toDF(columns: _*)
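As far as I can tell, the inner parentheses build a tuple and each tuple becomes one row, so a two-row version would look like the sketch below (the second row is made up purely for illustration):
// My reading: each inner (...) is a tuple, and each tuple becomes one row.
val twoRowData = Seq(
  (2021, "test", "Albany", "M", 42),
  (2022, "test", "Albany", "F", 27)
)
val df2 = twoRowData.toDF(columns: _*)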
In case there’s something wrong with how I created the map itself, I’ve included how it was built below:
val Example_Data = Seq(("Big_Data_Value_1", 123.45, List("Big_Data_Value_n")))
val Example_Columns = Seq("Big_Data_Column_1", "Big_Data_Column_2", "Big_Data_Column_n")
val Example_df = Example_Data.toDF(Example_Columns: _*)
Example_df.show()
+-----------------+-----------------+------------------+
|Big_Data_Column_1|Big_Data_Column_2| Big_Data_Column_n|
+-----------------+-----------------+------------------+
| Big_Data_Value_1| 123.45|[Big_Data_Value_n]|
+-----------------+-----------------+------------------+
val Example_Map_after_Complicated_Operations = Example_df.columns
.map(Column_Title => Column_Title -> "Example String after If Statements. I'd like to script this as an org.apache.spark.sql.Column, but not even this string works")
.toMap
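For reference, the map that comes out of this is shaped like the following (value strings abbreviated; this is only meant to show the structure):
// Shape of the resulting map (values abbreviated):
// Map(
//   "Big_Data_Column_1" -> "Example String after If Statements. ...",
//   "Big_Data_Column_2" -> "Example String after If Statements. ...",
//   "Big_Data_Column_n" -> "Example String after If Statements. ..."
// )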
The way I see it, it would be more computationally efficient to build a dictionary and convert it into a dataframe than it would be to create a var df that’s an empty copy of Example_df and update it over and over in a loop with .withColumn (a sketch of that loop follows this paragraph). I know that the most recent Spark has a withColumns command, which would probably be amazing, but 1. I want to learn how to convert a map into a dataframe, because I’m sure I’ll need to in the future if not now, and 2. I literally can’t acquire the latest Spark, because cluster configurations are completely out of my control (yes, it’s a nightmare where I work, hence why I need to do whatever I can in Scala for the extra speed).
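For clarity, here is a rough sketch of the loop I want to avoid (it assumes the map values can be wrapped into Columns with lit):
import org.apache.spark.sql.functions.lit

// Rough sketch of the approach I want to avoid: one .withColumn call per key,
// each of which builds a whole new dataframe plan.
var Example_df_copy = Example_df
for ((Column_Title, New_Value) <- Example_Map_after_Complicated_Operations) {
  Example_df_copy = Example_df_copy.withColumn(Column_Title, lit(New_Value))
}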
As you can see below, it’s not so simple to recreate that .toDF call:
val Values_format_1 = Example_Map_after_Complicated_Operations.values // error: value toDF is not a member of Iterable[String]
val Values_format_2 = Seq(Values_format_1) // error: value toDF is not a member of Iterable[String]
val Values_format_3 = List(Values_format_1) // error: value toDF is not a member of List[Iterable[String]]
val Values_format_4 = Values_format_1.toList // IllegalArgumentException: requirement failed: The number of columns doesn't match.
val Values_format_5 = List(Values_format_4) // IllegalArgumentException: requirement failed: The number of columns doesn't match.
val Values_format_6 = List((Values_format_1)) // error: value toDF is not a member of List[Iterable[String]]
val Values_format_7 = ((Values_format_1)).toList // IllegalArgumentException: requirement failed: The number of columns doesn't match.
val Values_format_8 = (Values_format_1).toList
val Keys = Example_Map_after_Complicated_Operations.keys.toList
Values_format_8.toDF(Keys: _*)
IllegalArgumentException: requirement failed: The number of columns doesn't match.
Old column names (1): value
New column names (3): Big_Data_Column_1, Big_Data_Column_2, Big_Data_Column_n
I can’t decipher this error message either. How does it think there are “old” and “new” column names when a brand-new dataframe is being created? Where is the term “value” coming from?
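For what it’s worth, calling .toDF() with no names at all gives a single auto-named column, which is presumably where that value comes from (my own experiment), though I still don’t see how to get three columns out of it:
Values_format_8.toDF().printSchema()
// root
//  |-- value: string (nullable = true)
// i.e. one column named "value" holding three rows, so renaming it to three columns fails.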
Lastly, I’ve seen the following solution proposed all over the internet, but I fail to see its applicability, since it can only return the df in a two-column format (and, upon further research, it can’t practically be transposed either, because the operation is so inefficient that Spark itself will try to stop you once there would be more than 1000 columns; see the sketch after the output below):
val spark = SparkSession.builder.getOrCreate()
import spark.implicits._
val m = Map("A" -> 0.11164610291904906, "B" -> 0.11856755943424617, "C" -> 0.1023171832681312)
val df = m.toSeq.toDF("name", "score")
df.show
+----+-------------------+
|name| score|
+----+-------------------+
| A|0.11164610291904906|
| B|0.11856755943424617|
| C| 0.1023171832681312|
+----+-------------------+
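For completeness, the transpose I alluded to would look something like the pivot below (my own sketch). It works on this toy map, but it is exactly the expensive operation described above, and Spark caps the number of distinct pivot values:
import org.apache.spark.sql.functions.first

// Transpose sketch: pivot the 2-column dataframe so each key becomes a column,
// leaving a single row of scores. Inefficient for wide results.
val transposed = df.groupBy().pivot("name").agg(first("score"))
transposed.show()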