I want to quickly find all values and columns in a PySpark DataFrame where the data contains whitespace (any space, tab, or newline character) or any other space-like character (such as an ASCII NUL). It looks like one way to find these is to just fix them, but that doesn't give a count or show where the problem is.
I am using DATABRICKS_RUNTIME_VERSION: 11.3.
I created a DataFrame with spaces in the column header names and in the data, then tried using a regex to find those specific rows. This looked promising for one column, but when I tried combining columns, the information returned was a mix of cells that have spaces or other whitespace and cells that don't.
Note that using "not rlike 's'" (i.e., not like whitespace) showed me the right data in this first attempt, but I don't understand why it shouldn't be "rlike 's'".
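For concreteness, this is roughly the character class I have in mind; treating the ASCII NUL and the non-breaking space as "space-like" is my own assumption:

import re

# Working definition of "whitespace or space-like" (my assumption):
# \s matches space, tab, newline, carriage return, form feed, vertical tab;
# \x00 (ASCII NUL) and \u00a0 (non-breaking space) are added explicitly.
SPACE_LIKE = re.compile(r"[\s\x00\u00a0]")

print(bool(SPACE_LIKE.search("nowhitespace")))  # False
print(bool(SPACE_LIKE.search(" 98117")))        # True (leading space)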
from pyspark.sql import functions as F

# Columns ` e` and `f ` have whitespace in their names; the data mixes
# clean values with leading, trailing, and embedded whitespace.
x_df = spark.createDataFrame(
    [('okay', '', ' ', 'epw'),
     ('okay', ' ', 'n', 'ldh'),
     ('okay', ' this', 'is', 'one '),
     ('okay', 't wo', ' lo ll ', 'th ree '),
     ('nowhitespace', 'nowhitespace', 'nowhitespace', 'nowhitespace'),
     (' 98117', '92667', '00 101', '21100 ')],
    ['no_ws', 'd', ' e', 'f '])
xb_df = x_df.withColumn('raisin', F.lit(None).cast('string'))
xb_df.createOrReplaceTempView("xb_df")  # returns None, so no assignment needed

sql_rlike_df = spark.sql("""select ` e` from xb_df where ` e` not rlike 's'""")
display(sql_rlike_df)
[Screenshot of the first DataFrame, xb_df, omitted]
sql_rlike_df above returns 3 rows from column ` e`, only the rows with whitespace. The row with "is" is not returned, and this is correct.
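Ideally I would also get a count. I assume the same predicate can simply be wrapped in a count; this is a sketch, and sql_count_df is my own name:

# Count the rows flagged by the same predicate used above (sketch).
sql_count_df = spark.sql("""select count(*) as n_rows
                            from xb_df
                            where ` e` not rlike 's'""")
display(sql_count_df)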
In order to include the column headers in the check, I stripped them out into a new DataFrame with its own schema and then unioned that onto the DataFrame above. The new headers are "good" (where there is no whitespace in any column) and t1 through t4. This is tedious, but it was the only way I could find to include the column names, and my DataFrame is small. Finally, I filter the combined DataFrame using the whitespace regex that worked on the single column above.
from pyspark.sql.types import StructType, StructField, StringType

# Turn the header names into a one-row DataFrame so they can be checked too.
col1_lis = xb_df.columns
col_test_s_schema = StructType([
    StructField("good", StringType(), True),
    StructField("t1", StringType(), True),
    StructField("t2", StringType(), True),
    StructField("t3", StringType(), True),
    StructField("t4", StringType(), True),
])
col_test_df = spark.createDataFrame(data=[col1_lis], schema=col_test_s_schema)

# union() keeps the column names of the first DataFrame (good, t1..t4).
all_vals_to_check = col_test_df.union(xb_df)
check_vals = all_vals_to_check.where("""t1 not rlike 's' OR
                                        t2 not rlike 's' OR
                                        t3 not rlike 's' OR
                                        t4 not rlike 's' OR
                                        good not rlike 's'""")
display(check_vals)
After the final filter, this is what I get (screenshot of check_vals):
This shows all the columns, even though one column (good) had no whitespace in any cell. The version you see here has a final row added, and now there is whitespace (a leading space) in front of 98117.
It shows all the rows except the one that has no whitespace in any cell (every value either "nowhitespace" or null).
What it doesn't do is return only the offending cells. For example, the t1 cell containing "d" has no whitespace, but the ` e` and `f ` cells do, and it is not possible to see that here unless you look at how the DataFrame was made above.
So if I were to count these rows, it would not be an accurate count of cells with whitespace.
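To show the kind of result I am after, here is a sketch of a per-column count of cells containing whitespace; r"\s" is my assumption of the right pattern, ws_counts is my own name, and this only covers the data, not the header names:

# Count, per column, the cells that match the whitespace regex (sketch).
# when() returns True only for matching cells; count() ignores the nulls
# produced for non-matching (or null) cells, giving a per-column cell count.
ws_counts = xb_df.select(
    [F.count(F.when(F.col(c).rlike(r"\s"), True)).alias(c)
     for c in xb_df.columns]
)
display(ws_counts)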
How is this usually done? Just to reiterate, the goal is to identify the whitespace in PySpark, not remove it.