PySpark: Why does [col(c).isNull().count() for c in df.columns] throw an error, while df.filter(col(c).isNull()).count() works?
I’m working with a PySpark DataFrame and trying to count the number of null values in each column. I tried the following expression:
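```python
from pyspark.sql.functions import col

null_counts = [col(c).isNull().count() for c in df.columns]
```

This fails (for me with `TypeError: 'Column' object is not callable`), while `df.filter(col(c).isNull()).count()` for a single column works fine. My guess is that `col(c).isNull()` only builds an unevaluated `Column` expression, and `count()` is a DataFrame method rather than a `Column` method, so the attribute access gets interpreted as a struct-field lookup instead; I'd like to confirm that. As a workaround I can get per-column null counts by routing the aggregation through the DataFrame API; a minimal sketch with made-up sample data:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Made-up sample data with some nulls in each column
df = spark.createDataFrame(
    [(1, None), (None, "a"), (3, "b")],
    ["x", "y"],
)

# count() only counts non-null values, so count(when(cond, 1))
# counts exactly the rows where cond is true
null_counts = df.select(
    [F.count(F.when(F.col(c).isNull(), 1)).alias(c) for c in df.columns]
)
null_counts.show()
```

So why does the list comprehension version throw, while the filter and select versions work?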
PySpark DataFrame not returning rows with values of more than 8 digits
I have created a sample DataFrame in PySpark, and the ID column contains a few values with more than 8 digits. But my filter returns only the rows whose ID has fewer than 8 digits. Can anyone suggest how to write this properly so that all rows matching the condition are returned?
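Since the failing code isn't shown, here is a guess at a fix: if the ID column ends up as a 32-bit IntegerType (or the filter compares against a numeric value), large IDs can overflow or be coerced rather than matched by digit count. A minimal sketch that filters on the digit count explicitly, with made-up sample values and the column name `ID` taken from the question:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Made-up sample data; Python ints are inferred as LongType,
# so 9+ digit values are not truncated the way an explicit
# 32-bit IntegerType schema could truncate them
df = spark.createDataFrame(
    [(12345678,), (123456789,), (9876543210,), (1234,)],
    ["ID"],
)

# Filter on the digit count itself by casting to string,
# so the comparison cannot overflow or be type-coerced
df.filter(F.length(F.col("ID").cast("string")) > 8).show()
```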