Calling show() (or similar actions) on a Spark 3.5.0 (PySpark) DataFrame with a column read from PostgreSQL as datatype bpchar throws an OutOfMemoryError. The error does not occur in earlier Spark versions (e.g., 3.4.1).
A similar issue is reported at https://www.mail-archive.com/[email protected]/msg80734.html
Code to reproduce:
CREATE TABLE test.test_table (id bpchar);
INSERT INTO test.test_table VALUES ('a'); -- presumably at least one row is needed, since the padding in the stack trace below runs per row
df = (
    spark.read.format("jdbc")
    .option("url", f"jdbc:postgresql://{host_db}:{db_port}/{dbname}")
    .option("driver", "org.postgresql.Driver")
    .option("query", "select id from test.test_table")
    .option("user", n)
    .option("password", s)
    .load()
)
df.show()
This results in:
ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
at org.apache.spark.unsafe.types.UTF8String.rpad(UTF8String.java:880)
at org.apache.spark.sql.catalyst.util.CharVarcharCodegenUtils.readSidePadding(CharVarcharCodegenUtils.java:62)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:43)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:388)
at org.apache.spark.sql.execution.SparkPlan$$Lambda$2449/0x0000000801093040.apply(Unknown Source)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:890)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:890)
at org.apache.spark.rdd.RDD$$Lambda$2450/0x0000000801094040.apply(Unknown Source)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
at org.apache.spark.scheduler.Task.run(Task.scala:141)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
at org.apache.spark.executor.Executor$TaskRunner$$Lambda$2410/0x000000080105b840.apply(Unknown Source)
at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
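Until this is fixed, a possible workaround (a sketch only, not verified against this bug) is to cast the bpchar column to text in the pushdown query so Spark reads it as a plain string and never hits the read-side CHAR padding (CharVarcharCodegenUtils.readSidePadding) seen in the stack trace. The variables (host_db, db_port, dbname, n, s) are the same as in the repro above:

# Workaround sketch; assumption: casting to text on the PostgreSQL side makes
# Spark infer StringType, so no read-side padding is generated.
df = (
    spark.read.format("jdbc")
    .option("url", f"jdbc:postgresql://{host_db}:{db_port}/{dbname}")
    .option("driver", "org.postgresql.Driver")
    .option("query", "select id::text as id from test.test_table")
    .option("user", n)
    .option("password", s)
    .load()
)
df.show()

# Alternatively (also an assumption that it applies to this code path), set the
# legacy config before reading so CHAR/VARCHAR columns are treated as plain
# strings, which disables read-side padding:
spark.conf.set("spark.sql.legacy.charVarcharAsString", "true")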