I have a table in Oracle that contains 1000 columns and 1 row. I am trying to copy this table to HDFS with PySpark, but the job fails with the error "Container marked as failed. Exit code is 143". When I limit the read to 200 columns, it works. Is there a limit on the number of columns that can be read? I use the jdbc format to connect to Oracle.
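For reference, when I say I limit the columns to 200, I just build the query from a subset of the column names, roughly like this (simplified sketch; all_columns, cols_200 and the table name are placeholders):

all_columns = ["COL1", "COL2"]   # placeholder: in reality the ~1000 column names of the Oracle table
cols_200 = all_columns[:200]     # keep only the first 200 columns
query = "(SELECT {} FROM MY_SCHEMA.MY_TABLE) t".format(", ".join(cols_200))

My session settings and the read: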
spark.conf.set("spark.dynaminAllocation.maxExecutors", "50")
spark.conf.set("spark.executor.cores", "4")
spark.conf.set("spark.executor.memory", "15G")
spark.conf.set("spark.executor.memoryOverhead", "30G")
df = (spark.read.format("jdbc")
.option("dbtable", query)
.option("customSchema", schema)
.option("fetchsize", str(1_000_000))
.option("partitionColumn", PARTITION_COLUMN)
.option("numPartition", 10)
.option("lowerBound", 1)
.option("upperBound", 10)
.load()
)
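The DataFrame is then written to HDFS, roughly like this (minimal sketch; the Parquet format and the output path are just examples of what I do):

df.write.mode("overwrite").parquet("hdfs:///user/me/my_table")   # example HDFS path

This completes when the read is limited to 200 columns, but with all 1000 columns the containers are killed with exit code 143.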