I am trying to save the same DataFrame to two different directories.
print(out_path)
s3://.../out/2012-02/
print(curr_repo_path)
s3://.../consolidate_repo_hist/
new_consolidated_repo.write.mode("overwrite").parquet(out_path) # this code works
new_consolidated_repo.write.mode("overwrite").parquet(curr_repo_path) # this does not.
If I interchange the order, whichever write statement is executed first works, and the statement executed later fails.
The error looks like this:
An error was encountered:
An error occurred while calling o612.parquet.
: org.apache.spark.SparkException: Job aborted...
.
.
.
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 147 in stage 865.0 failed 4 times, most recent failure: Lost task 147.3 in stage 865.0 (TID 15997) (ip-10-16-0-36.us-west-2.compute.internal executor 59):
org.apache.spark.sql.execution.datasources.FileDownloadException: Failed to download file path: s3://.../consolidate_repo_hist/part-00385-9f605c56-0f6f-449c-b140-f1f2929d076e-c000.snappy.parquet
I do not understand why it tries to download data when I am trying to write. It does delete the existing directory before writing, but then it throws this error during the write itself. If the existing directory is empty, the code works properly. So there are two necessary conditions for a write statement to fail: it must be the second write statement executed, and its destination must not be empty.
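For context, here is a minimal sketch of the flow I suspect is relevant. The bucket names and the simplified transformation are placeholders I made up for illustration; the key assumption is that new_consolidated_repo is a lazy plan derived from the parquet files already sitting under curr_repo_path, which is what the failed-download path in the stack trace seems to point at:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder paths; the real buckets are redacted above.
curr_repo_path = "s3://my-bucket/consolidate_repo_hist/"
out_path = "s3://my-bucket/out/2012-02/"

# Assumption: new_consolidated_repo is built lazily on top of the
# parquet files that already exist under curr_repo_path.
curr_repo = spark.read.parquet(curr_repo_path)
new_consolidated_repo = curr_repo  # the real code merges in more data here

# Each write action re-executes the lazy plan, i.e. re-reads curr_repo_path.
new_consolidated_repo.write.mode("overwrite").parquet(out_path)        # works
new_consolidated_repo.write.mode("overwrite").parquet(curr_repo_path)  # fails: overwrite clears the
                                                                       # same files the plan reads from

If that assumption about where new_consolidated_repo comes from is wrong, I can add the actual transformations.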
I think my question may be slightly related to this question:
Spark Scala EMR Job fails to download file from S3
However, I am not entirely sure.