I’m running a PySpark job on AWS Glue, and it sometimes fails with an “Executor Lost” error. The job reads data from an S3 bucket, processes it, and writes the output back to another S3 location. The goal of the job is to remove the last partition and consolidate the data into a single Parquet file. Here is the relevant part of my code:
import sys
from awsglue.job import Job
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from pyspark.sql import SparkSession
from pyspark.conf import SparkConf
from pyspark.sql.functions import col
def main():
    args = getResolvedOptions(sys.argv, ['JOB_NAME'])

    sc = SparkContext()
    glueContext = GlueContext(sc)
    job = Job(glueContext)
    job.init(args['JOB_NAME'], args)
    spark = glueContext.spark_session

    input_path = "s3://your-bucket-name/your-input-directory/"

    # Read all source partitions, merging schemas, and drop the partition column
    df = (
        spark.read
        .option("mergeSchema", "true")
        .parquet(input_path)
        .drop("year_month")
    )

    # Repartition by the remaining keys and write the consolidated output
    (
        df.repartition("key1", "key2", "key3")
        .write
        .mode("overwrite")
        .partitionBy("key1", "key2", "key3")
        .option("compression", "snappy")
        .parquet("s3://your-bucket-name/your-output-directory/")
    )

    job.commit()

if __name__ == '__main__':
    main()
Error Details:
The job fails with the following error message:
Lost executor 51 on 10.30.46.91: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
The job takes about an hour either to fail or to succeed, and in both cases the log shows several of these messages. The source dataset is roughly 10 GB across ~1,250 partitions, with file sizes ranging from 4 MB to 20 MB. The output dataset has 5 partitions, with output files of ~70 MB. The underlying DataFrame has ~70k columns, with a 20-year daily time series as its row index.
Also, all workers seem to be used only at the beginning of the job; most are then removed, and only one is kept until the end of the job.
What I’ve Tried:
- Adjusting the number of partitions on repartitioning.
- Using coalesce and removing mergeSchema (sketched after this list)
- Ensuring that the data is properly partitioned and sorted.
- Using the G.2X worker type with up to 20 workers
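For reference, the coalesce variant (without mergeSchema) looked roughly like this. It is only a sketch of what I tried, and the coalesce(5) value is just one of the partition counts I experimented with; spark and input_path are the same as in the job above:

    # Same read/write as above, but without mergeSchema and with coalesce instead of repartition
    df = (
        spark.read
        .parquet(input_path)
        .drop("year_month")
    )

    (
        df.coalesce(5)  # reduce the DataFrame to 5 partitions before writing
        .write
        .mode("overwrite")
        .partitionBy("key1", "key2", "key3")
        .option("compression", "snappy")
        .parquet("s3://your-bucket-name/your-output-directory/")
    )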
Job Objective:
The primary objective of this job is to remove the last partition and consolidate the data into a single Parquet file. This involves reading the data from the source S3 bucket, dropping the ‘year_month’ column, repartitioning the data based on three other keys, and then writing the consolidated data back to a different S3 location.
Questions:
- What could be causing the “Executor Lost” error in this context?
- How can I optimize the job to handle large data processing without losing executors?
- Are there specific configurations in AWS Glue or PySpark that I should consider adjusting?
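For the last question, these are the kinds of settings I have been looking at. The values below are placeholders rather than settings I have validated, and I understand some of them may be constrained by the Glue worker type:

    # Hypothetical overrides (placeholder values) that could be passed via the
    # --conf job parameter or set before creating the SparkContext
    conf = SparkConf()
    conf.set("spark.sql.shuffle.partitions", "40")               # shuffle parallelism
    conf.set("spark.sql.parquet.mergeSchema", "false")           # avoid per-file schema merging
    conf.set("spark.sql.files.maxPartitionBytes", "134217728")   # 128 MB read splits
    sc = SparkContext(conf=conf)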
Any insights or suggestions would be greatly appreciated!