I have an issue with the logs from my AWS Glue jobs being corrupted.
The logs are emitted from Glue Python scripts and go to CloudWatch. I recently started emitting structured JSON logs using AWS Embedded Metric Format (EMF), and noticed that the logs were being corrupted: log statements larger than 1024 bytes are split into multiple CloudWatch log entries, while smaller log statements are sometimes grouped into a single CloudWatch log entry. This obviously breaks the JSON messages and confuses other log analysis queries.
The issue seems to appear when many log messages are emitted in quick succession. Early in one log stream I saw an unsplit entry containing a large (~6000 byte) JSON payload, but once a lot of lines were printed in rapid succession everything devolved into 1024-byte chunks.
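For context, each EMF statement is a single-line JSON blob printed to stdout, roughly like the sketch below (the namespace, dimension, and metric names are made up for illustration). If a line like this is cut at 1024 bytes, neither fragment is valid JSON anymore.

```python
import json
import time

# Minimal EMF-shaped payload; all names here are made up for illustration.
# The whole blob has to arrive in CloudWatch as one log event to stay parseable.
emf_event = {
    "_aws": {
        "Timestamp": int(time.time() * 1000),
        "CloudWatchMetrics": [
            {
                "Namespace": "MyGlueJob",
                "Dimensions": [["JobName"]],
                "Metrics": [{"Name": "RecordsProcessed", "Unit": "Count"}],
            }
        ],
    },
    "JobName": "example-job",
    "RecordsProcessed": 12345,
}

print(json.dumps(emf_event))  # one line on stdout; splitting it breaks the JSON
```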
The jobs are Spark 3.3 jobs written in Python 3.9 and running on Glue 4.0. The affected logs are the ones produced by the Python script, not the ones produced by Spark.
I did some testing and wrote a script that does nothing except initialize Glue and then emit log messages using either Python's print function, a Python logger, or the logger returned by calling get_logger() on a GlueContext. My conclusion is that either the CloudWatch agent on the host where the Python script runs is misconfigured, or some intermediary process between the running script and the CloudWatch agent is.
The script I used for testing is the following: https://gist.github.com/mhvelplund/961c8868fbfdf857dcd0a623a353870b
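In rough outline it does something like this (a simplified sketch, not the exact contents of the gist; the --TYPE and --DELAY parameters are the ones referenced below):

```python
import sys
import time
import logging

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# --TYPE selects the output mechanism, --DELAY is a pause in milliseconds
# inserted before each output statement.
args = getResolvedOptions(sys.argv, ["TYPE", "DELAY"])
output_type = args["TYPE"]
delay_ms = int(args["DELAY"])

glue_context = GlueContext(SparkContext.getOrCreate())
glue_logger = glue_context.get_logger()

logging.basicConfig(level=logging.INFO)
py_logger = logging.getLogger(__name__)

for i in range(100):
    if delay_ms:
        time.sleep(delay_ms / 1000.0)
    message = f"test message {i}"
    if output_type == "glue":
        glue_logger.info(message)   # ends up in the Spark driver log stream
    elif output_type == "log":
        py_logger.info(message)     # standard Python logging
    else:
        print(message)              # raw print to stdout
```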
Running the script with continuous logging enabled and using the Glue logger (--TYPE=glue), everything works fine, but the logs go to the same log stream as the Spark logs, which contain a lot of Java noise.
Running with a Python logger (--TYPE=log) or with print statements (--TYPE=print) is where the problems arise. Log lines are grouped or split seemingly arbitrarily, and not necessarily in the same way from one run to the next, which suggests the issue is timing related. Using a version of the script without the output delegation, just raw print statements, I was able to get every printed line combined into a single CloudWatch message.
Inserting a slight delay of as little as 100 ms (--DELAY=100) before each output statement made the grouping and splitting issues disappear.
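For reference, I pass those flags as ordinary job arguments when starting test runs, e.g. via boto3 (the job name here is a placeholder):

```python
import boto3

glue = boto3.client("glue")

# Placeholder job name; the --TYPE/--DELAY arguments show up in the script's sys.argv.
glue.start_job_run(
    JobName="log-corruption-test",
    Arguments={"--TYPE": "print", "--DELAY": "100"},
)
```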
Unfortunately, switching to the Glue logger is not a good solution, since my real scripts contain legacy code that uses Python loggers and raw print statements, and changing all of that would be painful.
Has anyone encountered this problem with CloudWatch or Glue before, and if so, how did you solve it without monkeypatching sys.stdout?
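For clarity, the kind of monkeypatch I'd rather avoid is something along these lines: wrapping sys.stdout so every newline-terminated write is throttled, mirroring the --DELAY workaround without touching every call site.

```python
import sys
import time

class ThrottledStdout:
    """Wrap a stream and pause briefly after each newline-terminated write."""

    def __init__(self, stream, delay_seconds=0.1):
        self._stream = stream
        self._delay = delay_seconds

    def write(self, text):
        count = self._stream.write(text)
        self._stream.flush()
        if text.endswith("\n"):
            time.sleep(self._delay)  # same idea as --DELAY, applied globally
        return count

    def flush(self):
        self._stream.flush()

# Redirect print() and anything else writing to stdout through the wrapper.
sys.stdout = ThrottledStdout(sys.stdout)
```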