I have a Python app that’s streaming logs to a third-party service (in this case, it’s an AWS S3 bucket).
My simplified code looks like this:
import logging

import boto3


class S3Handler(logging.StreamHandler):
    def __init__(self, bucket_name, key):
        super().__init__()
        self.bucket_name = bucket_name
        self.key = key
        self.s3_client = boto3.client('s3',
                                      endpoint_url=...,
                                      aws_access_key_id=...,
                                      aws_secret_access_key=...)

    def emit(self, record):
        # Format the record and upload it as the body of the S3 object
        try:
            log_entry = self.format(record).encode('utf-8')
            self.s3_client.put_object(Bucket=self.bucket_name, Key=self.key,
                                      Body=log_entry, ACL="public-read")
        except Exception as e:
            print(f"Error while logging to S3: {str(e)}")
s3_handler = S3Handler(bucket_name='mybucketname', key='path/to/logfile.txt')
logging.getLogger().addHandler(s3_handler)
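For context, the rest of the app just uses ordinary logging calls, along these lines (the logger name and the message here are placeholders for illustration, not my real code):

import logging

logger = logging.getLogger("myapp")           # placeholder logger name
logger.warning("Something worth logging")     # each such call ends up in S3Handler.emit()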
The goal is that when I run my script, all logs are saved in the S3 bucket. The script can run for 30 seconds or for 10 hours.
When I run this code, I immediately get this error:
Error while logging to S3: maximum recursion depth exceeded while calling a Python object
After some googling, I found out that the reason behind this error is that Python's default recursion limit of 1000 is being exceeded. There is a way to increase this limit (sketched below), but I don't find it practical or feasible for this particular case.
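This is the workaround I mean by increasing the limit; it's just a sketch, and the value 10000 is an arbitrary number I picked for illustration:

import sys

# The default recursion limit is 1000; raising it only gives the handler more headroom
sys.setrecursionlimit(10000)

Even with a higher limit, I'd presumably just hit the new ceiling later, which is why I don't consider this a real fix.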
Apparently, my approach to streaming the script's logs to an S3 bucket is not ideal. What are my options for resolving this problem?