How can I resolve an out-of-memory error in an AWS Lambda that fetches large amounts of data from OpenSearch, without increasing the Lambda's memory?
I have a Lambda on AWS that fetches data from OpenSearch and saves it to an S3 bucket. Most of the time the data is small, but some users have large data sets, and for those the Lambda runs out of memory and fails. Is it a good approach to fetch the data with pagination and upload it to S3 as a stream? Or do you have another idea for how to resolve this?
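For what it's worth, here is a minimal sketch of the paginate-and-stream idea, kept library-agnostic so the memory behavior is visible: a generator pulls one page at a time through an injected `fetch_page` callable (in practice this would be an OpenSearch `search_after` or scroll query), and a second generator buffers newline-delimited JSON into chunks sized for S3 multipart parts (each part except the last must be at least 5 MiB). The names `fetch_page`, `paginate`, and `stream_to_parts` are my own, hypothetical ones, not from any library:

```python
import json
from typing import Callable, Iterator

# S3 multipart uploads require every part except the last to be >= 5 MiB.
PART_SIZE = 5 * 1024 * 1024


def paginate(fetch_page: Callable[[int, int], list],
             page_size: int = 1000) -> Iterator[dict]:
    """Yield documents page by page so only one page is in memory at once.

    `fetch_page(offset, limit)` is an assumed callable wrapping your
    OpenSearch query; it returns an empty list when exhausted.
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:
            break
        yield from page
        offset += page_size


def stream_to_parts(docs: Iterator[dict],
                    part_size: int = PART_SIZE) -> Iterator[bytes]:
    """Buffer newline-delimited JSON and yield it in part-sized chunks."""
    buf = bytearray()
    for doc in docs:
        buf += json.dumps(doc).encode() + b"\n"
        if len(buf) >= part_size:
            yield bytes(buf)
            buf.clear()
    if buf:  # final, possibly undersized part
        yield bytes(buf)
```

Each chunk yielded by `stream_to_parts` can then be sent with boto3's `create_multipart_upload` / `upload_part` / `complete_multipart_upload`, so peak memory stays around one page plus one part regardless of total result size. (With deep result sets, OpenSearch's `search_after` is preferable to offset-based paging, which gets slow and is capped by `index.max_result_window`.)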