With AQE enabled, I always see 1000 shuffle partitions (i.e., 1000 tasks) created for my Spark jobs. Whether I run the job over one week of data or a month (four weeks), the shuffle partition count is the same 1000, and the jobs are failing with memory errors. Is there a parameter that is capping the partitions at 1000, and how can I change it?
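For context, here is a sketch of the Spark 3.x settings that typically govern the shuffle partition count under AQE. The values shown are illustrative assumptions about what might be producing a fixed 1000-task shuffle in my setup, not a known-good configuration:

```properties
# Enable Adaptive Query Execution.
spark.sql.adaptive.enabled=true

# Static shuffle partition count (Spark's default is 200); if this is
# set to 1000 and coalescing is off, every shuffle gets 1000 tasks.
spark.sql.shuffle.partitions=1000

# AQE starts from this many partitions before coalescing; a value of
# 1000 here could also explain a fixed 1000-task shuffle stage.
spark.sql.adaptive.coalescePartitions.initialPartitionNum=1000

# Must be true for AQE to merge small partitions after a shuffle.
spark.sql.adaptive.coalescePartitions.enabled=true

# Advisory target size per coalesced partition (default 64m).
spark.sql.adaptive.advisoryPartitionSizeInBytes=64m
```

Which of these (or some other setting) determines the hard 1000-partition count I am seeing?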