We had a situation where spawning 3000+ child jobs with `perform_async` inside a `batch.jobs` block caused all the child jobs to "bunch up" at the front of the queue and prevent other jobs from executing. The pseudo-code looks like:
```ruby
batch.jobs do
  # in_batches yields ActiveRecord relations; pass IDs so the
  # job arguments stay JSON-serializable
  orders.in_batches do |order_batch|
    OrdersWorker.perform_async(order_batch.ids)
  end
end
```
We solved it by changing `perform_async` to `perform_in`:
```ruby
batch.jobs do
  orders.in_batches.each_with_index do |order_batch, index|
    # Stagger each child job by 5 seconds so they trickle onto the queue
    OrdersWorker.perform_in(index * 5.seconds, order_batch.ids)
  end
end
```
Is this the correct approach? Or is there a more canonical solution?
Our change turned all the child jobs into scheduled jobs, which are only moved onto the queue when their scheduled time arrives. This allowed other standalone jobs to be enqueued and executed in the meantime.
It solved our immediate problem, but I am not certain it is the right solution, and I am looking for a better one if it exists.
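A quick back-of-envelope on the stagger (pure Ruby; the 3000 child jobs and 5-second spacing match the numbers above):

```ruby
jobs    = 3000
spacing = 5                           # seconds between scheduled jobs
last_offset = (jobs - 1) * spacing    # delay of the final child job
puts last_offset                      # 14995 seconds
puts format('%.2f', last_offset / 3600.0)  # ~4.17 hours before the last child even enqueues
```

So with this approach the batch cannot finish for roughly four hours, regardless of how fast the workers are.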
We were on the Pro subscription when the above happened, and now we’re on Enterprise. Will Enterprise’s rate limiting help in this case?
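For context, Enterprise's rate limiting is applied inside the worker rather than at enqueue time. A minimal sketch using the documented `Sidekiq::Limiter.concurrent` API (the limiter name, concurrency of 10, and timeout values are made-up placeholders; `Sidekiq::Limiter` ships only with the commercial Sidekiq Enterprise gem):

```ruby
require 'sidekiq'

class OrdersWorker
  include Sidekiq::Worker

  # Hypothetical limiter: at most 10 jobs may run this section concurrently.
  LIMITER = Sidekiq::Limiter.concurrent('orders', 10,
                                        wait_timeout: 5, lock_timeout: 60)

  def perform(order_ids)
    LIMITER.within_limit do
      # process the batch of orders here
    end
  end
end
```

Note this throttles execution, not enqueueing, so the 3000 child jobs would still sit on the queue.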