The general setup for this question is best described in the following article: Solving Complex Ordering Challenges with Amazon SQS FIFO Queues.
The concept is that we are working with an auctioning system that processes bids on various auctions. Bids come into a FIFO queue and are grouped by their associated auction, so that bids for each auction are processed in sequence. We’ll say that the messages are consumed by a Lambda function with a batch size of 5. Additionally, a dead-letter queue is set up to capture any message that is received 5 times without being processed successfully. Each bid relies on the bid before it in the same auction being processed, so the Lambda will fail on bid #3 if bid #2 was never processed.
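For reference, the setup looks roughly like the following boto3 sketch. The queue names and the function name are placeholders, not the real ones:

```python
import json

import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Dead-letter queue; a FIFO queue's DLQ must itself be FIFO.
dlq = sqs.create_queue(QueueName="bids-dlq.fifo", Attributes={"FifoQueue": "true"})
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: after 5 receives without a successful delete,
# a message moves to the DLQ.
queue = sqs.create_queue(
    QueueName="bids.fifo",
    Attributes={
        "FifoQueue": "true",
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        ),
    },
)
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Consume in batches of 5 with partial batch responses enabled.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="process-bids",  # placeholder function name
    BatchSize=5,
    FunctionResponseTypes=["ReportBatchItemFailures"],
)
```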
With regard to error handling, AWS recommends that, when using a FIFO queue, your Lambda function should “stop processing messages after the first failure and return all failed and unprocessed messages… This helps preserve the ordering of messages in your queue.”
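Implemented with partial batch responses (`ReportBatchItemFailures`), that recommendation comes out roughly like this; `process_bid` here stands in for the real bid-processing logic:

```python
def process_bid(record):
    # Placeholder for the real bid logic; raises on failure.
    ...

def handler(event, context):
    batch_item_failures = []
    for i, record in enumerate(event["Records"]):
        try:
            process_bid(record)
        except Exception:
            # First failure: return this message and every message after it,
            # processed or not, so in-queue ordering is preserved.
            batch_item_failures = [
                {"itemIdentifier": r["messageId"]} for r in event["Records"][i:]
            ]
            break
    return {"batchItemFailures": batch_item_failures}
```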
My question is then this: What happens when a valid message repeatedly lands in a batch alongside messages that fail? AWS recommends failing the entire batch, so even the valid message goes back to the queue, and if this happens enough times, the valid message is sent to the DLQ along with the genuinely failing ones. And what if this causes subsequent messages in the same group to fail, creating a cascading effect? Is there a ‘recommended’ way to address this? Can we specify that a batch should only ever contain messages from the same message group?
Example Scenario:
-
The queue has the following messages/bids. The letter denotes which auction/message group the bid belongs to:
<Oldest — Newest>
A1, B1, A2, A3, B2, B3, B4, B5
-
The queue releases the first batch of 5 to the lambda processor (see ‘Receiving messages’ for how this batch is created):
A1, A2, A3, B1, B2
-
Let’s say A2 fails because of some data issue within the message itself, so all failed and unprocessed messages are returned to the queue. Processing is then attempted 4 more times with the same batch, failing each time because of A2, so all messages in our batch are moved to the DLQ. This leaves the following messages in the queue:
B3, B4, B5
-
The queue then releases the remaining messages in a batch to be processed. However, because B2 was never processed, B3 will always fail, so we end up with almost all of our messages in the DLQ just because A2 had bad data.
(This scenario is slightly simplified. A1 would be successfully processed and not returned in the first attempt, and B3 would be added to the batch in the second attempt.)
I created a small POC in AWS just to verify that this situation could occur, and it did on my first attempt.
In the actual implementation for the system, we implemented partial batch responses in a slightly different way than AWS recommends. Instead of returning the rest of the batch after one failure, a message is returned only if a previous message in its own message group has failed. If the batch contains messages from other groups, processing is still attempted for those messages. Applied to the example scenario, this would leave only A2 and A3 in the DLQ, with the other messages successfully processed. A sketch of this handler is below. This approach works, but I am not sure whether it is ‘good practice’, and the whole situation still makes me question AWS’s recommendation for implementing partial batch responses from FIFO queues.
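A minimal sketch of that per-group variant, again with `process_bid` standing in for the real logic:

```python
def process_bid(record):
    # Placeholder for the real bid logic; raises on failure.
    ...

def handler(event, context):
    failed_groups = set()  # auctions with a failed bid in this batch
    batch_item_failures = []
    for record in event["Records"]:
        group_id = record["attributes"]["MessageGroupId"]
        if group_id in failed_groups:
            # An earlier bid in this auction failed, so skip this one and
            # return it to the queue to preserve per-auction ordering.
            batch_item_failures.append({"itemIdentifier": record["messageId"]})
            continue
        try:
            process_bid(record)
        except Exception:
            failed_groups.add(group_id)
            batch_item_failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": batch_item_failures}
```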
Any insights would be much appreciated!