Amazon Exam DOP-C02 Topic 8 Question 34 Discussion

Actual exam question for Amazon's DOP-C02 exam
Question #: 34
Topic #: 8

A company releases a new application in a new AWS account. The application includes an AWS Lambda function that processes messages from an Amazon Simple Queue Service (Amazon SQS) standard queue. The Lambda function stores the results in an Amazon S3 bucket for further downstream processing. The Lambda function needs to process the messages within a specific period of time after the messages are published. The Lambda function has a batch size of 10 messages and takes a few seconds to process a batch of messages.

As load increases on the application's first day of service, messages accumulate in the queue faster than the Lambda function can process them. Some messages miss the required processing timelines. The logs show that many messages in the queue contain invalid data. The company needs to meet the timeline requirements for messages that have valid data.

Which solution will meet these requirements?

Suggested Answer: D

Step 1: Reporting Failed Batch Items Configuring the Lambda function to report failed batch items (a partial batch response) means that only the messages that fail processing are returned to the queue, while the rest of the batch is deleted as usual. Invalid messages no longer force an entire batch to be retried, so valid messages keep meeting their processing timelines.

Step 2: Using an SQS Dead-Letter Queue (DLQ) Configuring a dead-letter queue (DLQ) for the main SQS queue ensures that messages with invalid data, or messages that repeatedly fail processing, are moved to the DLQ after the maximum number of receive attempts. This prevents such messages from clogging the queue and allows the system to focus on processing valid messages.

Action: Configure an SQS dead-letter queue for the main queue.

Why: A DLQ helps isolate problematic messages, preventing them from continuously reappearing in the queue and causing processing delays for valid messages.
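
As a rough illustration (not part of the question), the redrive policy could be attached with boto3. The queue names and the maxReceiveCount of 5 are assumptions chosen for this sketch:

import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue names, for illustration only.
main_queue_url = sqs.create_queue(QueueName="app-main-queue")["QueueUrl"]
dlq_url = sqs.create_queue(QueueName="app-dead-letter-queue")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# After 5 failed receives, a message is moved to the DLQ instead of
# being retried on the main queue indefinitely.
sqs.set_queue_attributes(
    QueueUrl=main_queue_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)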

Step 3: Maintaining the Lambda Function's Batch Size Keeping the current batch size allows the Lambda function to continue processing multiple messages at once. By addressing the failed items separately, there's no need to increase or reduce the batch size.

Action: Maintain the Lambda function's current batch size.

Why: Changing the batch size is unnecessary if the invalid messages are properly handled by reporting failed items and using a DLQ.
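
To show how reporting failed batch items works in practice, here is a minimal handler sketch. The bucket name and the JSON-based validation are placeholder assumptions; the {"batchItemFailures": [...]} response shape is the standard partial batch response for SQS event sources:

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-results-bucket"  # placeholder bucket name


def handler(event, context):
    # Collect the IDs of messages that fail so SQS retries only those,
    # not the whole batch of 10.
    batch_item_failures = []

    for record in event["Records"]:
        try:
            payload = json.loads(record["body"])  # raises on invalid data
            s3.put_object(
                Bucket=BUCKET,
                Key=f"results/{record['messageId']}.json",
                Body=json.dumps(payload),
            )
        except Exception:
            batch_item_failures.append({"itemIdentifier": record["messageId"]})

    return {"batchItemFailures": batch_item_failures}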

This corresponds to Option D: Keep the Lambda function's batch size the same. Configure the Lambda function to report failed batch items. Configure an SQS dead-letter queue.
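
Note that the partial batch response only takes effect if the SQS event source mapping enables ReportBatchItemFailures. A sketch with an assumed queue ARN and function name:

import boto3

lambda_client = boto3.client("lambda")

# Keep the batch size at 10 and enable partial batch responses so the
# handler's batchItemFailures list is honored. The ARN and function
# name below are placeholders.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:app-main-queue",
    FunctionName="message-processor",
    BatchSize=10,
    FunctionResponseTypes=["ReportBatchItemFailures"],
)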

Contribute your Thoughts:

Haydee
2 months ago
Haha, reporting failed batch items? Genius! That's the way to go. Keep it simple, keep it efficient.
upvoted 0 times
Yun
2 months ago
Transfer Acceleration on S3? Really? That's overkill. Just report the failed batch items and let the dead-letter queue handle the invalid data.
upvoted 0 times
Renea
1 month ago
A: Keeping the batch size the same and reporting failed items is the way to go.
upvoted 0 times
Jaime
1 month ago
B: Definitely, the dead-letter queue can handle the invalid data efficiently.
upvoted 0 times
Louisa
1 month ago
A: Yeah, I agree. Transfer Acceleration seems unnecessary. Just report the failed batch items.
upvoted 0 times
Carri
2 months ago
Reducing the batch size? That's just going to slow things down even more. Definitely need to increase concurrency and set up that dead-letter queue.
upvoted 0 times
Dahlia
1 month ago
D) Keep the Lambda function's batch size the same. Configure the Lambda function to report failed batch items. Configure an SQS dead-letter queue.
upvoted 0 times
Mariko
1 month ago
A) Increase the Lambda function's batch size. Change the SQS standard queue to an SQS FIFO queue. Request a Lambda concurrency increase in the AWS Region.
upvoted 0 times
Simona
2 months ago
B) Reduce the Lambda function's batch size. Increase the SQS message throughput quota. Request a Lambda concurrency increase in the AWS Region.
upvoted 0 times
Alesia
2 months ago
I also think option D is the way to go. It's important to handle invalid data efficiently to meet the processing timeline.
upvoted 0 times
Juliann
3 months ago
I agree with you, Clemencia. Reporting failed batch items will help ensure valid messages are processed on time.
upvoted 0 times
Clemencia
3 months ago
I think option D is the best solution.
upvoted 0 times
Mila
3 months ago
The batch size increase is a good idea, but the FIFO queue change could cause more issues. I'd go with the dead-letter queue option to handle the invalid data.
upvoted 0 times
Carmelina
1 month ago
B: Definitely. Using a dead-letter queue will help with that.
upvoted 0 times
Cristy
2 months ago
A: It's important to handle the invalid data properly to meet the processing timeline requirements.
upvoted 0 times
Rikki
2 months ago
B: Yeah, I agree. Reporting failed batch items will help ensure that valid messages are processed on time.
upvoted 0 times
Hildegarde
2 months ago
A: I think option D is the way to go. Keeping the batch size the same and using a dead-letter queue for invalid data sounds like a good plan.
upvoted 0 times
Junita
2 months ago
I agree, but changing to a FIFO queue might not be the best idea. I think using a dead-letter queue for invalid data is a better solution.
upvoted 0 times
Janine
3 months ago
I think increasing the batch size could help with the load issue.
upvoted 0 times
