
Amazon Exam DVA-C02 Topic 7 Question 36 Discussion

Actual exam question for Amazon's DVA-C02 exam
Question #: 36
Topic #: 7

A company built an online event platform. For each event, the company organizes quizzes and generates leaderboards based on the quiz scores. The company stores the leaderboard data in Amazon DynamoDB and retains the data for 30 days after an event is complete. The company then uses a scheduled job to delete the old leaderboard data.

The DynamoDB table is configured with a fixed write capacity. During the months when many events occur, the DynamoDB write API requests are throttled when the scheduled delete job runs.

A developer must create a long-term solution that deletes the old leaderboard data and optimizes write throughput.

Which solution meets these requirements?

Suggested Answer: A

- DynamoDB TTL (Time to Live): a native feature that automatically deletes items after a specified expiration time.

- Efficiency: eliminates the need for a scheduled deletion job, optimizing write throughput by avoiding potential throttling conflicts.

- Seamless integration: TTL works directly within DynamoDB, requiring minimal development overhead.

DynamoDB TTL documentation: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
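The suggested answer can be sketched as follows. This is a minimal illustration, not code from the question: the attribute name `expires_at` and the table name `Leaderboards` are assumptions, since the scenario does not name them.

```python
RETENTION_SECONDS = 30 * 24 * 60 * 60  # retain leaderboard data for 30 days

def ttl_for(event_end_epoch: int) -> int:
    """Unix epoch time (in seconds) after which DynamoDB may delete the item."""
    return event_end_epoch + RETENTION_SECONDS

def leaderboard_item(event_id: str, player: str, score: int,
                     event_end_epoch: int) -> dict:
    """Item payload for put_item; the 'expires_at' Number attribute drives TTL.

    DynamoDB TTL requires the expiration attribute to be a Number holding a
    Unix epoch timestamp in seconds. Key and attribute names are illustrative.
    """
    return {
        "event_id": {"S": event_id},
        "player": {"S": player},
        "score": {"N": str(score)},
        "expires_at": {"N": str(ttl_for(event_end_epoch))},
    }

# Enabling TTL is a one-time table setting (requires AWS credentials;
# table name is hypothetical):
# import boto3
# boto3.client("dynamodb").update_time_to_live(
#     TableName="Leaderboards",
#     TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
# )
```

Because TTL deletions happen in a background process and do not consume write capacity units, this removes exactly the throttling the scheduled bulk-delete job was causing.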

Contribute your Thoughts:

Bea (2 months ago): Option A is the clear winner here. TTL is built for this kind of use case. Set it and forget it, baby!
  - Cathrine (25 days ago): Yeah, it's definitely the easiest way to automatically delete old data and optimize write throughput.
  - Melissia (1 month ago): I agree, using a TTL attribute for the leaderboard data seems like the most efficient solution.

Norah (2 months ago): D? Really? Increasing write capacity just to accommodate a scheduled delete job? That's like using a sledgehammer to crack a nut.

Johnna (2 months ago): Hmm, I'm torn between B and C. Why not just use a serverless function triggered by a CloudWatch event? That's a simple yet effective solution.
  - Jesus (1 month ago): Yeah, that sounds like a reliable solution for the company's needs.
  - Angelica (1 month ago): I agree, it would help optimize write throughput and ensure old data is deleted in a timely manner.
  - Bok (2 months ago): I think using DynamoDB Streams to schedule and delete the leaderboard data is the best option.

Penney (2 months ago): I'd say C is the best choice. Step Functions can handle the scheduling and orchestration of the deletion process more robustly.
  - Tonja (28 days ago): Setting a higher write capacity might help with throttling, but it doesn't address the long-term solution for deleting old data.
  - Octavio (29 days ago): That's true. Step Functions can handle the scheduling and deletion in a more organized way.
  - Nu (2 months ago): But with DynamoDB Streams, you can have more control over the deletion process and ensure it runs smoothly.
  - Thurman (2 months ago): I think A could work too. Setting a TTL attribute would automatically delete the old data after 30 days.

Rosendo (3 months ago): I'm not sure about option B or C, but setting a higher write capacity with option D could also work.

Talia (3 months ago): Option B is the way to go. DynamoDB Streams make it easy to trigger a function to delete the old data without affecting write throughput.
  - Valda (2 months ago): That makes sense. It's important to optimize write throughput while deleting old data.
  - Roselle (2 months ago): I agree, using DynamoDB Streams sounds like the best solution for this scenario.

Jaclyn (3 months ago): I agree with Verona. Using TTL would automatically delete the old data and optimize write throughput.

Verona (3 months ago): I think option A (configure a TTL attribute for the leaderboard data) would be a good solution.
