Amazon Exam DAS-C01 Topic 2 Question 90 Discussion

Actual exam question for Amazon's DAS-C01 exam
Question #: 90
Topic #: 2

A company uses Amazon EC2 instances to receive files from external vendors throughout each day. At the end of each day, the EC2 instances combine the files into a single file, perform gzip compression, and upload the single file to an Amazon S3 bucket. The total size of all the files is approximately 100 GB each day.

When the file is uploaded to Amazon S3, an AWS Batch job runs a COPY command to load the file into an Amazon Redshift cluster.

Which solution will MOST accelerate the COPY process?

Suggested Answer: B

Rationale: Redshift cannot split a single gzip file, so COPY loads it serially through one slice. Splitting the data into multiple compressed files, ideally a multiple of the number of slices in the cluster, lets COPY spread the files across all slices and load them in parallel.
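For illustration, here is a minimal sketch of the option-B approach. All names are assumptions, not part of the question: the bucket vendor-files-example, key prefix daily/part_, table sales_staging, the IAM role ARN, the cluster identifier analytics-cluster, and a slice count of 16. The idea is to write the daily data as many gzip parts instead of one file, then issue a single COPY with a key prefix so every part loads in parallel.

```python
# Hypothetical sketch: split the daily data into multiple gzip parts so
# Redshift COPY can load one file per slice in parallel, instead of
# forcing the whole 100 GB gzip file through a single slice.
import gzip
import boto3

BUCKET = "vendor-files-example"       # hypothetical bucket name
PREFIX = "daily/part_"                # hypothetical key prefix
NUM_SLICES = 16                       # assumption: match the cluster's slice count

def split_and_upload(source_path: str) -> None:
    """Write NUM_SLICES gzip parts and upload each one to S3."""
    s3 = boto3.client("s3")
    parts = [gzip.open(f"/tmp/part_{i:04d}.gz", "wt") for i in range(NUM_SLICES)]
    with open(source_path, "rt") as src:
        for n, line in enumerate(src):
            parts[n % NUM_SLICES].write(line)   # round-robin rows across parts
    for i, part in enumerate(parts):
        part.close()
        s3.upload_file(f"/tmp/part_{i:04d}.gz", BUCKET, f"{PREFIX}{i:04d}.gz")

# One COPY with a key prefix matches every part; Redshift spreads the
# files across slices so they decompress and load in parallel.
COPY_SQL = f"""
    COPY sales_staging
    FROM 's3://{BUCKET}/{PREFIX}'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    GZIP;
"""

boto3.client("redshift-data").execute_statement(
    ClusterIdentifier="analytics-cluster",   # hypothetical cluster identifier
    Database="dev",
    DbUser="awsuser",
    Sql=COPY_SQL,
)
```

Because COPY matches every object under the key prefix, one statement picks up all the parts; the number of files matters more than perfectly even file sizes.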

Contribute your Thoughts:

Galen
7 months ago
I agree with User1; option D seems like a strategic approach to improving the COPY process by applying sharding.
Gracia
7 months ago
I disagree; I believe option B would be more effective because splitting the files to match the number of slices in the Redshift cluster would optimize the COPY process.
Tammara
7 months ago
I think option D would be the best solution for accelerating the COPY process.
Peggy
8 months ago
That's true. Sharding based on DISTKEY columns could be worth considering.
Melissa
8 months ago
But what about option D? Applying sharding could also improve the COPY process.
Anastacia
8 months ago
I agree. Splitting the files to match the number of slices in the Redshift cluster makes sense.
Peggy
8 months ago
I think option B would be the best solution.
Lashanda
7 months ago
So, yeah, option B seems like the most practical solution for accelerating the COPY process.
Lenora
8 months ago
Ultimately, that would lead to faster data loading into the Redshift cluster.
Margery
8 months ago
And having the right number of files could improve parallelism during the COPY operation.
Melita
8 months ago
It would ensure that the workload is evenly distributed across the cluster.
Gerri
8 months ago
That could definitely help optimize the COPY process and make it more efficient.
Candra
8 months ago
I agree, splitting the files based on the number of slices in the Redshift cluster makes sense.
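On the slice-count point raised in several comments above: the number of slices can be read from the STV_SLICES system table, which has one row per slice. A hedged sketch, reusing the hypothetical cluster details from the earlier example:

```python
# Hypothetical sketch: look up the cluster's slice count so the daily
# file split can match it. Cluster name and credentials are assumptions.
import time
import boto3

client = boto3.client("redshift-data")
resp = client.execute_statement(
    ClusterIdentifier="analytics-cluster",   # hypothetical cluster identifier
    Database="dev",
    DbUser="awsuser",
    Sql="SELECT COUNT(*) FROM stv_slices;",
)

# The Redshift Data API is asynchronous: poll until the statement is done.
while client.describe_statement(Id=resp["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

result = client.get_statement_result(Id=resp["Id"])
num_slices = int(result["Records"][0][0]["longValue"])
print(f"Split the daily upload into (a multiple of) {num_slices} files.")
```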
