Welcome to Pass4Success


Google Exam Professional Cloud Architect Topic 3 Question 71 Discussion

Actual exam question for Google's Professional Cloud Architect exam
Question #: 71
Topic #: 3

Your company wants to migrate their 10-TB on-premises database export into Cloud Storage. You want to minimize the time it takes to complete this activity, the overall cost, and the database load. The bandwidth between the on-premises environment and Google Cloud is 1 Gbps. You want to follow Google-recommended practices. What should you do?

Suggested Answer: A

Transfer Appliance is a Google-provided hardware device for moving large amounts of data from an on-premises environment into Cloud Storage. It is suited to scenarios where the data set is large and the bandwidth between the on-premises environment and Google Cloud is low or insufficient. By shipping the data physically instead of sending it over the network, it avoids network bottlenecks and reduces bandwidth consumption, minimizing migration time, overall cost, and load on the source database. Transfer Appliance also encrypts the data at rest and in transit, protecting its security and privacy. The other options are not optimal for this scenario: they either require a high-bandwidth network connection (B, C, D) or incur additional cost and complexity (B, C). Reference:

https://cloud.google.com/data-transfer-appliance/docs/overview

https://cloud.google.com/blog/products/storage-data-transfer/introducing-storage-transfer-service-for-on-premises-data
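The bandwidth argument in the explanation can be checked with a quick back-of-envelope calculation. The sketch below assumes the 1 Gbps link is fully saturated, with no protocol overhead or contention, which is optimistic for a real office uplink:

```python
# Theoretical best-case transfer time for a given data size and link speed.
# Assumes decimal units (1 TB = 10^12 bytes, 1 Gbps = 10^9 bits/s) and a
# fully saturated, uncontended link -- an optimistic lower bound.

def transfer_time_hours(size_tb: float, bandwidth_gbps: float) -> float:
    """Return the theoretical transfer time in hours."""
    bits = size_tb * 1e12 * 8                 # data size in bits
    seconds = bits / (bandwidth_gbps * 1e9)   # seconds at full line rate
    return seconds / 3600

print(round(transfer_time_hours(10, 1), 1))   # -> 22.2
```

So even in the best case, 10 TB over 1 Gbps takes roughly a day of sustained line-rate traffic; any contention, overhead, or throttling to protect the database pushes that figure higher, which is the scenario where an offline appliance becomes attractive.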


Contribute your Thoughts:

Gaston
8 months ago
Ooh, good point! Compression could definitely help. And with multi-threaded copy, we might be able to really maximize that bandwidth. Although, I'm not sure how much the compression would impact the overall upload time. Might be worth testing both options.
upvoted 0 times
...
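The compression trade-off raised above can be sketched with rough numbers. The 3x compression ratio and 100 MB/s single-stream compression throughput below are illustrative assumptions, not measured values; real ratios and speeds depend heavily on the export format and hardware:

```python
# Hypothetical comparison: raw upload vs. compress-then-upload, serially.
# The 3x ratio and 100 MB/s compression speed are illustrative assumptions.

def upload_hours(size_tb: float, gbps: float) -> float:
    """Best-case upload time in hours at full line rate."""
    return size_tb * 1e12 * 8 / (gbps * 1e9) / 3600

def compress_then_upload_hours(size_tb: float, gbps: float,
                               ratio: float, compress_mb_s: float) -> float:
    """Time to compress the full export, then upload the smaller result."""
    compress_h = size_tb * 1e12 / (compress_mb_s * 1e6) / 3600
    return compress_h + upload_hours(size_tb / ratio, gbps)

print(round(upload_hours(10, 1), 1))                          # -> 22.2
print(round(compress_then_upload_hours(10, 1, 3, 100), 1))    # -> 35.2
```

Under these assumptions, compressing *serially* before uploading is actually slower than uploading raw, because single-stream compression of 10 TB takes longer than the upload itself. Streaming compression that overlaps with the upload changes the picture: the slower of the two stages then sets the pace, so the outcome really does need testing, as the comment above suggests.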
Evelynn
8 months ago
Definitely, the Dataflow job sounds like the most efficient and cost-effective option. And with a 1Gbps connection, we should be able to get the data moved pretty quickly. Although, I do wonder if compressing the data first and using gsutil might be a bit faster?
upvoted 0 times
...
Janella
8 months ago
Yeah, I agree. The appliance is probably overkill for this size. A commercial ETL solution could work, but that might be more expensive than we need. I'm thinking the Dataflow job might be the way to go - it can handle the data transfer directly and leverage Google's infrastructure.
upvoted 0 times
...
Christa
8 months ago
Hmm, this is an interesting question. I think the key here is to minimize the time and cost while following Google-recommended practices. The Data Transfer appliance could work, but that's more for large-scale migrations, not a 10TB database.
upvoted 0 times
...

