
Google Exam Professional-Cloud-Database-Engineer Topic 6 Question 39 Discussion

Actual exam question for Google's Professional Cloud Database Engineer exam
Question #: 39
Topic #: 6
[All Professional Cloud Database Engineer Questions]

You want to migrate an on-premises mission-critical PostgreSQL database to Cloud SQL. The database must be able to withstand a zonal failure with less than five minutes of downtime and without losing any transactions. You want to follow Google-recommended practices for the migration. What should you do?

A) Take nightly snapshots of the primary database instance, and restore them in a secondary zone.
B) Build a change data capture (CDC) pipeline to read transactions from the primary instance, and replicate them to a secondary instance.
C) Create a read replica in another region.
D) Enable high availability (HA) so that the instance is regional.

Suggested Answer: D
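The suggested answer relies on Cloud SQL high availability: a regional (HA) instance keeps a synchronously replicated standby in a second zone and fails over automatically on a zonal outage, which is what meets the sub-five-minute downtime and zero-transaction-loss requirements. As a minimal sketch, not the official solution, and assuming a hypothetical project ID, instance name, region, and machine tier, creating such an instance through the Cloud SQL Admin API with google-api-python-client could look like this:

# Sketch: create a Cloud SQL for PostgreSQL instance with high availability
# (availabilityType=REGIONAL), i.e. what answer D describes.
# Requires google-api-python-client and Application Default Credentials.
# Project ID, instance name, region, and tier below are placeholder assumptions.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1")  # Cloud SQL Admin API

instance_body = {
    "name": "pg-mission-critical",      # hypothetical instance name
    "databaseVersion": "POSTGRES_14",
    "region": "us-central1",            # assumed region
    "settings": {
        "tier": "db-custom-4-16384",    # assumed machine type
        # REGIONAL places a standby in a second zone with synchronous
        # replication, so committed transactions survive a zonal failure
        # and failover is automatic.
        "availabilityType": "REGIONAL",
        "backupConfiguration": {
            "enabled": True,
            "pointInTimeRecoveryEnabled": True,
        },
    },
}

operation = service.instances().insert(
    project="my-project", body=instance_body  # hypothetical project ID
).execute()
print(operation["name"])  # long-running operation; poll operations().get()

For the migration itself, Google's Database Migration Service can replicate the on-premises PostgreSQL database into the Cloud SQL instance continuously and then cut over, which keeps migration downtime to a brief switchover window.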

Contribute your Thoughts:

Lettie
1 month ago
B) CDC all the way! I bet the developers at Google are using it to keep their own databases running smoothly.
upvoted 0 times
...
Jesusita
1 month ago
C) A read replica in another region? Sounds like a good idea, but will it really withstand a zonal failure? I'm not convinced.
upvoted 0 times
Dalene
10 days ago
C) A read replica in another region? Sounds like a good idea, but will it really withstand a zonal failure? I'm not convinced.
upvoted 0 times
...
Josphine
14 days ago
B) Build a change data capture (CDC) pipeline to read transactions from the primary instance, and replicate them to a secondary instance.
upvoted 0 times
...
Delisa
16 days ago
A) Take nightly snapshots of the primary database instance, and restore them in a secondary zone.
upvoted 0 times
...
...
Merissa
2 months ago
A) Snapshots? Really? That's so 90s. We need a more modern approach for a cloud migration.
upvoted 0 times
Richelle
26 days ago
D) I agree, high availability is crucial for mission-critical databases in the cloud.
upvoted 0 times
...
Ronald
30 days ago
C) It might be a bit complex, but it's the best way to ensure minimal downtime and no data loss.
upvoted 0 times
...
Alonzo
1 month ago
A) But won't that be complex to set up and maintain?
upvoted 0 times
...
Juliana
1 month ago
B) Build a change data capture (CDC) pipeline to read transactions from the primary instance, and replicate them to a secondary instance.
upvoted 0 times
...
...
Joanna
2 months ago
D) Hmm, I'm not sure. Enabling HA sounds like the easy way out. Google-recommended practices are usually more robust.
upvoted 0 times
...
Vicki
2 months ago
B) Definitely. A CDC pipeline is the way to go for mission-critical workloads. Ensures zero data loss and minimal downtime.
upvoted 0 times
Chan
1 month ago
B) Definitely. A CDC pipeline is the way to go for mission-critical workloads. Ensures zero data loss and minimal downtime.
upvoted 0 times
...
Ailene
2 months ago
A) Take nightly snapshots of the primary database instance, and restore them in a secondary zone.
upvoted 0 times
...
...
Dyan
2 months ago
I see your point, Lura, but I think option C is also a good choice. Creating a read replica in another region provides additional redundancy.
upvoted 0 times
...
Lura
2 months ago
I disagree, I believe option D is the way to go. Enabling high availability will ensure the database is regional and can withstand zonal failures.
upvoted 0 times
...
Leonardo
2 months ago
I think option A is the best choice because taking nightly snapshots ensures we have up-to-date data in a secondary zone.
upvoted 0 times
...
