
Google Exam Professional Cloud Architect Topic 6 Question 94 Discussion

Actual exam question for Google's Professional Cloud Architect exam
Question #: 94
Topic #: 6

Your company has a stateless web API that performs scientific calculations. The web API runs on a single Google Kubernetes Engine (GKE) cluster. The cluster is currently deployed in us-central1. Your company has expanded to offer your API to customers in Asia. You want to reduce the latency for the users in Asia. What should you do?


Contribute your Thoughts:

Mike
2 months ago
I'm going to have to go with B. Gotta love that good old-fashioned multi-region architecture, even if it's not as fancy as the kubemci solution.
upvoted 0 times
Eugene
1 month ago
Agreed, having multiple clusters in different regions definitely helps with performance optimization.
upvoted 0 times
Zita
1 month ago
Yeah, B sounds like a solid choice. Setting up a second GKE cluster in asia-southeast1 makes sense for reducing latency in Asia.
upvoted 0 times
Myrtie
2 months ago
A is the obvious choice here. Who needs multiple clusters and load balancers when you can just enable Cloud CDN and let Google handle the global distribution for you?
upvoted 0 times
Junita
2 months ago
Haha, C is a classic 'throw more hardware at it' answer. As if increasing the resources on a single cluster is going to magically reduce latency for users halfway across the world.
upvoted 0 times
Renea
2 months ago
D looks good to me. Using kubemci to create a global HTTP(s) load balancer sounds like a more elegant solution than managing multiple load balancers.
upvoted 0 times
Edelmira
1 month ago
Agreed, it would definitely help reduce latency for users in Asia.
upvoted 0 times
Ricarda
1 month ago
Yeah, using kubemci to create a global HTTP(s) load balancer seems like a smart choice.
upvoted 0 times
Cristy
2 months ago
I think D is the best option.
upvoted 0 times
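For anyone who hasn't used it: kubemci is the (now archived) k8s-multicluster-ingress CLI that wires an identical Ingress across several GKE clusters behind one global HTTP(S) load balancer and a single anycast IP. A minimal sketch of option D, with hypothetical project, load balancer, and file names:

# Assumes ingress.yaml references a reserved global static IP, the same
# NodePort Service exists in both clusters, and clusters-kubeconfig.yaml
# lists the us-central1 and asia-southeast1 cluster contexts.
kubemci create calc-api-global-lb \
    --ingress=ingress.yaml \
    --gcp-project=my-project \
    --kubeconfig=clusters-kubeconfig.yaml

Because clients everywhere hit the same anycast IP, requests from Asia are steered to the asia-southeast1 backends automatically, which is the appeal over managing per-region load balancers and DNS records by hand.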
Edelmira
3 months ago
I think increasing the memory and CPU allocated to the application in the cluster could also help reduce latency for users in Asia.
upvoted 0 times
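For reference, the resource bump Edelmira mentions is a single command (hypothetical deployment name and sizes):

# Raise CPU/memory requests and limits on the existing us-central1 deployment.
kubectl set resources deployment calc-api \
    --requests=cpu=2,memory=4Gi \
    --limits=cpu=4,memory=8Gi

This shortens time spent computing inside us-central1 but does nothing about the network round trip from Asia, which is the point Junita makes above.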
Lyndia
3 months ago
I disagree, I believe creating a second GKE cluster in asia-southeast1 and exposing both APIs using a Service of type Load Balancer would be more effective.
upvoted 0 times
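A minimal sketch of what option B involves, assuming hypothetical names (my-project, calc-api, example-zone, api.example.com) and that the existing us-central1 cluster already serves the API:

# 1. Stand up a second regional cluster close to the Asian users.
gcloud container clusters create calc-api-asia \
    --region=asia-southeast1 --project=my-project

# 2. With the same calc-api Deployment applied to the new cluster, expose it externally.
kubectl --context=asia-cluster expose deployment calc-api \
    --type=LoadBalancer --port=80 --target-port=8080

# 3. Publish both external IPs in the Cloud DNS zone.
gcloud dns record-sets create api.example.com. \
    --zone=example-zone --type=A --ttl=300 \
    --rrdatas=US_CLUSTER_IP,ASIA_CLUSTER_IP

One caveat: a plain multi-value A record round-robins between the two addresses rather than routing by proximity, so in practice this gets paired with a geo-aware DNS routing policy, or weighed against the single anycast IP you get from the kubemci approach in option D.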
Antione
3 months ago
I think the answer is B. Creating a second GKE cluster in Asia and using a Load Balancer Service to expose both APIs seems like the best way to reduce latency for the Asian users.
upvoted 0 times
Tasia
1 month ago
D
upvoted 0 times
Wilburn
2 months ago
B is the correct answer. By creating a second GKE cluster in Asia and using a Load Balancer Service, you can reduce latency for Asian users.
upvoted 0 times
Annmarie
2 months ago
B
upvoted 0 times
Malinda
2 months ago
A
upvoted 0 times
Alysa
2 months ago
B) Create a second GKE cluster in asia-southeast1, and expose both APIs using a Service of type LoadBalancer. Add the public IPs to the Cloud DNS zone
upvoted 0 times
Margot
3 months ago
A) Use a global HTTP(s) load balancer with Cloud CDN enabled
upvoted 0 times
Casie
3 months ago
I think we should use a global HTTP(s) load balancer with Cloud CDN enabled to reduce latency for users in Asia.
upvoted 0 times
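For completeness, turning on Cloud CDN for an existing global HTTP(S) load balancer backend is a single command (hypothetical backend service name); on GKE you would more typically attach a BackendConfig with cdn enabled to the Service, but the effect is the same:

# Enable Cloud CDN on the backend service behind the global load balancer.
gcloud compute backend-services update calc-api-backend \
    --enable-cdn --global

Worth keeping in mind when weighing option A: CDN edge caching only helps responses that are cacheable, and a stateless API doing per-request scientific calculations tends to return dynamic results, so caching alone may not move the needle on latency for users in Asia.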
