
Google Exam Professional Cloud Developer Topic 14 Question 91 Discussion

Actual exam question for Google's Professional Cloud Developer exam
Question #: 91
Topic #: 14
[All Professional Cloud Developer Questions]

You are developing an online gaming platform as a microservices application on Google Kubernetes Engine (GKE). Users on social media are complaining about long loading times for certain URL requests to the application. You need to investigate performance bottlenecks in the application and identify which HTTP requests have a significantly high latency span in user requests. What should you do?

Suggested Answer: D

Contribute your Thoughts:

Jeanice
2 months ago
Hold up, where's the 'Hire a psychic' option? I bet they could just sense the performance bottlenecks and solve the problem instantly.
upvoted 0 times
...
Filiberto
2 months ago
Option B with tcpdump? That's so old-school, I thought we were beyond that in the Kubernetes era. Might as well use a telegraph to debug your application.
upvoted 0 times
Selene
26 days ago
D) Configure GKE workload metrics using kubectl. Select all Pods to send their metrics to Cloud Monitoring. Create a custom dashboard of application metrics in Cloud Monitoring to determine performance bottlenecks of your GKE cluster.
upvoted 0 times
...
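For readers weighing option D: the custom dashboard it describes ultimately charts application metrics such as request-latency percentiles. A minimal, library-free sketch of the kind of figure such a dashboard would plot (the latency samples here are made up for illustration; this is not a Cloud Monitoring API call):

```python
import statistics

# Hypothetical per-request latencies (ms) collected from one service's Pods.
latencies_ms = [42, 55, 48, 51, 1200, 47, 53, 49, 50, 46]

# p50/p95 are typical series for a custom latency dashboard; note how a
# single slow outlier inflates the tail percentile but not the median.
p50 = statistics.median(latencies_ms)
p95 = statistics.quantiles(latencies_ms, n=100)[94]  # 95th percentile

print(f"p50={p50:.0f}ms p95={p95:.0f}ms")
```

This illustrates why averaged cluster metrics alone can hide which specific requests are slow, which is what the question is asking you to pinpoint.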
Marnie
28 days ago
C) Instrument your microservices by installing the OpenTelemetry tracing package. Update your application code to send traces to Trace for inspection and analysis. Create an analysis report on Trace to analyze user requests.
upvoted 0 times
...
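For context on what option C's "latency span" means: a trace span records the name and duration of one operation, and nested spans show which step inside a request consumed the time. A library-free sketch of the idea (in practice you would use the OpenTelemetry SDK with a Cloud Trace exporter; the context manager below is purely illustrative):

```python
import time
from contextlib import contextmanager

spans = []  # finished spans, as a trace exporter would receive them


@contextmanager
def span(name):
    """Record the wall-clock duration of one operation, like a trace span."""
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append({"name": name,
                      "duration_ms": (time.perf_counter() - start) * 1000})


# Simulate handling one user request with a slow downstream call.
with span("GET /lobby"):
    with span("db.query"):
        time.sleep(0.05)  # the bottleneck a trace view would highlight

for s in spans:
    print(f"{s['name']}: {s['duration_ms']:.0f} ms")
```

Because the inner span closes first, the output shows that nearly all of the request's latency is spent inside `db.query`, which is exactly the per-request breakdown a trace analysis report gives you.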
Katina
1 month ago
A) Update your microservices to log HTTP request methods and URL paths to STDOUT. Use the logs router to send container logs to Cloud Logging. Create filters in Cloud Logging to evaluate the latency of user requests across different methods and URL paths.
upvoted 0 times
...
...
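Option A hinges on emitting the latency alongside the method and path so that filters can slice by them. A minimal sketch of a structured log line written to STDOUT (the field names are illustrative; the filter in the trailing comment assumes Cloud Logging's query syntax over `jsonPayload`):

```python
import json
import sys


def log_request(method, path, latency_ms):
    """Emit one JSON log line to STDOUT; on GKE the logs router forwards
    container stdout to Cloud Logging, where JSON lines become jsonPayload."""
    record = {"severity": "INFO", "method": method,
              "path": path, "latency_ms": latency_ms}
    json.dump(record, sys.stdout)
    sys.stdout.write("\n")


log_request("GET", "/lobby", 1240)

# A Cloud Logging filter over these fields might then look like:
#   resource.type="k8s_container" AND jsonPayload.latency_ms > 1000
```

The trade-off versus tracing (option C) is that logs give you per-endpoint latency only at the granularity your code explicitly records, not a span-by-span breakdown of where inside the request the time went.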
Ngoc
2 months ago
Configuring GKE workload metrics in Cloud Monitoring could also help us identify performance bottlenecks.
upvoted 0 times
...
Valentin
2 months ago
I think we should also consider installing the OpenTelemetry tracing package to analyze user requests.
upvoted 0 times
...
Rupert
2 months ago
Hmm, option D seems the most comprehensive. Monitoring the GKE cluster metrics in Cloud Monitoring could give you a wide range of insights into the performance issues.
upvoted 0 times
...
Azalee
2 months ago
I'd go with option A. Logging is a good start, and analyzing the logs in Cloud Logging should give you a good idea of which requests are taking too long.
upvoted 0 times
Stevie
1 month ago
C) Instrument your microservices by installing the OpenTelemetry tracing package. Update your application code to send traces to Trace for inspection and analysis. Create an analysis report on Trace to analyze user requests.
upvoted 0 times
...
Malcolm
1 month ago
B) Install tcpdump on your GKE nodes. Run tcpdump to capture network traffic over an extended period of time to collect data. Analyze the data files using Wireshark to determine the cause of high latency.
upvoted 0 times
...
Loreen
1 month ago
I'd go with option A. Logging is a good start, and analyzing the logs in Cloud Logging should give you a good idea of which requests are taking too long.
upvoted 0 times
...
Giovanna
2 months ago
A) Update your microservices to log HTTP request methods and URL paths to STDOUT. Use the logs router to send container logs to Cloud Logging. Create filters in Cloud Logging to evaluate the latency of user requests across different methods and URL paths.
upvoted 0 times
...
...
Hyman
2 months ago
Option C sounds like the way to go. Tracing is crucial for identifying performance bottlenecks in a microservices architecture. I'm glad they mentioned OpenTelemetry; it's a great tool for this.
upvoted 0 times
Ezekiel
1 month ago
Absolutely, using OpenTelemetry for tracing can help pinpoint where the bottlenecks are occurring and improve the overall user experience on the gaming platform.
upvoted 0 times
...
Elden
1 month ago
I agree, tracing with OpenTelemetry can provide detailed insights into the latency of user requests. It's essential for optimizing performance in a microservices environment.
upvoted 0 times
...
Kyoko
1 month ago
Option C sounds like the way to go. Tracing is crucial for identifying performance bottlenecks in a microservices architecture. I'm glad they mentioned OpenTelemetry; it's a great tool for this.
upvoted 0 times
...
...
Golda
3 months ago
I agree with Trinidad. Using logs to evaluate latency across different methods and URL paths is a good idea.
upvoted 0 times
...
Trinidad
3 months ago
I think we should update our microservices to log HTTP requests and analyze the latency.
upvoted 0 times
...

