
Google Exam Professional-Machine-Learning-Engineer Topic 3 Question 91 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 91
Topic #: 3

You work for an organization that operates a streaming music service. You have a custom production model that serves a "next song" recommendation based on a user's recent listening history. Your model is deployed on a Vertex AI endpoint. You recently retrained the same model by using fresh data. The model received positive test results offline. You now want to test the new model in production while minimizing complexity. What should you do?

Suggested Answer: C

Traffic splitting is a feature of Vertex AI that distributes prediction requests among multiple models or model versions deployed to the same endpoint. You specify the percentage of traffic each deployed model receives, and you can change that percentage at any time. This lets you test a new model in production without creating a new endpoint or a separate service.

Here, you can deploy the new model to the existing Vertex AI endpoint and use traffic splitting to send 5% of production traffic to it. Monitor end-user metrics, such as listening time, to compare the new model against the previous one. If the end-user metrics improve over time, gradually increase the percentage of production traffic sent to the new model. This approach tests the new model in production while minimizing complexity and cost.

Reference:

Traffic splitting | Vertex AI

Deploying models to endpoints | Vertex AI
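To make the mechanics of the suggested answer concrete, here is a minimal local simulation of how a 95/5 traffic split routes individual requests between two deployed models. The model IDs and percentages below are illustrative, not taken from the question; on Vertex AI itself you would not implement routing yourself, but instead pass a similar mapping through the endpoint's traffic-split setting (for example, the `traffic_split` / `traffic_percentage` arguments when deploying with the `google-cloud-aiplatform` Python SDK).

```python
import random

def route_request(traffic_split, rng):
    """Pick a deployed model for one request according to its traffic share.

    traffic_split maps deployed-model IDs to integer percentages that sum
    to 100, mirroring the shape of the mapping a Vertex AI endpoint accepts.
    """
    roll = rng.uniform(0, 100)
    cumulative = 0
    for model_id, pct in traffic_split.items():
        cumulative += pct
        if roll < cumulative:
            return model_id
    return model_id  # floating-point edge case: fall back to the last model

# Canary split: 95% to the current model, 5% to the retrained one.
split = {"current-model": 95, "retrained-model": 5}
rng = random.Random(42)  # seeded for reproducibility
counts = {"current-model": 0, "retrained-model": 0}
for _ in range(10_000):
    counts[route_request(split, rng)] += 1
print(counts)
```

Over 10,000 simulated requests, roughly 5% land on the retrained model. Ramping up the canary is then just a matter of updating the percentages in the split (e.g. 80/20, then 50/50), which is exactly the low-complexity knob the suggested answer relies on.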


Contribute your Thoughts:

Nakita
3 days ago
I prefer option B. Capturing incoming prediction requests in BigQuery and running batch predictions for both models seems like a thorough approach to compare performance.
upvoted 0 times
...
Thomasena
5 days ago
I'm partial to B - I love a good data-driven experiment! Although, I hope they're not using the 'song selected' metric as the only KPI. Gotta look at the whole user experience.
upvoted 0 times
...
Tasia
7 days ago
Option C is the way to go - it's the 'Goldilocks' solution, not too disruptive, not too risky. Just right!
upvoted 0 times
...
Daren
10 days ago
A reminds me of A/B testing, which is a classic approach. I wonder if the random 5% split might lead to some users getting a suboptimal experience though.
upvoted 0 times
...
Vincenza
12 days ago
D is a clever idea, using monitoring to automatically update the model. But I'm not sure I'd trust the drift detection to work perfectly on the first try.
upvoted 0 times
...
Miesha
13 days ago
I agree with Lorrie. Option A seems like the most efficient way to test the new model while minimizing complexity.
upvoted 0 times
...
Lorrie
18 days ago
I think option A is the best choice. It allows for gradual testing of the new model in production.
upvoted 0 times
...
Ling
24 days ago
B looks interesting, but capturing all the prediction requests in BigQuery could get costly. I'd prefer a more targeted approach like C.
upvoted 0 times
...
Sheldon
25 days ago
Option C seems the most straightforward and minimizes disruption to the production environment. I like how it gradually ramps up the new model's traffic to monitor performance.
upvoted 0 times
Kirk
4 days ago
I think monitoring end-user metrics is key to ensuring the new model is performing well.
upvoted 0 times
...
Lavera
6 days ago
I agree. It's important to monitor performance before fully deploying the new model.
upvoted 0 times
...
Oretha
6 days ago
I agree. It's important to monitor the end-user metrics and gradually increase the traffic to the new model to ensure it performs well.
upvoted 0 times
...
Latricia
14 days ago
Option C seems like a good choice. It allows for gradual testing of the new model.
upvoted 0 times
...
Deangelo
22 days ago
Option C seems like the best approach. It allows for gradual testing of the new model without causing too much disruption.
upvoted 0 times
...
...

