
Google Exam Professional Machine Learning Engineer Topic 4 Question 74 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 74
Topic #: 4

You have trained an XGBoost model that you plan to deploy on Vertex AI for online prediction. You are now uploading your model to Vertex AI Model Registry, and you need to configure the explanation method so that online prediction requests return explanations with minimal latency. You also want to be alerted when the feature attributions of the model meaningfully change over time. What should you do?

Suggested Answer: A

Sampled Shapley is a fast and scalable approximation of the Shapley value, a game-theoretic measure of each feature's contribution to a prediction. It is well suited to online prediction requests because it can return feature attributions with minimal latency. The path count parameter controls the number of samples used to estimate the Shapley value; a lower value means faster computation at the cost of some attribution accuracy.

Integrated Gradients is another explanation method, which computes the average gradient along the path from a baseline input to the actual input. It can be more accurate than Sampled Shapley but is also more computationally intensive, so it is not recommended for latency-sensitive online prediction, especially with a high path count. It also requires a differentiable model, so it does not apply to tree-based models such as XGBoost.

Prediction drift is a change in the distribution of feature values or labels over time. It can degrade the performance and accuracy of the model and may require retraining or redeploying it. Vertex AI Model Monitoring lets you monitor prediction drift on your deployed models and endpoints and set up alerts and notifications when the drift exceeds a threshold. You can specify an email address to receive the notifications and use them to retrigger the training pipeline and deploy an updated version of your model. This is the most direct and convenient way to achieve the goal (a configuration sketch follows the reference list below).

Training-serving skew is the difference between the data used to train the model and the data the model sees at serving time. It can also affect performance and accuracy and may indicate data-quality issues or model staleness. Vertex AI Model Monitoring can monitor training-serving skew as well and alert when it exceeds a threshold, but that objective is not relevant here, since the question is about changes in the model's feature attributions rather than differences between training and serving data.

References:

Vertex AI: Explanation methods

Vertex AI: Configuring explanations

Vertex AI: Monitoring prediction drift

Vertex AI: Monitoring training-serving skew
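
For readers who want to see roughly what answer A looks like in practice, below is a minimal sketch using the google-cloud-aiplatform Python SDK. The project, region, bucket path, container image tag, feature names, drift thresholds, and email address are all placeholders, and the exact SDK classes and arguments can vary between SDK versions, so treat this as an illustration of the approach rather than a verified recipe.

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

aiplatform.init(project="my-project", location="us-central1")  # placeholder project/region

# Configure sampled Shapley explanations; path_count=5 keeps attribution latency low.
explanation_parameters = aiplatform.explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 5}}
)
explanation_metadata = aiplatform.explain.ExplanationMetadata(
    {
        "inputs": {"feature_1": {}, "feature_2": {}},  # placeholder feature names
        "outputs": {"prediction": {}},
    }
)

# Upload the XGBoost model to the Model Registry with explanations enabled.
model = aiplatform.Model.upload(
    display_name="xgboost-online-model",
    artifact_uri="gs://my-bucket/xgboost-model/",  # placeholder artifact location
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-6:latest"  # use the prebuilt image matching your XGBoost version
    ),
    explanation_parameters=explanation_parameters,
    explanation_metadata=explanation_metadata,
)

# Deploy the model to an endpoint for online prediction.
endpoint = model.deploy(machine_type="n1-standard-4")

# Create a Model Monitoring job that watches prediction drift and sends email alerts.
drift_config = model_monitoring.DriftDetectionConfig(
    drift_thresholds={"feature_1": 0.05, "feature_2": 0.05}  # placeholder thresholds
)
objective_config = model_monitoring.ObjectiveConfig(drift_detection_config=drift_config)

monitoring_job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="xgboost-drift-monitor",
    endpoint=endpoint,
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.8),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=1),  # hours
    alert_config=model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"]),
    objective_configs=objective_config,
)
```

When a drift threshold is exceeded, the monitoring job sends an email notification, which can then be used to retrigger the training pipeline and roll out an updated model version, as described in the explanation above.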


Contribute your Thoughts:

Antonio
6 months ago
I'm leaning towards option D for using Integrated Gradients with a path count of 50.
upvoted 0 times
...
Sharen
6 months ago
I prefer option C as it focuses on monitoring training-serving skew.
upvoted 0 times
...
Margo
6 months ago
I disagree, I believe option B is the way to go for minimal latency.
upvoted 0 times
...
Kassandra
6 months ago
I think the best approach would be to choose option A.
upvoted 0 times
...
Veronika
6 months ago
I think we're all on the same page then. A it is.
upvoted 0 times
...
Dean
7 months ago
Exactly. Creating a Model Monitoring job for prediction drift is essential for staying alert.
upvoted 0 times
...
Florinda
7 months ago
But what about monitoring for feature attributions changes over time? Option A covers that too.
upvoted 0 times
...
Veronika
7 months ago
I agree. Sampled Shapley with a path count of 5 sounds like the best choice.
upvoted 0 times
...
Dean
7 months ago
I think for online prediction with minimal latency, we should go with option A.
upvoted 0 times
...
Sarah
8 months ago
Hmm, I'm not sure about the training-serving skew approach. Wouldn't prediction drift be a more direct way to monitor changes in the feature attributions? I'm leaning more towards option B, which uses Integrated Gradients and prediction drift monitoring.
upvoted 0 times
...
Sherita
8 months ago
Ah, I see. So the full answer would be option C then - sampled Shapley with a path count of 50, deploy to Vertex AI Endpoints, and set up a Model Monitoring job using prediction drift.
upvoted 0 times
...
Lenny
8 months ago
Yep, that's my pick as well. Although, I have to say, I'm a little worried about the path count of 50. That's going to add some serious latency to our predictions. Maybe we could start with 5 and see how it goes?
upvoted 0 times
...
Merilyn
8 months ago
Haha, yeah, you don't want to cheap out on the Shapley path count. That's like trying to save a few bucks on your car tires - not a good idea! But I do like the idea of using training-serving skew for the monitoring objective. That could help us catch any shifts in the data distribution over time.
upvoted 0 times
...
Marta
8 months ago
Haha, yeah, no need to go overboard on the path count. As long as we're getting good feature attributions, that's the main thing. And hey, at least we're not doing Integrated Gradients - I hear that can be a real performance hog!
upvoted 0 times
Graciela
7 months ago
Haha, true! Sampled Shapley with a path count of 5 should be good enough for us.
upvoted 0 times
...
Graciela
7 months ago
A
upvoted 0 times
...
...
Maile
8 months ago
I don't know, I'm a bit skeptical about option A. A path count of 5 for Shapley might not be enough to really capture the feature importance accurately. Maybe we should go with option C and use a higher path count of 50 instead?
upvoted 0 times
...
Bea
8 months ago
Yeah, that makes sense. I'm leaning towards option A, which specifies sampled Shapley as the explanation method with a path count of 5. That should give us reasonably accurate feature attributions without too much overhead. And the prediction drift monitoring objective seems like a good fit for what we're trying to achieve.
upvoted 0 times
...
Pura
8 months ago
Hmm, let's think this through. I think the key is to choose an explanation method that can provide meaningful feature attributions with minimal latency for the online prediction requests. And we also want to set up monitoring to detect changes in the feature attributions over time.
upvoted 0 times
...
Polly
8 months ago
Wait, so we need to pick the right explanation method and monitoring objective for this XGBoost model deployment on Vertex AI? This seems like a tricky question. I'm not sure if I fully understand the different explanation methods and monitoring objectives mentioned.
upvoted 0 times
King
8 months ago
D
upvoted 0 times
...
Trinidad
8 months ago
C
upvoted 0 times
...
Barbra
8 months ago
B
upvoted 0 times
...
Dierdre
8 months ago
A
upvoted 0 times
...
...
