
Google Exam Professional-Machine-Learning-Engineer Topic 4 Question 68 Discussion

Actual exam question from Google's Professional Machine Learning Engineer exam
Question #: 68
Topic #: 4
[All Google Professional Machine Learning Engineer Questions]

You have trained an XGBoost model that you plan to deploy on Vertex AI for online prediction. You are now uploading your model to Vertex AI Model Registry, and you need to configure the explanation method so that online prediction requests are returned with minimal latency. You also want to be alerted when the model's feature attributions change meaningfully over time. What should you do?

Suggested Answer: A
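
For anyone trying this hands-on, below is a hedged sketch of configuring the explanation method at upload time with the Vertex AI Python SDK (google-cloud-aiplatform). It uses Sampled Shapley, the attribution method Vertex AI supports for non-differentiable models such as XGBoost (Integrated Gradients requires a differentiable model), with a small path count to keep online latency low. The project, region, bucket path, container image, and feature/output names are placeholders, not values from the question.

# Minimal sketch (assumptions noted above): upload an XGBoost model to the
# Vertex AI Model Registry with Sampled Shapley explanations configured.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholder project/region

# Sampled Shapley works for tree models; a lower path_count trades some
# attribution accuracy for lower per-request latency.
parameters = aiplatform.explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 5}}
)

# The metadata must map your model's real inputs/outputs; names here are placeholders.
metadata = aiplatform.explain.ExplanationMetadata(
    inputs={"features": {}},
    outputs={"prediction": {}},
)

model = aiplatform.Model.upload(
    display_name="xgboost-model",
    artifact_uri="gs://my-bucket/model/",  # placeholder GCS path to the saved model
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest"  # check current prebuilt images
    ),
    explanation_parameters=parameters,
    explanation_metadata=metadata,
)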

Contribute your Thoughts:

Melvin
6 months ago
I'm with you guys on this one. Integrated Gradients is a solid choice, but I think the higher path count of 50 is the way to go. As for the monitoring objective, I'd definitely go with training-serving skew. It's going to be way more useful than just tracking prediction drift, which doesn't give you the full picture.
upvoted 0 times
Cheryl
5 months ago
Definitely, training-serving skew provides a more comprehensive view of model performance.
upvoted 0 times
...
Dalene
5 months ago
Yeah, monitoring training-serving skew will give us more insights than just prediction drift.
upvoted 0 times
...
Gilma
5 months ago
I agree, deploying to Vertex AI Endpoints is the way to go.
upvoted 0 times
...
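
Building on Gilma's point, here is a minimal sketch (same Vertex AI SDK assumptions as above; machine type, display name, and feature values are placeholders) of deploying the uploaded model to a Vertex AI endpoint and requesting an online explanation:

# Deploy the uploaded model so online predictions (and explanations) can be served.
endpoint = aiplatform.Endpoint.create(display_name="xgboost-endpoint")

endpoint.deploy(
    model=model,                   # Model object returned by Model.upload above
    machine_type="n1-standard-2",  # placeholder machine type
    min_replica_count=1,
)

# Online explanation request; the feature vector below is a placeholder.
response = endpoint.explain(instances=[[0.1, 2.3, 4.5]])
print(response.explanations)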
Leslie
6 months ago
I think Integrated Gradients with a path count of 5 is a good choice.
upvoted 0 times
...
Elza
6 months ago
C) 3. Create a Model Monitoring job that uses training-serving skew as the monitoring objective.
upvoted 0 times
...
Jenelle
6 months ago
B) 2. Deploy the model to Vertex AI Endpoints.
upvoted 0 times
...
Twanna
6 months ago
B) 1. Specify Integrated Gradients as the explanation method with a path count of 5.
upvoted 0 times
...
...
Armando
6 months ago
You know, I was thinking the same thing. Integrated Gradients is another good explanation method, but the path count of 50 seems more appropriate to get reliable feature attributions. And using training-serving skew as the monitoring objective is a smart move to stay on top of any changes in the model's behavior over time.
upvoted 0 times
...
Nana
6 months ago
I agree with Theodora. Sampled Shapley can be a good choice, but I think a higher path count is necessary to get meaningful feature attributions. The question also mentions wanting to be alerted when feature attributions change over time, so I would go with the option that uses training-serving skew as the monitoring objective, as that's likely more relevant to detecting changes in feature importance.
upvoted 0 times
...
Theodora
6 months ago
Hmm, this is an interesting question. I think the key here is to choose an explanation method that can provide feature attributions with minimal latency, which is important for online prediction requests. Sampled Shapley seems like a good option, but I'm not sure if a path count of 5 is enough to get accurate feature attributions. I might go with a higher path count, like 50, to ensure more reliable explanations.
upvoted 0 times
...
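
To ground the skew-versus-drift discussion above, here is a hedged sketch of creating a Model Monitoring job on the deployed endpoint with the Vertex AI SDK. Prediction drift compares recent serving data against earlier serving data, while training-serving skew would instead use SkewDetectionConfig pointed at the original training dataset; adding ExplanationConfig is what enables feature-attribution monitoring, so alerts fire when attributions change over time. Thresholds, feature names, email address, and sampling rate are placeholders.

# Hedged sketch: Model Monitoring job with drift detection plus
# feature-attribution monitoring on the deployed endpoint.
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

drift_config = model_monitoring.DriftDetectionConfig(
    drift_thresholds={"feature_1": 0.05, "feature_2": 0.05},  # placeholder features/thresholds
)

objective_config = model_monitoring.ObjectiveConfig(
    drift_detection_config=drift_config,
    explanation_config=model_monitoring.ExplanationConfig(),  # monitor feature attributions
)

monitoring_job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="xgboost-monitoring",
    endpoint=endpoint,  # Endpoint from the deployment step
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.8),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=1),  # hours
    alert_config=model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"]),
    objective_configs=objective_config,
)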
