Welcome to Pass4Success


Google Exam Professional Machine Learning Engineer Topic 4 Question 73 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 73
Topic #: 4

You have a custom job that runs on Vertex AI on a weekly basis. The job is implemented using a proprietary ML workflow that produces datasets, models, and custom artifacts, and sends them to a Cloud Storage bucket. Many different versions of the datasets and models have been created. Due to compliance requirements, your company needs to track which model was used for making a particular prediction, and needs access to the artifacts for each model. How should you configure your workflows to meet these requirements?

Suggested Answer: D

Contribute your Thoughts:

Lauran
6 months ago
I personally think option C) Use the Vertex AI Metadata API inside the custom job is the most efficient solution.
upvoted 0 times
...
Gwenn
6 months ago
I disagree, I believe option D) Register each model in Vertex AI Model Registry is the way to go.
upvoted 0 times
...
Brett
7 months ago
I think the best option is A) Configure a TensorFlow Extended (TFX) ML Metadata database, and use the ML Metadata API.
upvoted 0 times
...
Naomi
7 months ago
I'm not sure, but option C sounds good to me. Using Vertex AI Metadata API inside the job seems like a practical approach.
upvoted 0 times
...
Thomasena
7 months ago
I agree with Georgeanna. It's important to have a centralized database to keep track of all the models and datasets.
upvoted 0 times
...
Georgeanna
7 months ago
I think option A is the best choice. By using TFX ML Metadata database, we can easily track the model used for predictions.
upvoted 0 times
...
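For readers weighing option A, here is a minimal sketch of what a standalone ML Metadata (MLMD) store looks like, assuming the `ml-metadata` package; the SQLite path is a placeholder. The import is done lazily so the sketch can be read without the package installed. Note that this store lives outside Vertex AI and must be hosted and maintained by you, which is why some commenters above consider it operational overkill for this use case:

```python
def build_store(db_path: str):
    """Open (or create) a local MLMD store backed by SQLite.

    Hedged sketch: assumes the `ml-metadata` package; `db_path` is a
    placeholder you would point at your own managed database in practice.
    """
    from ml_metadata.metadata_store import metadata_store
    from ml_metadata.proto import metadata_store_pb2

    config = metadata_store_pb2.ConnectionConfig()
    config.sqlite.filename_uri = db_path
    # Open read/write, creating the database file if it does not exist yet.
    config.sqlite.connection_mode = (
        metadata_store_pb2.SqliteMetadataSourceConfig.READWRITE_OPENCREATE
    )
    return metadata_store.MetadataStore(config)
```

With a store in hand you would register artifact types for your datasets and models and record executions linking them, mirroring what Vertex ML Metadata does as a managed service.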
Twana
8 months ago
Hmm, I'm not sure about option B. Relying on autologging in Vertex AI may not give us enough control over the metadata. And a separate TFX metadata database (option A) sounds like overkill for this use case.
upvoted 0 times
...
Eladia
8 months ago
Option D also sounds promising - registering the models in the Vertex AI Model Registry and using labels could be a simple way to manage the versioning and provenance. But I'm not sure how robust that would be for a complex workflow.
upvoted 0 times
...
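Building on the comment above, option D can be sketched roughly as follows, assuming the `google-cloud-aiplatform` SDK; the project, region, container image, and run identifiers are placeholders. Cloud labels must be lowercase letters, digits, `_` or `-`, at most 63 characters, so the sketch sanitizes values first:

```python
def make_version_labels(run_id: str, dataset_uri: str) -> dict:
    """Build Model Registry labels linking a model version to its run.

    Label values must be lowercase letters, digits, '_' or '-', <= 63 chars,
    so disallowed characters are replaced with '-' and the value truncated.
    """
    def sanitize(value: str) -> str:
        cleaned = "".join(
            c if c.isalnum() or c in "_-" else "-" for c in value.lower()
        )
        return cleaned[:63]

    return {
        "workflow-run": sanitize(run_id),
        "dataset": sanitize(dataset_uri),
    }


def register_model(run_id: str, dataset_uri: str, artifact_dir: str):
    """Register one weekly model version, labeled with its provenance.

    Hedged sketch: imported lazily so the labeling logic can be tested
    without GCP access; project/region/image values are placeholders.
    """
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")  # placeholders
    return aiplatform.Model.upload(
        display_name=f"weekly-model-{run_id}",
        artifact_uri=artifact_dir,  # Cloud Storage folder holding the saved model
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
        ),
        labels=make_version_labels(run_id, dataset_uri),
    )
```

Because every deployed version carries labels pointing back at its run and dataset, an auditor can go from a prediction's model version to the exact Cloud Storage artifacts that produced it.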
Honey
8 months ago
I'm leaning towards option C. Using the Vertex AI Metadata API seems like the most direct way to link the models, datasets, and artifacts together. Plus, we can create custom context and execution details to meet the compliance needs.
upvoted 0 times
Emerson
7 months ago
I'm convinced, option C it is!
upvoted 0 times
...
Shawnee
7 months ago
It's definitely a more structured way to track model usage and artifacts.
upvoted 0 times
...
Brande
8 months ago
Using events to link everything sounds like a good approach.
upvoted 0 times
...
Alona
8 months ago
That makes sense, it's important to link all the necessary information together.
upvoted 0 times
...
Kristeen
8 months ago
We can create custom context and execution details as needed for compliance.
upvoted 0 times
...
Norah
8 months ago
Agreed, using the Vertex AI Metadata API seems like the most direct solution.
upvoted 0 times
...
Truman
8 months ago
I think option C is the best choice.
upvoted 0 times
...
...
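The approach the thread above describes for option C can be sketched as follows, assuming the `google-cloud-aiplatform` SDK; project, region, and display names are placeholders. Inside the custom job, the training step is wrapped in an execution, and input/output events link it to dataset and model artifacts, which is exactly the lineage the compliance requirement asks for:

```python
def record_lineage(dataset_uri: str, model_uri: str, run_name: str):
    """Record dataset -> execution -> model lineage in Vertex ML Metadata.

    Hedged sketch: imported lazily so it can be read without GCP access;
    project/region values are placeholders.
    """
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")  # placeholders

    # Artifact pointing at the dataset version in Cloud Storage.
    dataset = aiplatform.Artifact.create(
        schema_title="system.Dataset",
        uri=dataset_uri,
        display_name="weekly-dataset",
    )

    # The execution is the custom context for this weekly run; assigning
    # artifacts creates the input/output events that link everything.
    with aiplatform.start_execution(
        schema_title="system.ContainerExecution",
        display_name=run_name,
    ) as execution:
        execution.assign_input_artifacts([dataset])
        model = aiplatform.Artifact.create(
            schema_title="system.Model",
            uri=model_uri,
            display_name="weekly-model",
        )
        execution.assign_output_artifacts([model])
    return execution
```

Querying the metadata store for a model artifact then walks back through the output event to the execution and its input dataset, giving auditors the full chain.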
Chaya
8 months ago
Whoa, this question looks like a real brain-teaser! We definitely need to track the models and artifacts for compliance, but it's not clear which option is the best approach.
upvoted 0 times
...

