Google Exam Professional-Machine-Learning-Engineer Topic 5 Question 75 Discussion

Actual exam question from the Google Professional Machine Learning Engineer exam
Question #: 75
Topic #: 5

You have a custom job that runs on Vertex AI on a weekly basis. The job is implemented using a proprietary ML workflow that produces datasets, models, and custom artifacts, and sends them to a Cloud Storage bucket. Many different versions of the datasets and models have been created. Due to compliance requirements, your company needs to track which model was used for making a particular prediction, and needs access to the artifacts for each model. How should you configure your workflows to meet these requirements?

A) Configure a TensorFlow Extended (TFX) ML Metadata database, and use the ML Metadata API.
B) Create a Vertex AI experiment, and enable autologging inside the custom job.
C) Use the Vertex AI Metadata API inside the custom job to create context, execution, and artifacts for each model, and use events to link them together.
D) Register each model in Vertex AI Model Registry, and use model labels to store the related dataset and model information.

Suggested Answer: D
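
To make the suggested answer concrete, here is a minimal sketch of option D using the google-cloud-aiplatform Python SDK. The project, region, bucket paths, and label values are placeholders, and this is only one plausible way the registration step could look, not reference code from Google or from the exam.

```python
from google.cloud import aiplatform

# Placeholder project and region; substitute your own values.
aiplatform.init(project="my-project", location="us-central1")

# Register the model produced by this week's run in Vertex AI Model Registry.
# Labels capture which dataset and workflow run produced it, so a prediction
# served from this model version can be traced back to its artifacts.
model = aiplatform.Model.upload(
    display_name="weekly-forecast-model",
    artifact_uri="gs://my-bucket/models/2024-06-01/",  # output of the custom job
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
    labels={
        "dataset_version": "ds_2024_06_01",
        "workflow_run": "run_1234",
    },
)

print(model.resource_name, model.version_id)
```

Uploading each weekly model under the same registered model (for example by passing parent_model) keeps all versions together in the registry, which is what the tracking requirement is asking for.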

Contribute your Thoughts:

Dorothy
4 months ago
That's true, option C could also work well for this scenario. It's important to consider all the options before making a decision.
upvoted 0 times
...
Vanda
4 months ago
I'm not sure, option C also sounds like a viable solution. Creating context, execution, and artifacts for each model might provide a clear overview of the workflow.
upvoted 0 times
...
Andree
4 months ago
I agree, option A seems like the most efficient way to meet the compliance requirements. It's important to have a structured database to keep track of all the models.
upvoted 0 times
...
Dorothy
5 months ago
I think option A sounds like the best choice for tracking the models used for predictions. Using an ML Metadata database would make it easy to keep everything organized.
upvoted 0 times
...
Rose
5 months ago
Agreed, it seems to provide the level of detail needed for compliance tracking.
upvoted 0 times
...
Marica
6 months ago
I think using the Vertex AI Metadata API might be the most effective method.
upvoted 0 times
...
Major
6 months ago
That could simplify the process of tracking which model was used for predictions.
upvoted 0 times
...
Chau
6 months ago
D) Register each model in Vertex AI Model Registry, and use model labels to store the related dataset and model information.
upvoted 0 times
...
Karma
6 months ago
That seems like a comprehensive way to ensure compliance requirements are met.
upvoted 0 times
...
Major
6 months ago
C) Use the Vertex AI Metadata API inside the custom job to create context, execution, and artifacts for each model, and use events to link them together.
upvoted 0 times
...
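
For comparison with option C quoted above, a rough sketch of how a custom job might record lineage with the Vertex ML Metadata helpers in the google-cloud-aiplatform SDK is shown below. The project, URIs, and display names are placeholder assumptions, not the exam's reference solution.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders

# Artifacts for the dataset and model produced by this week's run.
dataset_artifact = aiplatform.Artifact.create(
    schema_title="system.Dataset",
    display_name="weekly-dataset",
    uri="gs://my-bucket/datasets/2024-06-01/",
)
model_artifact = aiplatform.Artifact.create(
    schema_title="system.Model",
    display_name="weekly-model",
    uri="gs://my-bucket/models/2024-06-01/",
)

# The execution represents the training step; assigning input and output
# artifacts creates the events that link dataset -> execution -> model
# in the lineage graph.
with aiplatform.start_execution(
    schema_title="system.ContainerExecution",
    display_name="weekly-training-run",
) as execution:
    execution.assign_input_artifacts([dataset_artifact])
    execution.assign_output_artifacts([model_artifact])
```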
Marica
6 months ago
That could work too, but it might not provide as much detailed tracking.
upvoted 0 times
...
Rose
6 months ago
B) Create a Vertex AI experiment, and enable autologging inside the custom job.
upvoted 0 times
...
Karma
6 months ago
That sounds like a good option for tracking the models and datasets.
upvoted 0 times
Carrol
6 months ago
C) Use the Vertex AI Metadata API inside the custom job to create context, execution, and artifacts for each model, and use events to link them together.
upvoted 0 times
...
Mabel
6 months ago
A) Configure a TensorFlow Extended (TFX) ML Metadata database, and use the ML Metadata API.
upvoted 0 times
...
...
Marica
6 months ago
A) Configure a TensorFlow Extended (TFX) ML Metadata database, and use the ML Metadata API.
upvoted 0 times
...
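
Option A, quoted in the comments above, would instead run a standalone TFX ML Metadata (MLMD) store that the workflow writes to directly. A brief sketch, assuming a local SQLite backing database and placeholder URIs, purely for illustration:

```python
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

# Placeholder SQLite backing store; a production setup would point at a
# managed database instead.
config = metadata_store_pb2.ConnectionConfig()
config.sqlite.filename_uri = "/tmp/mlmd.db"
config.sqlite.connection_mode = (
    metadata_store_pb2.SqliteMetadataSourceConfig.READWRITE_OPENCREATE
)
store = metadata_store.MetadataStore(config)

# Register an artifact type once, then record each weekly model artifact.
model_type = metadata_store_pb2.ArtifactType(name="SavedModel")
model_type.properties["version"] = metadata_store_pb2.STRING
model_type_id = store.put_artifact_type(model_type)

model = metadata_store_pb2.Artifact(type_id=model_type_id)
model.uri = "gs://my-bucket/models/2024-06-01/"  # placeholder Cloud Storage path
model.properties["version"].string_value = "2024-06-01"
[model_id] = store.put_artifacts([model])
```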
