Welcome to Pass4Success


Google Exam Professional-Machine-Learning-Engineer Topic 6 Question 82 Discussion

Actual exam question from Google's Professional Machine Learning Engineer exam
Question #: 82
Topic #: 6

You have a custom job that runs on Vertex AI on a weekly basis. The job is implemented using a proprietary ML workflow that produces the datasets, models, and custom artifacts, and sends them to a Cloud Storage bucket. Many different versions of the datasets and models were created. Due to compliance requirements, your company needs to track which model was used for making a particular prediction, and needs access to the artifacts for each model. How should you configure your workflows to meet these requirements?

Suggested Answer: D
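The suggested answer corresponds to registering each model version in Vertex AI Model Registry and attaching the tracking information as labels. A minimal sketch using the google-cloud-aiplatform SDK; the project, region, bucket paths, container image, and label values are illustrative assumptions, not taken from the question:

```python
# Sketch only: assumes a configured GCP project with Vertex AI enabled.
# All names, URIs, and label values below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Register the weekly model; labels tie this version back to the dataset
# version and workflow run that produced it, so a prediction served by
# this model can be traced to its artifacts in Cloud Storage.
model = aiplatform.Model.upload(
    display_name="weekly-workflow-model",
    artifact_uri="gs://my-bucket/models/2024-06-01/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
    labels={
        "dataset_version": "2024-06-01",
        "workflow_run": "run-142",
    },
)
print(model.resource_name)
```

Because every registered model carries its labels and a stable resource name, compliance queries reduce to listing models and filtering on labels.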

Contribute your Thoughts:

Franklyn
2 months ago
Option B seems like the easy way out, but I don't think it fully addresses the compliance requirements. I'd go with Option C for the win.
upvoted 0 times
Doyle
2 months ago
Hold up, are we sure we can't just use a spreadsheet for this? I kid, I kid. Option C seems like the right call - gotta love that Vertex AI Metadata API!
upvoted 0 times
Elvera
1 month ago
Yeah, using the Vertex AI Metadata API inside the custom job sounds like the best way to track everything.
upvoted 0 times
Justine
2 months ago
Option C seems like the right call - gotta love that Vertex AI Metadata API!
upvoted 0 times
Freeman
2 months ago
I think using the Vertex AI Metadata API inside the custom job could be a more efficient way to meet the requirements.
upvoted 0 times
Quentin
2 months ago
This is a tricky one, but I think Option C is the way to go. Manually tracking the model provenance using the Metadata API seems like the most robust solution.
upvoted 0 times
Brendan
1 month ago
Using the ML Metadata API with Option A might be a good choice too, it's all about what works best for your workflow.
upvoted 0 times
Cathrine
1 month ago
I think Option D could also work well, registering each model in the Model Registry seems organized.
upvoted 0 times
Annmarie
1 month ago
I agree, Option C sounds like the most reliable way to track the model provenance.
upvoted 0 times
Lai
2 months ago
I believe registering each model in Vertex AI Model Registry could also be a good option.
upvoted 0 times
Celestina
2 months ago
I'm leaning towards Option D. Registering the models in the Vertex AI Model Registry and using labels to store the related information sounds like a more straightforward solution.
upvoted 0 times
Lynelle
1 month ago
True, but using Vertex AI Model Registry seems more efficient for this scenario.
upvoted 0 times
Paola
2 months ago
Configuring a TensorFlow Extended (TFX) ML Metadata database could also work for tracking the models.
upvoted 0 times
Dorthy
2 months ago
I agree, using Vertex AI Model Registry and labels would make it easier to manage.
upvoted 0 times
Augustine
2 months ago
Option D seems like the best choice for tracking the models and related information.
upvoted 0 times
Dyan
3 months ago
Option C seems like the best choice here. Using the Vertex AI Metadata API to track the model provenance and link it to the artifacts is a solid approach that meets the compliance requirements.
upvoted 0 times
Carry
22 days ago
Yes, configuring the workflows to use the Vertex AI Metadata API inside the custom job will provide the necessary context, execution, and artifacts for each model, meeting the compliance requirements.
upvoted 0 times
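The context, execution, and artifact resources mentioned above map directly onto Vertex ML Metadata. A rough sketch of recording lineage from inside the custom job with the google-cloud-aiplatform SDK; every name and URI is a made-up placeholder, and the code assumes an initialized Vertex AI project:

```python
# Sketch only: requires a GCP project with the Vertex AI API enabled.
# Project, display names, and gs:// URIs are illustrative placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Record the dataset and the model produced by this week's run as
# metadata artifacts pointing at their Cloud Storage locations.
dataset = aiplatform.Artifact.create(
    schema_title="system.Dataset",
    uri="gs://my-bucket/datasets/2024-06-01/",
    display_name="weekly-dataset-2024-06-01",
)
model = aiplatform.Artifact.create(
    schema_title="system.Model",
    uri="gs://my-bucket/models/2024-06-01/",
    display_name="weekly-model-2024-06-01",
)

# Record the training run as an execution linking input to output,
# so the lineage of any model can be queried later for compliance.
with aiplatform.start_execution(
    schema_title="system.ContainerExecution",
    display_name="weekly-training-run",
) as execution:
    execution.assign_input_artifacts([dataset])
    execution.assign_output_artifacts([model])
```

Once recorded this way, the lineage graph can be walked from a model artifact back to the exact dataset version that produced it.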
Devorah
24 days ago
I agree, Option C sounds like the most efficient way to ensure the company can track which model was used for each prediction.
upvoted 0 times
Sharmaine
25 days ago
Option C seems like the best choice here. Using the Vertex AI Metadata API to track the model provenance and link it to the artifacts is a solid approach that meets the compliance requirements.
upvoted 0 times
Ira
1 month ago
Yes, it's important to have a system in place to meet compliance requirements and ensure the traceability of models.
upvoted 0 times
Marci
1 month ago
Agreed, using the Vertex AI Metadata API inside the custom job seems like the most efficient solution.
upvoted 0 times
Annamaria
1 month ago
I think option C is the way to go. It allows you to track the model provenance and link it to the artifacts.
upvoted 0 times
Shelba
3 months ago
I agree with Herschel, using the ML Metadata API would help us track the models.
upvoted 0 times
Herschel
3 months ago
I think we should configure a TensorFlow Extended (TFX) ML Metadata database.
upvoted 0 times
