You have recently developed a new ML model in a Jupyter notebook. You want to establish a reliable and repeatable model training process that tracks the versions and lineage of your model artifacts. You plan to retrain your model weekly. How should you operationalize your training process?
The best way to operationalize this training process is Vertex AI Pipelines, which lets you build scalable, portable, and reproducible workflows for your ML models. Vertex AI Pipelines integrates with Vertex ML Metadata, which automatically tracks the provenance, lineage, and artifacts of every pipeline run. Within the pipeline, a CustomTrainingJobOp component trains the model using the same code as in your Jupyter notebook (packaged into a container image or Python package), and a ModelUploadOp component uploads the trained model to the Vertex AI Model Registry, which manages model versions and endpoints. Finally, Cloud Scheduler and Cloud Functions can trigger the pipeline to run weekly, matching your retraining plan. A minimal sketch of such a pipeline appears after the references below.

References:
Vertex AI Pipelines documentation
Vertex ML Metadata documentation
Vertex AI CustomTrainingJobOp documentation
Cloud Functions documentation
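The sketch below shows one way to wire these components together, assuming KFP v2 and the google-cloud-pipeline-components package are installed. The project ID, bucket paths, image URIs, and display names are placeholders, not values from the question.

```python
# Minimal weekly-training pipeline sketch; all IDs and URIs are placeholders.
from kfp import compiler, dsl
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.custom_job import CustomTrainingJobOp
from google_cloud_pipeline_components.v1.model import ModelUploadOp

PROJECT_ID = "my-project"                       # placeholder
PIPELINE_ROOT = "gs://my-bucket/pipeline-root"  # placeholder
MODEL_DIR = "gs://my-bucket/model-output"       # placeholder


@dsl.pipeline(name="weekly-training-pipeline", pipeline_root=PIPELINE_ROOT)
def training_pipeline():
    # Train with the notebook's code, packaged into a container image.
    # The trainer writes its model files under base_output_directory.
    train_task = CustomTrainingJobOp(
        project=PROJECT_ID,
        display_name="train-model",
        worker_pool_specs=[{
            "machine_spec": {"machine_type": "n1-standard-4"},
            "replica_count": 1,
            "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
        }],
        base_output_directory=MODEL_DIR,
    )

    # Import the exported model files as an UnmanagedContainerModel artifact
    # so ModelUploadOp can register them with a serving container.
    model_artifact = dsl.importer(
        artifact_uri=f"{MODEL_DIR}/model",
        artifact_class=artifact_types.UnmanagedContainerModel,
        metadata={"containerSpec": {
            "imageUri": "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest",
        }},
    ).after(train_task)

    # Upload to Vertex AI Model Registry; each pipeline run's lineage is
    # recorded in Vertex ML Metadata automatically.
    ModelUploadOp(
        project=PROJECT_ID,
        display_name="my-model",
        unmanaged_container_model=model_artifact.outputs["artifact"],
    )


compiler.Compiler().compile(training_pipeline, package_path="pipeline.yaml")
```

For the weekly trigger, one common pattern is a Cloud Scheduler job that invokes an HTTP-triggered Cloud Function, which in turn submits the compiled pipeline spec. The function below is a hypothetical sketch with placeholder names:

```python
# Hypothetical Cloud Function body, invoked weekly by Cloud Scheduler.
from google.cloud import aiplatform


def trigger_pipeline(request):
    aiplatform.init(project="my-project", location="us-central1")
    job = aiplatform.PipelineJob(
        display_name="weekly-training-run",
        template_path="gs://my-bucket/pipeline.yaml",  # compiled spec from above
        pipeline_root="gs://my-bucket/pipeline-root",
    )
    job.submit()  # fire-and-forget so the function returns immediately
    return "Pipeline submitted", 200
```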