You work for an organization that operates a streaming music service. You have a custom production model that serves a "next song" recommendation based on a user's recent listening history. Your model is deployed on a Vertex AI endpoint. You recently retrained the same model by using fresh data. The model received positive test results offline. You now want to test the new model in production while minimizing complexity. What should you do?
Traffic splitting is a feature of Vertex AI that distributes prediction requests among multiple models or model versions deployed to the same endpoint. You specify the percentage of traffic each deployed model receives and can change it at any time. Traffic splitting lets you test the new model in production without creating a new endpoint or a separate service.

You can deploy the new model to the existing Vertex AI endpoint and use traffic splitting to send 5% of production traffic to it. You can then monitor end-user metrics, such as listening time, to compare the performance of the new model against the previous model. If the end-user metrics improve over time, you can gradually increase the percentage of production traffic sent to the new model. This approach lets you test the new model in production while minimizing complexity and cost.

Reference:
Deploying models to endpoints | Vertex AI
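As a rough illustration of the gradual rollout described above, the helper below computes the next traffic split for a canary ramp-up. This is a minimal sketch of the split arithmetic only, not a Vertex AI API call; the model names are hypothetical, and in practice you would pass the resulting percentages to the Vertex AI SDK (for example, via the `traffic_split` argument when deploying or updating the endpoint).

```python
def ramp_traffic(split, canary, step=10):
    """Shift `step` percent of traffic from the stable model to the
    canary model, keeping the total at 100. `split` maps deployed
    model IDs to integer traffic percentages."""
    new = dict(split)
    move = min(step, 100 - new[canary])          # never exceed 100%
    stable = next(k for k in new if k != canary)  # assumes two models
    new[canary] += move
    new[stable] -= move
    return new

# Start the canary at 5% of production traffic, then ramp it up
# in increments as end-user metrics (e.g. listening time) hold up.
split = {"model-v1": 95, "model-v2": 5}
split = ramp_traffic(split, "model-v2", step=15)
# split is now {"model-v1": 80, "model-v2": 20}
```

Keeping the ramp logic explicit like this makes each rollout step auditable; the actual traffic shift still happens on the endpoint, so the new split must be applied there before it takes effect.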