
Google Exam Professional Machine Learning Engineer Topic 6 Question 71 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 71
Topic #: 6

You recently deployed a model to a Vertex AI endpoint and set up online serving in Vertex AI Feature Store. You have configured a daily batch ingestion job to update your featurestore. During the batch ingestion jobs, you discover that CPU utilization is high in your featurestore's online serving nodes and that feature retrieval latency is high. You need to improve online serving performance during the daily batch ingestion. What should you do?

A) Schedule an increase in the number of online serving nodes in your featurestore prior to the batch ingestion jobs.
B) Enable autoscaling of the online serving nodes in your featurestore.
C) Enable autoscaling for the prediction nodes of your DeployedModel in the Vertex AI endpoint.
D) Increase the worker counts in the importFeatureValues request of your batch ingestion job.

Suggested Answer: B

Vertex AI Feature Store provides two options for online serving: Bigtable and optimized online serving. Both options support autoscaling, which means the number of online serving nodes adjusts automatically to traffic demand. Enabling autoscaling therefore improves online serving performance and reduces feature retrieval latency during the daily batch ingestion, and it also helps optimize the cost and resource utilization of your featurestore.

Reference:

Online serving | Vertex AI | Google Cloud

New Vertex AI Feature Store: BigQuery-Powered, GenAI-Ready | Google Cloud Blog
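For concreteness, below is a minimal sketch of how autoscaling might be enabled on an existing featurestore through the google-cloud-aiplatform Python client. It is illustrative only: the project, region, and featurestore IDs are placeholders, and the node-count range is an assumption you would tune to your own traffic.

from google.cloud import aiplatform_v1
from google.protobuf import field_mask_pb2

# All identifiers below are placeholders -- substitute your own values.
PROJECT = "my-project"
REGION = "us-central1"
FEATURESTORE_ID = "my_featurestore"

client = aiplatform_v1.FeaturestoreServiceClient(
    client_options={"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
)

# Swap a fixed node count for an autoscaling range, so online serving can
# add nodes during the daily batch ingestion and scale back in afterwards.
featurestore = aiplatform_v1.Featurestore(
    name=client.featurestore_path(PROJECT, REGION, FEATURESTORE_ID),
    online_serving_config=aiplatform_v1.Featurestore.OnlineServingConfig(
        scaling=aiplatform_v1.Featurestore.OnlineServingConfig.Scaling(
            min_node_count=1,  # baseline capacity for normal traffic
            max_node_count=5,  # headroom for ingestion-time load spikes
        )
    ),
)

# Update only the scaling settings; everything else is left untouched.
operation = client.update_featurestore(
    featurestore=featurestore,
    update_mask=field_mask_pb2.FieldMask(paths=["online_serving_config.scaling"]),
)
print(operation.result())  # blocks until the long-running update finishes

As an aside, the same API exposes a worker_count field on ImportFeatureValuesRequest, which is the setting option D refers to; it controls the parallelism of the ingestion job itself rather than online serving capacity, which is why the suggested answer favors autoscaling instead.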


Contribute your Thoughts:

Sanjuana
6 months ago
I think scheduling an increase in the number of online serving nodes could also help.
upvoted 0 times
Slyvia
6 months ago
I disagree. I believe increasing the worker counts in the batch ingestion job would be more effective.
upvoted 0 times
Isadora
7 months ago
That sounds like a good idea. It could help improve performance during the batch ingestion.
upvoted 0 times
Larue
7 months ago
I think we should enable autoscaling of the online serving nodes.
upvoted 0 times
Jina
8 months ago
But what about option D? Increasing the worker counts in the batch ingestion job could also help distribute the load and reduce the impact on online serving, no?
upvoted 0 times
Walton
7 months ago
D) Increasing the worker counts in the batch ingestion job could also help distribute the load and reduce the impact on online serving.
upvoted 0 times
Vi
8 months ago
A) Schedule an increase in the number of online serving nodes in your featurestore prior to the batch ingestion jobs.
upvoted 0 times
Percy
8 months ago
D) Increase the worker counts in the importFeatureValues request of your batch ingestion job.
upvoted 0 times
Shelba
8 months ago
C) Enable autoscaling for the prediction nodes of your DeployedModel in the Vertex AI endpoint.
upvoted 0 times
Odette
8 months ago
B) Enable autoscaling of the online serving nodes in your featurestore.
upvoted 0 times
Mitzie
8 months ago
A) Schedule an increase in the number of online serving nodes in your featurestore prior to the batch ingestion jobs.
upvoted 0 times
Maia
8 months ago
I'm leaning towards option B, enabling autoscaling of the online serving nodes. That should help the featurestore handle the increased load without us having to manually adjust the node count.
upvoted 0 times
Sherly
8 months ago
Ha, imagine if we had to manually adjust the worker counts every time. 'Okay, everyone, stop what you're doing, it's time for the daily batch ingestion! Quick, someone count the nodes and tell me how many workers we need!'
upvoted 0 times
Corrinne
8 months ago
Hmm, I'm not sure. Autoscaling seems like the more elegant solution to me. I don't want to have to manually adjust the worker counts every time we have a batch ingestion job.
upvoted 0 times
