
Google Exam Professional-Machine-Learning-Engineer Topic 4 Question 81 Discussion

Actual exam question from Google's Professional Machine Learning Engineer exam
Question #: 81
Topic #: 4
[All Google Professional Machine Learning Engineer Questions]

You developed a Python module by using Keras to train a regression model. You developed two model architectures, linear regression and a deep neural network (DNN), within the same module. You are using the --training_method argument to select one of the two methods, and you are using the learning_rate and num_hidden_layers arguments for the DNN. You plan to use Vertex AI's hyperparameter tuning service with a budget of 100 trials. You want to identify the model architecture and hyperparameter values that minimize training loss and maximize model performance. What should you do?
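The module described in the question exposes its architecture choice and DNN hyperparameters as command-line arguments, which is how Vertex AI passes trial values into a training container. A minimal sketch of that argument handling (function names `build_parser` and `describe_model` are illustrative, not from the question):

```python
import argparse

def build_parser():
    # --training_method selects the architecture; the DNN-only
    # hyperparameters are simply unused by the linear branch.
    p = argparse.ArgumentParser()
    p.add_argument("--training_method", choices=["linear", "dnn"], required=True)
    p.add_argument("--learning_rate", type=float, default=1e-3)
    p.add_argument("--num_hidden_layers", type=int, default=2)
    return p

def describe_model(args):
    # Stand-in for the Keras model-building code: returns the
    # configuration each trial would train with.
    if args.training_method == "linear":
        return {"architecture": "linear_regression"}
    return {
        "architecture": "dnn",
        "learning_rate": args.learning_rate,
        "num_hidden_layers": args.num_hidden_layers,
    }
```

In a real trial, the training code would build and fit the chosen Keras model and report the final training loss back to the tuning service (e.g. via the cloudml-hypertune helper library).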

Suggested Answer: C
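The suggested answer points at running a single tuning job in which the DNN-only hyperparameters are conditional on the training method. A hedged sketch of what such a study spec could look like, written as the dict/JSON structure the Vertex AI REST API accepts (the ranges and scale types below are illustrative assumptions, not taken from the question):

```python
# Sketch of a Vertex AI hyperparameter tuning study spec with
# conditional parameters. learning_rate and num_hidden_layers are
# only sampled when training_method == "dnn", so all 100 trials
# run inside one tuning job while still comparing both architectures.
study_spec = {
    "metrics": [{"metricId": "training_loss", "goal": "MINIMIZE"}],
    "parameters": [
        {
            "parameterId": "training_method",
            "categoricalValueSpec": {"values": ["linear", "dnn"]},
            "conditionalParameterSpecs": [
                {
                    "parentCategoricalValues": {"values": ["dnn"]},
                    "parameterSpec": {
                        "parameterId": "learning_rate",
                        "doubleValueSpec": {"minValue": 1e-4, "maxValue": 1e-1},
                        "scaleType": "UNIT_LOG_SCALE",
                    },
                },
                {
                    "parentCategoricalValues": {"values": ["dnn"]},
                    "parameterSpec": {
                        "parameterId": "num_hidden_layers",
                        "integerValueSpec": {"minValue": 1, "maxValue": 8},
                        "scaleType": "UNIT_LINEAR_SCALE",
                    },
                },
            ],
        }
    ],
}
```

The design point the spec illustrates: with conditional parameters the tuner never wastes trials varying DNN hyperparameters on linear-regression runs, whereas two separate 50-trial jobs (as some comments below suggest) would split the budget before knowing which architecture is better.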

Contribute your Thoughts:

Carin
3 months ago
Option A looks tempting, but setting the number of hidden layers as a conditional hyperparameter seems like the way to go. Gotta love that Vertex AI magic!
upvoted 0 times
...
Amalia
3 months ago
I'm leaning towards option D. Doing a 50-trial run to select the architecture, then fine-tuning it, seems like a good compromise between exploration and exploitation.
upvoted 0 times
...
Hubert
3 months ago
Haha, I bet the developer who wrote this question has a lot of experience with hyperparameter tuning. It's like a brain teaser!
upvoted 0 times
Jacki
2 months ago
D
upvoted 0 times
...
Gussie
2 months ago
B
upvoted 0 times
...
Tamekia
2 months ago
A
upvoted 0 times
...
Kirk
2 months ago
D
upvoted 0 times
...
Nelida
2 months ago
C
upvoted 0 times
...
Dorinda
2 months ago
B
upvoted 0 times
...
Arlene
2 months ago
A
upvoted 0 times
...
Elli
2 months ago
A
upvoted 0 times
...
...
Shanda
3 months ago
Option B seems like a lot of work. Why not just do one hypertuning job and let Vertex AI handle the different architectures?
upvoted 0 times
...
Robt
3 months ago
I agree with German. Running one hypertuning job with conditional hyperparameters seems like the best approach.
upvoted 0 times
...
Adaline
3 months ago
I think option C is the best approach. Setting the hyperparameters as conditional on the training method makes the most sense to me.
upvoted 0 times
Sina
2 months ago
True, but option C also ensures that the hyperparameters are optimized based on the selected architecture.
upvoted 0 times
...
Kristal
2 months ago
That's a good point. Maybe running separate jobs for linear regression and DNN could give us a clearer picture.
upvoted 0 times
...
Billye
2 months ago
But wouldn't it be better to compare the two architectures separately like in option B?
upvoted 0 times
...
Thea
2 months ago
I agree, option C seems like the most logical choice.
upvoted 0 times
...
Precious
2 months ago
Yes, setting the hyperparameters as conditional based on the training method can help in finding the best combination for minimizing training loss.
upvoted 0 times
...
Bev
3 months ago
I agree, option C seems like the most efficient way to optimize the model architecture and hyperparameters.
upvoted 0 times
...
...
German
3 months ago
That makes sense. We can optimize both model architecture and hyperparameters that way.
upvoted 0 times
...
Jamal
3 months ago
I disagree. We should run one hypertuning job for 100 trials and set conditional hyperparameters.
upvoted 0 times
...
German
4 months ago
I think we should run two separate hypertuning jobs to compare linear regression and DNN.
upvoted 0 times
...
