
Amazon Exam PAS-C01 Topic 4 Question 44 Discussion

Actual exam question for Amazon's PAS-C01 exam
Question #: 44
Topic #: 4
[All PAS-C01 Questions]

Business users are reporting timeouts during periods of peak query activity on an enterprise SAP HANA data mart. An SAP system administrator has discovered that at peak volume the CPU utilization increases rapidly to 100% for extended periods on the x1.32xlarge Amazon EC2 instance where the database is installed. However, the SAP HANA database is occupying only 1,120 GiB of the available 1,952 GiB on the instance, and I/O wait times are not increasing. Extensive query tuning and system tuning have not resolved this performance problem.

Which solutions should the SAP system administrator use to improve the performance? (Select TWO.)
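A quick sanity check on the figures in the scenario (numbers taken from the question itself): the database occupies well under the instance's memory, so memory pressure is unlikely to be the bottleneck while the CPU sits at 100%:

```python
# Sanity check on the scenario's numbers: memory headroom vs. CPU saturation.
used_gib = 1120       # SAP HANA database footprint from the question
capacity_gib = 1952   # total memory on the x1.32xlarge instance
utilization = used_gib / capacity_gib
print(f"Memory utilization: {utilization:.1%}")  # roughly 57%
headroom_gib = capacity_gib - used_gib
print(f"Headroom: {headroom_gib} GiB")
```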

Suggested Answer: C, E
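For context on the global_allocation_limit debated in the comments below: in SAP HANA this parameter lives in global.ini and is specified in megabytes, so 1,120 GiB corresponds to 1,146,880 MB. A sketch of how such a change would be applied (illustrative only; note that capping the memory allocation limit does not add CPU capacity):

```sql
-- Illustrative only: set global_allocation_limit (value in MB) in global.ini
-- 1,120 GiB = 1120 * 1024 = 1,146,880 MB
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('memorymanager', 'global_allocation_limit') = '1146880'
    WITH RECONFIGURE;
```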

Contribute your Thoughts:

Elfriede
2 months ago
Hey, at least they're not reporting timeouts during periods of peak query activity on a non-enterprise SAP HANA data mart. That would be just plain embarrassing.
upvoted 0 times
Val
1 month ago
B) Migrate the SAP HANA database to an EC2 High Memory instance with a larger number of available vCPUs
upvoted 0 times
...
Huey
1 month ago
A) Reduce the global_allocation_limit parameter to 1,120 GiB
upvoted 0 times
...
...
Natalya
2 months ago
A scale-out architecture could be a good long-term solution, but it might be overkill for this issue. Let's try the simpler options first.
upvoted 0 times
Lashaun
2 months ago
I agree, starting with these simpler options could help improve performance before considering a scale-out architecture.
upvoted 0 times
...
Bettyann
2 months ago
D) Modify the Amazon Elastic Block Store (Amazon EBS) volume type from General Purpose to Provisioned IOPS for all SAP HANA data volumes
upvoted 0 times
...
Gladys
2 months ago
B) Migrate the SAP HANA database to an EC2 High Memory instance with a larger number of available vCPUs
upvoted 0 times
...
...
Natalya
2 months ago
I don't know, man. Have you tried turning it off and on again? That usually works, right?
upvoted 0 times
...
Tiera
2 months ago
Reducing the global_allocation_limit seems like a risky move. We don't want to artificially limit the database's memory usage when it's not the root cause of the problem.
upvoted 0 times
Graciela
1 month ago
C) Move to a scale-out architecture for SAP HANA with at least three x1.16xlarge instances
upvoted 0 times
...
German
1 month ago
I agree, reducing the global_allocation_limit could cause more issues. Migrating to a High Memory instance seems like a better solution.
upvoted 0 times
...
Lina
1 month ago
D) Modify the Amazon Elastic Block Store (Amazon EBS) volume type from General Purpose to Provisioned IOPS for all SAP HANA data volumes
upvoted 0 times
...
Pa
1 month ago
I agree, reducing the global_allocation_limit could cause more issues. Migrating to a High Memory instance and changing the EBS volume type seem like safer options.
upvoted 0 times
...
Joesph
2 months ago
D) Modify the Amazon Elastic Block Store (Amazon EBS) volume type from General Purpose to Provisioned IOPS for all SAP HANA data volumes
upvoted 0 times
...
Marylyn
2 months ago
B) Migrate the SAP HANA database to an EC2 High Memory instance with a larger number of available vCPUs
upvoted 0 times
...
Mireya
2 months ago
B) Migrate the SAP HANA database to an EC2 High Memory instance with a larger number of available vCPUs
upvoted 0 times
...
...
Elfrieda
2 months ago
I'm not sure about option A. Reducing the global_allocation_limit parameter may not address the underlying issue of high CPU utilization.
upvoted 0 times
...
Vallie
3 months ago
I agree with Katheryn. Option C might also be beneficial to distribute the workload across multiple instances.
upvoted 0 times
...
Katheryn
3 months ago
I think option B could help by providing more vCPUs for better performance.
upvoted 0 times
...
Noel
3 months ago
Hmm, I think the solution is to migrate to a larger EC2 instance with more vCPUs. The CPU is maxing out, so we need more horsepower to handle the load.
upvoted 0 times
Matt
2 months ago
B) Migrate the SAP HANA database to an EC2 High Memory instance with a larger number of available vCPUs
upvoted 0 times
...
Daniel
2 months ago
A) Reduce the global_allocation_limit parameter to 1,120 GiB
upvoted 0 times
...
...
