
Salesforce Exam Data Architect Topic 1 Question 52 Discussion

Actual exam question for Salesforce's Data Architect exam
Question #: 52
Topic #: 1

Universal Containers has been a customer of Salesforce for 10 years and currently has 2 million accounts in the system. Due to an erroneous integration built 3 years ago, an estimated 500,000 of those accounts are duplicates.

Which solution should a data architect recommend to remediate the duplication issue?

A. Develop an ETL process that utilizes the Merge API to merge the duplicate records.
B. Utilize a data warehouse as the system of truth.
C. Extract the data using Data Loader and use Excel to merge the duplicate records.
D. Implement duplicate rules.

Suggested Answer: D

Implementing duplicate rules (option D) is the best way to remediate the duplication issue: it lets the data architect identify and merge duplicate accounts using native Salesforce duplicate management features, and it prevents new duplicates from being created going forward. Developing an ETL process that uses the Merge API (option A) would require significant coding and testing effort and does nothing to stop future duplicates from entering Salesforce. Using a data warehouse as the system of truth (option B) adds complexity and cost without addressing the duplicates that already exist in Salesforce. Extracting the data with Data Loader and merging the records in Excel (option C) is time-consuming and error-prone at this volume, and it likewise does not prevent new duplicates from being created.
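As a rough illustration of the kind of scale assessment an architect might do before choosing a remediation path, here is a minimal Python sketch that profiles candidate duplicate groups in a Data Loader CSV export. It assumes a hypothetical accounts.csv file with Id, Name, and Website columns and a naive normalized Name + Website match key; a real cleanup would follow the org's matching rules rather than this script.

```python
# Minimal sketch (assumptions: a Data Loader export named accounts.csv with
# Id, Name, and Website columns; a naive normalized Name + Website match key).
# This only profiles duplicate candidates -- it does not merge or delete anything.
import csv
import re
from collections import defaultdict

def normalize(value: str) -> str:
    """Lowercase and strip non-alphanumeric characters so near-identical values match."""
    return re.sub(r"[^a-z0-9]", "", (value or "").lower())

groups = defaultdict(list)
with open("accounts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        key = (normalize(row["Name"]), normalize(row.get("Website", "")))
        groups[key].append(row["Id"])

duplicate_groups = {key: ids for key, ids in groups.items() if len(ids) > 1}
redundant_records = sum(len(ids) - 1 for ids in duplicate_groups.values())

print(f"{len(duplicate_groups)} candidate duplicate groups")
print(f"{redundant_records} records that could be merged into surviving accounts")
```

A profile like this can help validate the 500,000-duplicate estimate, but the actual identification and merging is better handled with Salesforce's native duplicate management features, as the suggested answer explains.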


Contribute your Thoughts:

Gladys
3 days ago
I'd go with option D. Implement duplicate rules and let Salesforce handle the deduplication automatically. Easy peasy!
upvoted 0 times
Alona
7 days ago
Haha, using a data warehouse as the 'system of truth' for Salesforce data? That's like using a sledgehammer to crack a nut. Option B is way overkill.
upvoted 0 times
Cathern
10 days ago
Option C sounds too manual and time-consuming for a 2 million record dataset. Excel is not the right tool for this kind of large-scale data cleanup.
upvoted 0 times
Ty
16 days ago
But wouldn't using Data Loader and Excel be a quicker fix?
upvoted 0 times
Kerrie
20 days ago
I think option A is the best choice. Using the Salesforce Merge API to automate the deduplication process is the most comprehensive solution here.
upvoted 0 times
Bambi
22 days ago
Definitely option D. Implementing duplicate rules is the way to go for a large dataset like this. It's a scalable and efficient solution.
upvoted 0 times
Hoa
10 days ago
I agree, option D is definitely the best choice for this situation.
upvoted 0 times
Leila
25 days ago
I disagree, I believe implementing duplicate rules would be a more efficient solution.
upvoted 0 times
Ty
26 days ago
I think we should develop an ETL process to merge the duplicates.
upvoted 0 times
