
Google Exam Professional Data Engineer Topic 3 Question 98 Discussion

Actual exam question for Google's Professional Data Engineer exam
Question #: 98
Topic #: 3

You are migrating your on-premises data warehouse to BigQuery. As part of the migration, you want to facilitate cross-team collaboration to get the most value out of the organization's data. You need to design an architecture that would allow teams within the organization to securely publish, discover, and subscribe to read-only data in a self-service manner. You need to minimize costs while also maximizing data freshness. What should you do?

Suggested Answer: C

To provide a cost-effective storage and processing solution that lets data scientists explore data much as they did on the on-premises HDFS cluster with SQL on the Hive query engine, deploying a Dataproc cluster is the best choice. Here's why:

Compatibility with Hive:

Dataproc is a fully managed Apache Spark and Hadoop service that provides native support for Hive, making it easy for data scientists to run SQL queries on the data as they would in an on-premises Hadoop environment.

This ensures that the transition to Google Cloud is smooth, with minimal changes required in the workflow.

Cost-Effective Storage:

Storing the ORC files in Cloud Storage is cost-effective and scalable, providing a reliable and durable storage solution that integrates seamlessly with Dataproc.

Cloud Storage allows you to store large datasets at a lower cost compared to other storage options.
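
As a rough sketch, the migration bucket could be created like this (the bucket name, project, location, and storage class below are placeholders, not values from the question):

  gcloud storage buckets create gs://example-warehouse-orc \
      --project=example-project \
      --location=us-central1 \
      --default-storage-class=STANDARD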

Hive Integration:

Dataproc supports running Hive directly, which is essential for data scientists familiar with SQL on the Hive query engine.

This setup enables the use of existing Hive queries and scripts without significant modifications.
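
For example, an existing Hive script can be submitted to a Dataproc cluster unchanged (the cluster name, region, and script path are placeholders):

  gcloud dataproc jobs submit hive \
      --cluster=hive-migration-cluster \
      --region=us-central1 \
      --file=gs://example-warehouse-orc/scripts/daily_report.hql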

Steps to Implement:

Copy ORC Files to Cloud Storage:

Transfer the ORC files from the on-premises HDFS cluster to Cloud Storage, ensuring they are organized in a similar directory structure.
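
One way to do this, assuming the Cloud Storage connector is installed on the on-premises cluster, is DistCp (the HDFS path and bucket below are placeholders):

  hadoop distcp \
      hdfs:///user/hive/warehouse/sales_orc \
      gs://example-warehouse-orc/warehouse/sales_orc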

Deploy Dataproc Cluster:

Set up a Dataproc cluster configured to run Hive. Ensure that the cluster has access to the ORC files stored in Cloud Storage.
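
A minimal cluster sketch (the name, region, image version, machine types, and worker count are placeholder assumptions; Hive is included in the standard Dataproc image):

  gcloud dataproc clusters create hive-migration-cluster \
      --region=us-central1 \
      --image-version=2.1-debian11 \
      --master-machine-type=n1-standard-4 \
      --worker-machine-type=n1-standard-4 \
      --num-workers=2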

Configure Hive:

Configure Hive on Dataproc to read from the ORC files in Cloud Storage. This can be done by setting up external tables in Hive that point to the Cloud Storage location.
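
For example, the table definition can be submitted as a Hive job; the table name, columns, and location below are hypothetical:

  gcloud dataproc jobs submit hive \
      --cluster=hive-migration-cluster \
      --region=us-central1 \
      --execute="CREATE EXTERNAL TABLE sales_orc (order_id BIGINT, amount DOUBLE, sale_date STRING)
                 STORED AS ORC
                 LOCATION 'gs://example-warehouse-orc/warehouse/sales_orc';"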

Provide Access to Data Scientists:

Grant the data scientist team access to the Dataproc cluster and the necessary permissions to interact with the Hive tables.
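
A sketch of the minimum bindings, assuming the team is in a Google group (the project, bucket, and group are placeholders):

  # Let the team submit Hive jobs to Dataproc in the project.
  gcloud projects add-iam-policy-binding example-project \
      --member="group:data-scientists@example.com" \
      --role="roles/dataproc.editor"

  # Give read-only access to the ORC files in Cloud Storage.
  gcloud storage buckets add-iam-policy-binding gs://example-warehouse-orc \
      --member="group:data-scientists@example.com" \
      --role="roles/storage.objectViewer"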


References:

Dataproc Documentation

Hive on Dataproc

Google Cloud Storage Documentation

Contribute your Thoughts:

Oliva
18 days ago
Option D is the way to go. Analytics Hub is the data-sharing equivalent of a one-stop-shop. It's like having a personal shopper for your data needs!
upvoted 0 times
Vallie
22 days ago
Option D all the way! Analytics Hub is the way to go. It's like having a big data party where everyone's invited, but the bouncers (security policies) make sure only the right people get in.
upvoted 0 times
Adria
28 days ago
I'd go with Option D as well. It seems like the most efficient and cost-effective solution to enable cross-team collaboration while ensuring data freshness.
upvoted 0 times
Jacquelyne
11 days ago
A) Create authorized datasets to publish shared data in the subscribing team's project.
upvoted 0 times
Moon
1 month ago
I disagree, I believe option B is more efficient. It gives each team control over their own data sharing.
upvoted 0 times
Luisa
1 month ago
I think option A is the best choice. It allows for secure data sharing and minimizes costs.
upvoted 0 times
Tennie
2 months ago
Option D seems like the best choice here. Analytics Hub is designed specifically for secure data sharing, and it allows teams to discover and subscribe to data in a self-service way.
upvoted 0 times
Lilli
15 days ago
I think using Analytics Hub would definitely streamline the process and make it easier for teams to work together.
upvoted 0 times
Melissa
21 days ago
Creating authorized datasets in each team's project could get messy. Analytics Hub seems like a cleaner option.
upvoted 0 times
Micheal
23 days ago
Agreed, Analytics Hub sounds like the most efficient solution for facilitating cross-team collaboration in data sharing.
upvoted 0 times
Lashon
24 days ago
I agree, Analytics Hub sounds like the most efficient solution for cross-team collaboration.
upvoted 0 times
Joanna
28 days ago
I think using Analytics Hub would definitely streamline the process of sharing data across teams.
upvoted 0 times
