Professional-Data-Engineer | The Secret Of Google Professional-Data-Engineer Testing Bible

Our pass rate is as high as 98.9%, and based on our seven years of training experience, our Professional-Data-Engineer study guide overlaps with the real exam by about 90%. Do you want to pass the Google Professional-Data-Engineer exam on your first try? Try the latest Google Professional-Data-Engineer practice questions and answers, and start with the free Professional-Data-Engineer brain-dump demo below.

Online Google Professional-Data-Engineer free dumps demo Below:

NEW QUESTION 1

You are deploying a new storage system for your mobile application, which is a media streaming service. You decide the best fit is Google Cloud Datastore. You have entities with multiple properties, some of which can take on multiple values. For example, in the entity ‘Movie’ the property ‘actors’ and the property ‘tags’ have multiple values, but the property ‘date_released’ does not. A typical query would ask for all movies with actor=<actorname> ordered by date_released, or all movies with tag=Comedy ordered by date_released. How should you avoid a combinatorial explosion in the number of indexes?
[Exhibit with answer options A-D not reproduced here.]

  • A. Option A
  • B. Option B
  • C. Option C
  • D. Option D

Answer: A

NEW QUESTION 2

You are developing an application that uses a recommendation engine on Google Cloud. Your solution should display new videos to customers based on past views. Your solution needs to generate labels for the entities in videos that the customer has viewed. Your design must be able to provide very fast filtering suggestions based on data from other customer preferences on several TB of data. What should you do?

  • A. Build and train a complex classification model with Spark MLlib to generate labels and filter the results. Deploy the models using Cloud Dataproc. Call the model from your application.
  • B. Build and train a classification model with Spark MLlib to generate labels. Build and train a second classification model with Spark MLlib to filter results to match customer preferences. Deploy the models using Cloud Dataproc. Call the models from your application.
  • C. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud Bigtable, and filter the predicted labels to match the user’s viewing history to generate preferences.
  • D. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud SQL, and join and filter the predicted labels to match the user’s viewing history to generate preferences.

Answer: C
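As background on how labels are generated with the Cloud Video Intelligence API, here is a minimal sketch using the Python client library; the Cloud Storage URI and timeout are placeholder assumptions, not part of the exam material.

```python
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Request label detection for a video stored in Cloud Storage (placeholder URI).
operation = client.annotate_video(
    request={
        "features": [videointelligence.Feature.LABEL_DETECTION],
        "input_uri": "gs://my-bucket/videos/movie-trailer.mp4",
    }
)
result = operation.result(timeout=300)

# Labels detected for the whole video; these could then be stored (for example
# in Cloud Bigtable) and filtered against a customer's viewing history.
for label in result.annotation_results[0].segment_label_annotations:
    print(label.entity.description)
```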

NEW QUESTION 3

Suppose you have a dataset of images that are each labeled as to whether or not they contain a human face. To create a neural network that recognizes human faces in images using this labeled dataset, what approach would likely be the most effective?

  • A. Use K-means Clustering to detect faces in the pixels.
  • B. Use feature engineering to add features for eyes, noses, and mouths to the input data.
  • C. Use deep learning by creating a neural network with multiple hidden layers to automatically detect features of faces.
  • D. Build a neural network with an input layer of pixels, a hidden layer, and an output layer with two categories.

Answer: C

Explanation:
Traditional machine learning relies on shallow nets, composed of one input and one output layer, and at most one hidden layer in between. More than three layers (including input and output) qualifies as “deep” learning. So deep is a strictly defined, technical term that means more than one hidden layer.
In deep-learning networks, each layer of nodes trains on a distinct set of features based on the previous layer’s output. The further you advance into the neural net, the more complex the features your nodes can recognize, since they aggregate and recombine features from the previous layer.
A neural network with only one hidden layer would be unable to automatically recognize high-level features of faces, such as eyes, because it wouldn't be able to "build" these features using previous hidden layers that detect low-level features, such as lines.
Feature engineering is difficult to perform on raw image data.
K-means clustering is an unsupervised learning method used to categorize unlabeled data.
Reference: https://deeplearning4j.org/neuralnet-overview
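To make the "multiple hidden layers" point concrete, here is a small illustrative Keras sketch (not from the exam material) of a network with several hidden layers for binary face / no-face classification; the input shape is an assumption.

```python
import tensorflow as tf

# Assumed input: 64x64 grayscale images labeled face (1) / no face (0).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # low-level features (edges, lines)
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # higher-level features (eyes, noses)
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),      # combinations of features
    tf.keras.layers.Dense(1, activation="sigmoid"),    # face / not face
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```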

NEW QUESTION 4

You are planning to migrate your current on-premises Apache Hadoop deployment to the cloud. You need to ensure that the deployment is as fault-tolerant and cost-effective as possible for long-running batch jobs. You want to use a managed service. What should you do?

  • A. Deploy a Cloud Dataproc cluster. Use a standard persistent disk and 50% preemptible workers. Store data in Cloud Storage, and change references in scripts from hdfs:// to gs://
  • B. Deploy a Cloud Dataproc cluster. Use an SSD persistent disk and 50% preemptible workers. Store data in Cloud Storage, and change references in scripts from hdfs:// to gs://
  • C. Install Hadoop and Spark on a 10-node Compute Engine instance group with standard instances. Install the Cloud Storage connector, and store the data in Cloud Storage. Change references in scripts from hdfs:// to gs://
  • D. Install Hadoop and Spark on a 10-node Compute Engine instance group with preemptible instances. Store data in HDFS. Change references in scripts from hdfs:// to gs://

Answer: A
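As an illustration of the "change references in scripts from hdfs:// to gs://" step, here is a minimal PySpark sketch; the bucket and file paths are placeholder assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("migrated-batch-job").getOrCreate()

# On-prem the job read from HDFS, e.g. "hdfs:///data/events/*.csv".
# On Cloud Dataproc with the Cloud Storage connector, only the URI scheme changes:
events = spark.read.csv("gs://my-bucket/data/events/*.csv", header=True)

(events.groupBy("event_type")
       .count()
       .write.mode("overwrite")
       .parquet("gs://my-bucket/output/event_counts"))
```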

NEW QUESTION 5

An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How should you build this pipeline?

  • A. Use federated data sources, and check data in the SQL query.
  • B. Enable BigQuery monitoring in Google Stackdriver and create an alert.
  • C. Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.
  • D. Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.

Answer: D
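A rough sketch of the dead-letter pattern in an Apache Beam (Cloud Dataflow) Python pipeline; the table names, GCS path, and expected column count are assumptions for illustration only.

```python
import csv
import apache_beam as beam


class ParseCsvRow(beam.DoFn):
    """Parses one CSV line; routes malformed rows to a 'dead_letter' output."""

    def process(self, line):
        try:
            fields = next(csv.reader([line]))
            if len(fields) != 3:  # assumed column count
                raise ValueError("unexpected column count")
            yield {"id": fields[0], "name": fields[1], "amount": float(fields[2])}
        except Exception:
            yield beam.pvalue.TaggedOutput("dead_letter", line)


with beam.Pipeline() as p:
    results = (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/daily_dump/*.csv")
        | "Parse" >> beam.ParDo(ParseCsvRow()).with_outputs("dead_letter", main="parsed")
    )
    # Good rows go to the main table, bad rows to a dead-letter table for analysis.
    results.parsed | "WriteGood" >> beam.io.WriteToBigQuery(
        "my_project:my_dataset.daily_data",
        schema="id:STRING,name:STRING,amount:FLOAT")
    (results.dead_letter
     | "WrapBad" >> beam.Map(lambda line: {"raw_line": line})
     | "WriteBad" >> beam.io.WriteToBigQuery(
         "my_project:my_dataset.dead_letter",
         schema="raw_line:STRING"))
```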

NEW QUESTION 6

You need to set access to BigQuery for different departments within your company. Your solution should comply with the following requirements:
  • Each department should have access only to their data.
  • Each department will have one or more leads who need to be able to create and update tables and provide them to their team.
  • Each department has data analysts who need to be able to query but not modify data.
How should you set access to the data in BigQuery?

  • A. Create a dataset for each department. Assign the department leads the role of OWNER, and assign the data analysts the role of WRITER on their dataset.
  • B. Create a dataset for each department. Assign the department leads the role of WRITER, and assign the data analysts the role of READER on their dataset.
  • C. Create a table for each department. Assign the department leads the role of Owner, and assign the data analysts the role of Editor on the project the table is in.
  • D. Create a table for each department. Assign the department leads the role of Editor, and assign the data analysts the role of Viewer on the project the table is in.

Answer: D
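For background on how dataset-level BigQuery access is granted programmatically, here is a minimal sketch with the google-cloud-bigquery Python client; the dataset name and e-mail addresses are placeholder assumptions.

```python
from google.cloud import bigquery

client = bigquery.Client()
dataset = client.get_dataset("my_project.finance_dept")  # one dataset per department (assumed)

entries = list(dataset.access_entries)
entries.append(bigquery.AccessEntry(
    role="WRITER", entity_type="userByEmail", entity_id="finance-lead@example.com"))
entries.append(bigquery.AccessEntry(
    role="READER", entity_type="groupByEmail", entity_id="finance-analysts@example.com"))

dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])  # applies the new ACL to the dataset
```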

NEW QUESTION 7

The YARN ResourceManager and the HDFS NameNode interfaces are available on a Cloud Dataproc cluster ____.

  • A. application node
  • B. conditional node
  • C. master node
  • D. worker node

Answer: C

Explanation:
The YARN ResourceManager and the HDFS NameNode interfaces are available on a Cloud Dataproc cluster master node. The cluster master-host-name is the name of your Cloud Dataproc cluster followed by an -m suffix—for example, if your cluster is named "my-cluster", the master-host-name would be "my-cluster-m".
Reference: https://cloud.google.com/dataproc/docs/concepts/cluster-web-interfaces#interfaces

NEW QUESTION 8

Does Dataflow process batch data pipelines or streaming data pipelines?

  • A. Only Batch Data Pipelines
  • B. Both Batch and Streaming Data Pipelines
  • C. Only Streaming Data Pipelines
  • D. None of the above

Answer: B

Explanation:
Dataflow is a unified processing model and can execute both streaming and batch data pipelines.
Reference: https://cloud.google.com/dataflow/
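To illustrate the unified model, here is a minimal Beam Python sketch that runs essentially the same transform over a bounded (batch) source and an unbounded (streaming) source; the paths, project, and topic names are placeholder assumptions.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Batch: bounded input from Cloud Storage files.
with beam.Pipeline(options=PipelineOptions()) as p:
    (p
     | beam.io.ReadFromText("gs://my-bucket/input/*.txt")
     | beam.Map(str.upper)
     | beam.io.WriteToText("gs://my-bucket/output/batch"))

# Streaming: unbounded input from Cloud Pub/Sub, same transform, streaming mode enabled.
with beam.Pipeline(options=PipelineOptions(streaming=True)) as p:
    (p
     | beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
     | beam.Map(lambda msg: msg.decode("utf-8").upper().encode("utf-8"))
     | beam.io.WriteToPubSub(topic="projects/my-project/topics/events-upper"))
```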

NEW QUESTION 9

MJTelco needs you to create a schema in Google Bigtable that will allow for the historical analysis of the last 2 years of records. Each record that comes in is sent every 15 minutes, and contains a unique identifier of the device and a data record. The most common query is for all the data for a given device for a given day. Which schema should you use?

  • A. Rowkey: date#device_idColumn data: data_point
  • B. Rowkey: dateColumn data: device_id, data_point
  • C. Rowkey: device_idColumn data: date, data_point
  • D. Rowkey: data_pointColumn data: device_id, date
  • E. Rowkey: date#data_pointColumn data: device_id

Answer: D

NEW QUESTION 10

Which action can a Cloud Dataproc Viewer perform?

  • A. Submit a job.
  • B. Create a cluster.
  • C. Delete a cluster.
  • D. List the jobs.

Answer: D

Explanation:
A Cloud Dataproc Viewer is limited in its actions based on its role. A viewer can only list clusters, get cluster details, list jobs, get job details, list operations, and get operation details.
Reference: https://cloud.google.com/dataproc/docs/concepts/iam#iam_roles_and_cloud_dataproc_operations_summary
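For reference, listing jobs (one of the actions a Viewer can perform) with the Dataproc Python client might look like the sketch below; the project and region are placeholder assumptions.

```python
from google.cloud import dataproc_v1

project_id, region = "my-project", "us-central1"  # placeholders

client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"})

# A caller with only the Dataproc Viewer role can list jobs and read their details.
for job in client.list_jobs(request={"project_id": project_id, "region": region}):
    print(job.reference.job_id, job.status.state.name)
```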

NEW QUESTION 11

Which of the following is NOT one of the three main types of triggers that Dataflow supports?

  • A. Trigger based on element size in bytes
  • B. Trigger that is a combination of other triggers
  • C. Trigger based on element count
  • D. Trigger based on time

Answer: A

Explanation:
Dataflow supports three major kinds of triggers:
1. Time-based triggers
2. Data-driven triggers. You can set a trigger to emit results from a window when that window has received a certain number of data elements.
3. Composite triggers. These triggers combine multiple time-based or data-driven triggers in some logical way.
Reference: https://cloud.google.com/dataflow/model/triggers
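A minimal Beam Python sketch showing the three kinds together: a composite trigger built from a time-based AfterWatermark/AfterProcessingTime and a data-driven AfterCount. The window size, counts, and in-memory input are arbitrary illustration values.

```python
import apache_beam as beam
from apache_beam.transforms.trigger import (
    AccumulationMode, AfterCount, AfterProcessingTime, AfterWatermark)
from apache_beam.transforms.window import FixedWindows

with beam.Pipeline() as p:
    counts = (
        p
        | beam.Create([("user1", 1), ("user2", 1)])    # stand-in for a real streaming source
        | beam.WindowInto(
            FixedWindows(60),                          # 60-second windows
            trigger=AfterWatermark(
                early=AfterCount(100),                 # data-driven early firings
                late=AfterProcessingTime(10)),         # time-based late firings
            accumulation_mode=AccumulationMode.DISCARDING,
            allowed_lateness=600)
        | beam.CombinePerKey(sum))
```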

NEW QUESTION 12

You have an Apache Kafka Cluster on-prem with topics containing web application logs. You need to replicate the data to Google Cloud for analysis in BigQuery and Cloud Storage. The preferred replication method is mirroring to avoid deployment of Kafka Connect plugins.
What should you do?

  • A. Deploy a Kafka cluster on GCE VM instances. Configure your on-prem cluster to mirror your topics to the cluster running in GCE. Use a Dataproc cluster or Dataflow job to read from Kafka and write to GCS.
  • B. Deploy a Kafka cluster on GCE VM instances with the PubSub Kafka connector configured as a Sink connector. Use a Dataproc cluster or Dataflow job to read from Kafka and write to GCS.
  • C. Deploy the PubSub Kafka connector to your on-prem Kafka cluster and configure PubSub as a Source connector. Use a Dataflow job to read from PubSub and write to GCS.
  • D. Deploy the PubSub Kafka connector to your on-prem Kafka cluster and configure PubSub as a Sink connector. Use a Dataflow job to read from PubSub and write to GCS.

Answer: A
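As an illustration of the "read from Kafka and write to GCS" step, here is a rough Beam Python sketch using the cross-language KafkaIO connector; the broker address, topic, bucket, and the bounded max_num_records test limit are placeholder assumptions.

```python
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka

with beam.Pipeline() as p:
    (p
     | ReadFromKafka(
         consumer_config={"bootstrap.servers": "kafka-mirror-1:9092"},  # assumed GCE mirror
         topics=["web-app-logs"],
         max_num_records=1000)                  # bounded test read; drop for a streaming job
     | beam.Map(lambda kv: kv[1].decode("utf-8"))   # (key, value) pairs arrive as bytes
     | beam.io.WriteToText("gs://my-bucket/kafka-logs/logs"))
```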

NEW QUESTION 13

Data Analysts in your company have the Cloud IAM Owner role assigned to them in their projects to allow them to work with multiple GCP products in their projects. Your organization requires that all BigQuery data access logs be retained for 6 months. You need to ensure that only audit personnel in your company can access the data access logs for all projects. What should you do?

  • A. Enable data access logs in each Data Analyst’s project. Restrict access to Stackdriver Logging via Cloud IAM roles.
  • B. Export the data access logs via a project-level export sink to a Cloud Storage bucket in the Data Analysts’ projects. Restrict access to the Cloud Storage bucket.
  • C. Export the data access logs via a project-level export sink to a Cloud Storage bucket in a newly created project for audit logs. Restrict access to the project with the exported logs.
  • D. Export the data access logs via an aggregated export sink to a Cloud Storage bucket in a newly created project for audit logs. Restrict access to the project that contains the exported logs.

Answer: D

NEW QUESTION 14

You operate an IoT pipeline built around Apache Kafka that normally receives around 5000 messages per second. You want to use Google Cloud Platform to create an alert as soon as the moving average over 1 hour drops below 4000 messages per second. What should you do?

  • A. Consume the stream of data in Cloud Dataflow using KafkaIO. Set a sliding time window of 1 hour every 5 minutes. Compute the average when the window closes, and send an alert if the average is less than 4000 messages.
  • B. Consume the stream of data in Cloud Dataflow using KafkaIO. Set a fixed time window of 1 hour. Compute the average when the window closes, and send an alert if the average is less than 4000 messages.
  • C. Use Kafka Connect to link your Kafka message queue to Cloud Pub/Sub. Use a Cloud Dataflow template to write your messages from Cloud Pub/Sub to Cloud Bigtable. Use Cloud Scheduler to run a script every hour that counts the number of rows created in Cloud Bigtable in the last hour. If that number falls below 4000, send an alert.
  • D. Use Kafka Connect to link your Kafka message queue to Cloud Pub/Sub. Use a Cloud Dataflow template to write your messages from Cloud Pub/Sub to BigQuery. Use Cloud Scheduler to run a script every five minutes that counts the number of rows created in BigQuery in the last hour. If that number falls below 4000, send an alert.

Answer: C
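For reference on how a 1-hour sliding window that advances every 5 minutes is expressed in Beam's Python SDK, here is a rough sketch; the Pub/Sub source, threshold handling, and alert function are placeholder assumptions.

```python
import logging

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import SlidingWindows


def check_rate(message_count, window_seconds=3600, threshold=4000):
    # Moving average in messages per second over the 1-hour window.
    rate = message_count / float(window_seconds)
    if rate < threshold:
        logging.warning("ALERT: average rate dropped to %.1f msg/s", rate)
    return rate


with beam.Pipeline(options=PipelineOptions(streaming=True)) as p:
    (p
     | beam.io.ReadFromPubSub(topic="projects/my-project/topics/iot-events")  # assumed bridge
     | beam.WindowInto(SlidingWindows(size=3600, period=300))   # 1 h window, every 5 min
     | beam.CombineGlobally(beam.combiners.CountCombineFn()).without_defaults()
     | beam.Map(check_rate))
```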

NEW QUESTION 15

Your team is working on a binary classification problem. You have trained a support vector machine (SVM) classifier with default parameters, and received an area under the curve (AUC) of 0.87 on the validation set. You want to increase the AUC of the model. What should you do?

  • A. Perform hyperparameter tuning
  • B. Train a classifier with deep neural networks, because neural networks would always beat SVMs
  • C. Deploy the model and measure the real-world AUC; it’s always higher because of generalization
  • D. Scale predictions you get out of the model (tune a scaling factor as a hyperparameter) in order to get the highest AUC

Answer: D
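For context, the validation AUC described in the question might be measured along these lines with scikit-learn; the synthetic stand-in data is not part of the exam material.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the real labelled dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(random_state=0).fit(X_train, y_train)               # default parameters
val_auc = roc_auc_score(y_val, clf.decision_function(X_val))  # AUC on the validation set
print(f"Validation AUC: {val_auc:.3f}")
```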

NEW QUESTION 16

Your company maintains a hybrid deployment with GCP, where analytics are performed on your anonymized customer data. The data are imported to Cloud Storage from your data center through parallel uploads to a data transfer server running on GCP. Management informs you that the daily transfers take too long and have asked you to fix the problem. You want to maximize transfer speeds. Which action should you take?

  • A. Increase the CPU size on your server.
  • B. Increase the size of the Google Persistent Disk on your server.
  • C. Increase your network bandwidth from your datacenter to GCP.
  • D. Increase your network bandwidth from Compute Engine to Cloud Storage.

Answer: C

NEW QUESTION 17

You have several Spark jobs that run on a Cloud Dataproc cluster on a schedule. Some of the jobs run in sequence, and some of the jobs run concurrently. You need to automate this process. What should you do?

  • A. Create a Cloud Dataproc Workflow Template
  • B. Create an initialization action to execute the jobs
  • C. Create a Directed Acyclic Graph in Cloud Composer
  • D. Create a Bash script that uses the Cloud SDK to create a cluster, execute jobs, and then tear down the cluster

Answer: A
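A rough sketch of a Cloud Dataproc workflow template created with the Python client, where prerequisite_step_ids expresses which jobs run in sequence and which run concurrently; the project, region, jar paths, and cluster settings are placeholder assumptions, and the exact required fields may vary.

```python
from google.cloud import dataproc_v1

project, region = "my-project", "us-central1"  # placeholders
client = dataproc_v1.WorkflowTemplateServiceClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"})

template = {
    "id": "nightly-spark-jobs",
    "placement": {"managed_cluster": {
        "cluster_name": "ephemeral-cluster",
        "config": {"gce_cluster_config": {"zone_uri": f"{region}-a"}},
    }},
    "jobs": [
        {"step_id": "prepare",
         "spark_job": {"main_jar_file_uri": "gs://my-bucket/jobs/prepare.jar"}},
        # These two steps both depend on "prepare", so they run concurrently after it.
        {"step_id": "aggregate",
         "spark_job": {"main_jar_file_uri": "gs://my-bucket/jobs/aggregate.jar"},
         "prerequisite_step_ids": ["prepare"]},
        {"step_id": "report",
         "spark_job": {"main_jar_file_uri": "gs://my-bucket/jobs/report.jar"},
         "prerequisite_step_ids": ["prepare"]},
    ],
}

client.create_workflow_template(
    request={"parent": f"projects/{project}/regions/{region}", "template": template})
```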

NEW QUESTION 18

You are selecting services to write and transform JSON messages from Cloud Pub/Sub to BigQuery for a data pipeline on Google Cloud. You want to minimize service costs. You also want to monitor and accommodate input data volume that will vary in size with minimal manual intervention. What should you do?

  • A. Use Cloud Dataproc to run your transformations. Monitor CPU utilization for the cluster. Resize the number of worker nodes in your cluster via the command line.
  • B. Use Cloud Dataproc to run your transformations. Use the diagnose command to generate an operational output archive. Locate the bottleneck and adjust cluster resources.
  • C. Use Cloud Dataflow to run your transformations. Monitor the job system lag with Stackdriver. Use the default autoscaling setting for worker instances.
  • D. Use Cloud Dataflow to run your transformations. Monitor the total execution time for a sampling of jobs. Configure the job to use non-default Compute Engine machine types when needed.
Answer: B

NEW QUESTION 19

You have some data, which is shown in the graphic below. The two dimensions are X and Y, and the shade of each dot represents what class it is. You want to classify this data accurately using a linear algorithm.
[Exhibit: graphic of the data points, not reproduced here.]
To do this you need to add a synthetic feature. What should the value of that feature be?

  • A. X^2+Y^2
  • B. X^2
  • C. Y^2
  • D. cos(X)

Answer: D

NEW QUESTION 20

You work for a bank. You have a labelled dataset that contains information on loan applications that have already been granted and whether those applications subsequently defaulted. You have been asked to train a model to predict default rates for credit applicants.
What should you do?

  • A. Increase the size of the dataset by collecting additional data.
  • B. Train a linear regression to predict a credit default risk score.
  • C. Remove the bias from the data and collect applications that have been declined loans.
  • D. Match loan applicants with their social profiles to enable feature engineering.

Answer: B

NEW QUESTION 21

You are operating a streaming Cloud Dataflow pipeline. Your engineers have a new version of the pipeline with a different windowing algorithm and triggering strategy. You want to update the running pipeline with the new version. You want to ensure that no data is lost during the update. What should you do?

  • A. Update the Cloud Dataflow pipeline in-flight by passing the --update option with the --jobName set to the existing job name.
  • B. Update the Cloud Dataflow pipeline in-flight by passing the --update option with the --jobName set to a new unique job name.
  • C. Stop the Cloud Dataflow pipeline with the Cancel option. Create a new Cloud Dataflow job with the updated code.
  • D. Stop the Cloud Dataflow pipeline with the Drain option. Create a new Cloud Dataflow job with the updated code.
Answer: A
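For reference, the update flow in the Beam Python SDK is driven by pipeline options like the sketch below; the project, bucket, and job name are placeholder assumptions, and the job name must match the running job being replaced.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=DataflowRunner",
    "--project=my-project",
    "--region=us-central1",
    "--job_name=clickstream-pipeline",      # same name as the running job
    "--update",                             # replace the running job in place
    "--temp_location=gs://my-bucket/temp",
])

with beam.Pipeline(options=options) as p:
    # Build the new version of the pipeline (new windowing / triggering) here.
    p | beam.Create(["placeholder"]) | beam.Map(print)
```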

NEW QUESTION 22

Which of these operations can you perform from the BigQuery Web UI?

  • A. Upload a file in SQL format.
  • B. Load data with nested and repeated fields.
  • C. Upload a 20 MB file.
  • D. Upload multiple files using a wildcard.

Answer: B

Explanation:
You can load data with nested and repeated fields using the Web UI. You cannot use the Web UI to:
- Upload a file greater than 10 MB in size
- Upload multiple files at the same time
- Upload a file in SQL format
All three of the above operations can be performed using the "bq" command.
Reference: https://cloud.google.com/bigquery/loading-data
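For completeness, loading data with nested and repeated fields can also be expressed with the BigQuery Python client, roughly as below; the schema, GCS path, and table name are placeholder assumptions.

```python
from google.cloud import bigquery

client = bigquery.Client()

schema = [
    bigquery.SchemaField("title", "STRING"),
    bigquery.SchemaField("actors", "STRING", mode="REPEATED"),  # repeated field
    bigquery.SchemaField("release", "RECORD", fields=[          # nested record
        bigquery.SchemaField("year", "INTEGER"),
        bigquery.SchemaField("country", "STRING"),
    ]),
]
job_config = bigquery.LoadJobConfig(
    schema=schema,
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON)

load_job = client.load_table_from_uri(
    "gs://my-bucket/movies.json",
    "my_project.my_dataset.movies",
    job_config=job_config)
load_job.result()  # wait for the load to finish
```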

NEW QUESTION 23

Your company is loading comma-separated values (CSV) files into Google BigQuery. The data is fully imported successfully; however, the imported data is not matching byte-to-byte to the source file. What is the most likely cause of this problem?

  • A. The CSV data loaded in BigQuery is not flagged as CSV.
  • B. The CSV data has invalid rows that were skipped on import.
  • C. The CSV data loaded in BigQuery is not using BigQuery’s default encoding.
  • D. The CSV data has not gone through an ETL phase before loading into BigQuery.

Answer: B

NEW QUESTION 24

If a dataset contains rows with individual people and columns for year of birth, country, and income, how many of the columns are continuous and how many are categorical?

  • A. 1 continuous and 2 categorical
  • B. 3 categorical
  • C. 3 continuous
  • D. 2 continuous and 1 categorical

Answer: D

Explanation:
The columns can be grouped into two types—categorical and continuous columns:
A column is called categorical if its value can only be one of the categories in a finite set. For example, the native country of a person (U.S., India, Japan, etc.) or the education level (high school, college, etc.) are categorical columns.
A column is called continuous if its value can be any numerical value in a continuous range. For example, the capital gain of a person (e.g. $14,084) is a continuous column.
Year of birth and income are continuous columns. Country is a categorical column.
You could use bucketization to turn year of birth and/or income into categorical features, but the raw columns are continuous.
Reference: https://www.tensorflow.org/tutorials/wide#reading_the_census_data
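A small pandas sketch (illustrative only) of the same split, including the bucketization idea mentioned above; the values and bins are made up.

```python
import pandas as pd

df = pd.DataFrame({
    "year_of_birth": [1984, 1990, 1971],      # continuous
    "country": ["U.S.", "India", "Japan"],    # categorical
    "income": [52000.0, 61000.0, 47000.0],    # continuous
})

# Optional bucketization: turn a continuous column into a categorical feature.
df["birth_decade"] = pd.cut(
    df["year_of_birth"],
    bins=[1950, 1960, 1970, 1980, 1990, 2000],
    labels=["1950s", "1960s", "1970s", "1980s", "1990s"])
print(df.dtypes)
```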

NEW QUESTION 25
......

Recommend!! Get the full Professional-Data-Engineer dumps in VCE and PDF from Certshared. Welcome to download: https://www.certshared.com/exam/Professional-Data-Engineer/ (New 239 Q&As Version)