Professional-Data-Engineer | What Tested Professional-Data-Engineer Exam Engine Is

Master the Professional-Data-Engineer Google Professional Data Engineer Exam content and be ready for exam-day success quickly with this Ucertify Professional-Data-Engineer training. We guarantee it! We make it a reality and give you real Professional-Data-Engineer questions in our Google Professional-Data-Engineer braindumps. The latest 100% valid Google Professional-Data-Engineer exam questions and dumps are on the page below. You can use our Google Professional-Data-Engineer braindumps to pass your exam.

Google Professional-Data-Engineer Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1

You operate a database that stores stock trades and an application that retrieves average stock price for a given company over an adjustable window of time. The data is stored in Cloud Bigtable where the datetime of the stock trade is the beginning of the row key. Your application has thousands of concurrent users, and you notice that performance is starting to degrade as more stocks are added. What should you do to improve the performance of your application?

  • A. Change the row key syntax in your Cloud Bigtable table to begin with the stock symbol.
  • B. Change the row key syntax in your Cloud Bigtable table to begin with a random number per second.
  • C. Change the data pipeline to use BigQuery for storing stock trades, and update your application.
  • D. Use Cloud Dataflow to write a summary of each day’s stock trades to an Avro file on Cloud Storage. Update your application to read from Cloud Storage and Cloud Bigtable to compute the responses.

Answer: A
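
For illustration, here is a hedged sketch (using the google-cloud-bigtable Python client; the project, instance, table, and column-family names are assumed) of a row key that leads with the stock symbol, so reads for one company scan a contiguous range and writes are spread across tablets instead of hotspotting on the timestamp:

```python
# Sketch only: build Bigtable row keys that lead with the stock symbol.
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=False)        # assumed project
table = client.instance("trades-instance").table("stock_trades")   # assumed names

def make_row_key(symbol: str, trade_ts: str) -> bytes:
    # Symbol first, timestamp second, e.g. b"GOOG#2024-05-01T14:03:22Z".
    return f"{symbol}#{trade_ts}".encode("utf-8")

row = table.direct_row(make_row_key("GOOG", "2024-05-01T14:03:22Z"))
row.set_cell("trade", "price", b"171.25")   # "trade" column family assumed to exist
row.commit()
```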

NEW QUESTION 2

Your company has hired a new data scientist who wants to perform complicated analyses across very large datasets stored in Google Cloud Storage and in a Cassandra cluster on Google Compute Engine. The scientist primarily wants to create labelled data sets for machine learning projects, along with some visualization tasks. She reports that her laptop is not powerful enough to perform her tasks and it is slowing her down. You want to help her perform her tasks. What should you do?

  • A. Run a local version of Jupyter on the laptop.
  • B. Grant the user access to Google Cloud Shell.
  • C. Host a visualization tool on a VM on Google Compute Engine.
  • D. Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine.

Answer: D

NEW QUESTION 3

What Dataflow concept determines when a Window's contents should be output based on certain criteria being met?

  • A. Sessions
  • B. OutputCriteria
  • C. Windows
  • D. Triggers

Answer: D

Explanation:
Triggers control when the elements for a specific key and window are output. As elements arrive, they are put into one or more windows by a Window transform and its associated WindowFn, and then passed to the associated Trigger to determine if the window's contents should be output.
Reference:
https://cloud.google.com/dataflow/java-sdk/JavaDoc/com/google/cloud/dataflow/sdk/transforms/windowing/Tri
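
As a hedged illustration of the concept (Apache Beam Python SDK; the window size, trigger, and lateness values below are arbitrary), a trigger attached to a WindowInto transform controls when each window's contents are emitted:

```python
# Sketch: fire at the watermark, then again 30s after any late data arrives.
import apache_beam as beam
from apache_beam.transforms.trigger import (AccumulationMode, AfterProcessingTime,
                                             AfterWatermark)

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create([("key", 1), ("key", 2)])
        | "Window" >> beam.WindowInto(
            beam.window.FixedWindows(60),                           # 60-second windows
            trigger=AfterWatermark(late=AfterProcessingTime(30)),   # when to emit
            accumulation_mode=AccumulationMode.ACCUMULATING,
            allowed_lateness=300,
        )
        | "Sum" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```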

NEW QUESTION 4

Which of these numbers are adjusted by a neural network as it learns from a training dataset (select 2 answers)?

  • A. Weights
  • B. Biases
  • C. Continuous features
  • D. Input values

Answer: AB

Explanation:
A neural network is a simple mechanism that’s implemented with basic math. The only difference between the traditional programming model and a neural network is that you let the computer determine the parameters (weights and bias) by learning from training datasets.
Reference:
https://cloud.google.com/blog/big-data/2016/07/understanding-neural-networks-with-tensorflow-playground
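
A minimal, self-contained sketch of the idea: only the weight w and bias b are adjusted by gradient descent, while the inputs and labels stay fixed (the data here is an illustrative toy set):

```python
# Toy example: the only numbers the "network" learns are w (weight) and b (bias).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])      # fixed inputs
y = np.array([3.0, 5.0, 7.0, 9.0])      # fixed labels (y = 2x + 1)
w, b = 0.0, 0.0                          # learnable parameters
lr = 0.05

for _ in range(500):
    pred = w * x + b
    grad_w = np.mean(2 * (pred - y) * x)  # d(MSE)/dw
    grad_b = np.mean(2 * (pred - y))      # d(MSE)/db
    w -= lr * grad_w                      # weights adjusted...
    b -= lr * grad_b                      # ...and biases adjusted

print(round(w, 2), round(b, 2))           # approaches 2.0 and 1.0
```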

NEW QUESTION 5

How can you get a neural network to learn about relationships between categories in a categorical feature?

  • A. Create a multi-hot column
  • B. Create a one-hot column
  • C. Create a hash bucket
  • D. Create an embedding column

Answer: D

Explanation:
There are two problems with one-hot encoding. First, it has high dimensionality, meaning that instead of having just one value, like a continuous feature, it has many values, or dimensions. This makes computation more time-consuming, especially if a feature has a very large number of categories. The second problem is that it doesn’t encode any relationships between the categories. They are completely independent from each other, so the network has no way of knowing which ones are similar to each other.
Both of these problems can be solved by representing a categorical feature with an embedding column. The idea is that each category has a smaller vector with, let’s say, 5 values in it. But unlike a one-hot vector, the values are not usually 0. The values are weights, similar to the weights that are used for basic features in a neural network. The difference is that each category has a set of weights (5 of them in this case).
You can think of each value in the embedding vector as a feature of the category. So, if two categories are very similar to each other, then their embedding vectors should be very similar too.
Reference:
https://cloudacademy.com/google/introduction-to-google-cloud-machine-learning-engine-course/a-wide-and-dee
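
A hedged sketch using the tf.feature_column API (the feature name and vocabulary are made up): the embedding column replaces the sparse one-hot representation with a small learned vector per category.

```python
# Sketch: an embedding column learns a 5-value dense vector for each category.
import tensorflow as tf

colors = tf.feature_column.categorical_column_with_vocabulary_list(
    "color", ["red", "green", "blue", "orange", "purple"])

one_hot = tf.feature_column.indicator_column(colors)                 # sparse, no learned similarity
embedded = tf.feature_column.embedding_column(colors, dimension=5)   # 5 trainable weights per category
```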

NEW QUESTION 6

Which of these statements about exporting data from BigQuery is false?

  • A. To export more than 1 GB of data, you need to put a wildcard in the destination filename.
  • B. The only supported export destination is Google Cloud Storage.
  • C. Data can only be exported in JSON or Avro format.
  • D. The only compression option available is GZIP.

Answer: C

Explanation:
Data can be exported in CSV, JSON, or Avro format. If you are exporting nested or repeated data, then CSV format is not supported.
Reference: https://cloud.google.com/bigquery/docs/exporting-data
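
As a hedged illustration with the google-cloud-bigquery Python client (table and bucket names assumed): exports always target Cloud Storage, the format can be CSV, JSON, or Avro, and a wildcard in the destination URI lets BigQuery shard exports larger than 1 GB.

```python
# Sketch: export a table to Avro files in Cloud Storage.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.ExtractJobConfig(
    destination_format=bigquery.DestinationFormat.AVRO,  # or CSV / NEWLINE_DELIMITED_JSON
)
extract_job = client.extract_table(
    "my-project.my_dataset.my_table",            # assumed table id
    "gs://my-bucket/export/my_table-*.avro",     # wildcard splits output into shards
    job_config=job_config,
)
extract_job.result()  # wait for completion
```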

NEW QUESTION 7

What are all of the BigQuery operations that Google charges for?

  • A. Storage, queries, and streaming inserts
  • B. Storage, queries, and loading data from a file
  • C. Storage, queries, and exporting data
  • D. Queries and streaming inserts

Answer: A

Explanation:
Google charges for storage, queries, and streaming inserts. Loading data from a file and exporting data are free operations.
Reference: https://cloud.google.com/bigquery/pricing
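
Because queries are billed by bytes processed, a dry run is a common way to estimate query cost before running it; here is a hedged sketch with the google-cloud-bigquery client (the public table is only an example):

```python
# Sketch: a dry run reports how many bytes a query would scan without billing for it.
from google.cloud import bigquery

client = bigquery.Client()
config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(
    "SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013`",
    job_config=config,
)
print(f"This query would process {job.total_bytes_processed} bytes")
```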

NEW QUESTION 8

Which of the following statements about the Wide & Deep Learning model are true? (Select 2 answers.)

  • A. The wide model is used for memorization, while the deep model is used for generalization.
  • B. A good use for the wide and deep model is a recommender system.
  • C. The wide model is used for generalization, while the deep model is used for memorization.
  • D. A good use for the wide and deep model is a small-scale linear regression problem.

Answer: AB

Explanation:
Can we teach computers to learn like humans do, by combining the power of memorization and generalization? It's not an easy question to answer, but by jointly training a wide linear model (for memorization) alongside a deep neural network (for generalization), one can combine the strengths of both to bring us one step closer. At Google, we call it Wide & Deep Learning. It's useful for generic large-scale regression and classification problems with sparse inputs (categorical features with a large number of possible feature values), such as recommender systems, search, and ranking problems.
Reference: https://research.googleblog.com/2016/06/wide-deep-learning-better-together-with.html
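
A hedged Keras sketch of the wide-and-deep idea (input widths and layer sizes are illustrative only): a linear path for memorization and a deep path for generalization are summed and trained jointly.

```python
# Sketch: a wide (linear) path and a deep path feeding one joint output.
import tensorflow as tf

wide_in = tf.keras.Input(shape=(100,), name="wide")   # e.g. crossed sparse features
deep_in = tf.keras.Input(shape=(20,), name="deep")    # e.g. dense/embedded features

wide = tf.keras.layers.Dense(1, use_bias=False)(wide_in)       # linear model (memorization)
deep = tf.keras.layers.Dense(64, activation="relu")(deep_in)   # deep net (generalization)
deep = tf.keras.layers.Dense(32, activation="relu")(deep)
deep = tf.keras.layers.Dense(1)(deep)

out = tf.keras.layers.Activation("sigmoid")(tf.keras.layers.Add()([wide, deep]))
model = tf.keras.Model([wide_in, deep_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```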

NEW QUESTION 9

Which of the following job types are supported by Cloud Dataproc (select 3 answers)?

  • A. Hive
  • B. Pig
  • C. YARN
  • D. Spark

Answer: ABD

Explanation:
Cloud Dataproc provides out-of-the box and end-to-end support for many of the most popular job types, including Spark, Spark SQL, PySpark, MapReduce, Hive, and Pig jobs.
Reference: https://cloud.google.com/dataproc/docs/resources/faq#what_type_of_jobs_can_i_run

NEW QUESTION 10

You have a requirement to insert minute-resolution data from 50,000 sensors into a BigQuery table. You expect significant growth in data volume and need the data to be available within 1 minute of ingestion for real-time analysis of aggregated trends. What should you do?

  • A. Use bq load to load a batch of sensor data every 60 seconds.
  • B. Use a Cloud Dataflow pipeline to stream data into the BigQuery table.
  • C. Use the INSERT statement to insert a batch of data every 60 seconds.
  • D. Use the MERGE statement to apply updates in batch every 60 seconds.

Answer: B
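
For illustration, a hedged Apache Beam (Python) streaming pipeline in the spirit of this approach; the Pub/Sub topic, table, and schema below are assumptions:

```python
# Sketch: stream sensor messages from Pub/Sub into BigQuery.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # plus --runner=DataflowRunner, project, region, etc.

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadSensors" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/sensor-data")
        | "Parse" >> beam.Map(json.loads)
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my-project:sensors.readings",
            schema="sensor_id:STRING,ts:TIMESTAMP,value:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```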

NEW QUESTION 11

You’re training a model to predict housing prices based on an available dataset with real estate properties. Your plan is to train a fully connected neural net, and you’ve discovered that the dataset contains latitude and longitude of the property. Real estate professionals have told you that the location of the property is highly influential on price, so you’d like to engineer a feature that incorporates this physical dependency.
What should you do?

  • A. Provide latitude and longitude as input vectors to your neural net.
  • B. Create a numeric column from a feature cross of latitude and longitude.
  • C. Create a feature cross of latitude and longitude, bucketize at the minute level and use L1 regularization during optimization.
  • D. Create a feature cross of latitude and longitude, bucketize it at the minute level and use L2 regularization during optimization.

Answer: B

Explanation:
Reference: https://cloud.google.com/bigquery/docs/gis-data
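
As a hedged illustration of what a latitude/longitude feature cross looks like with the tf.feature_column API (the bucket boundaries, hash size, and embedding dimension are arbitrary):

```python
# Sketch: bucketize latitude and longitude, then cross the buckets so the model
# can learn an effect per location cell rather than per raw coordinate.
import tensorflow as tf

lat = tf.feature_column.numeric_column("latitude")
lon = tf.feature_column.numeric_column("longitude")

lat_buckets = tf.feature_column.bucketized_column(lat, boundaries=[float(d) for d in range(-90, 91)])
lon_buckets = tf.feature_column.bucketized_column(lon, boundaries=[float(d) for d in range(-180, 181)])

lat_x_lon = tf.feature_column.crossed_column([lat_buckets, lon_buckets], hash_bucket_size=10000)
location = tf.feature_column.embedding_column(lat_x_lon, dimension=8)  # dense input to the net
```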

NEW QUESTION 12

What is the general recommendation when designing your row keys for a Cloud Bigtable schema?

  • A. Include multiple time series values within the row key
  • B. Keep the row key as an 8-bit integer
  • C. Keep your row key reasonably short
  • D. Keep your row key as long as the field permits

Answer: C

Explanation:
A general guide is to keep your row keys reasonably short. Long row keys take up additional memory and storage and increase the time it takes to get responses from the Cloud Bigtable server.
Reference: https://cloud.google.com/bigtable/docs/schema-design#row-keys

NEW QUESTION 13

You work for a shipping company that has distribution centers where packages move on delivery lines to route them properly. The company wants to add cameras to the delivery lines to detect and track any visual damage to the packages in transit. You need to create a way to automate the detection of damaged packages and flag them for human review in real time while the packages are in transit. Which solution should you choose?

  • A. Use BigQuery machine learning to be able to train the model at scale, so you can analyze the packages in batches.
  • B. Train an AutoML model on your corpus of images, and build an API around that model to integrate with the package tracking applications.
  • C. Use the Cloud Vision API to detect for damage, and raise an alert through Cloud Functions. Integrate the package tracking applications with this function.
  • D. Use TensorFlow to create a model that is trained on your corpus of images. Create a Python notebook in Cloud Datalab that uses this model so you can analyze for damaged packages.

Answer: B

NEW QUESTION 14

You have developed three data processing jobs. One executes a Cloud Dataflow pipeline that transforms data uploaded to Cloud Storage and writes results to BigQuery. The second ingests data from on-premises servers and uploads it to Cloud Storage. The third is a Cloud Dataflow pipeline that gets information from third-party data providers and uploads the information to Cloud Storage. You need to be able to schedule and monitor the execution of these three workflows and manually execute them when needed. What should you do?

  • A. Create a Directed Acyclic Graph in Cloud Composer to schedule and monitor the jobs.
  • B. Use Stackdriver Monitoring and set up an alert with a Webhook notification to trigger the jobs.
  • C. Develop an App Engine application to schedule and request the status of the jobs using GCP API calls.
  • D. Set up cron jobs in a Compute Engine instance to schedule and monitor the pipelines using GCP API calls.

Answer: A
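
A hedged, minimal Airflow 2.x-style sketch of the Cloud Composer approach; the DAG id, schedule, operators, and commands are placeholders rather than a real configuration:

```python
# Sketch: one DAG schedules and monitors the three jobs, and can be triggered manually.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_ingestion_workflows",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # can also be triggered manually from the Airflow UI
    catchup=False,
) as dag:
    upload_onprem = BashOperator(task_id="upload_onprem_to_gcs",
                                 bash_command="echo 'gsutil rsync ...'")
    dataflow_transform = BashOperator(task_id="run_dataflow_transform",
                                      bash_command="echo 'gcloud dataflow jobs run ...'")
    dataflow_thirdparty = BashOperator(task_id="run_thirdparty_ingest",
                                       bash_command="echo 'gcloud dataflow jobs run ...'")

    # The third-party ingest job has no upstream dependency and runs in parallel.
    upload_onprem >> dataflow_transform
```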

NEW QUESTION 15

You are designing storage for 20 TB of text files as part of deploying a data pipeline on Google Cloud. Your input data is in CSV format. You want to minimize the cost of querying aggregate values for multiple users who will query the data in Cloud Storage with multiple engines. Which storage service and schema design should you use?

  • A. Use Cloud Bigtable for storage. Install the HBase shell on a Compute Engine instance to query the Cloud Bigtable data.
  • B. Use Cloud Bigtable for storage. Link as permanent tables in BigQuery for query.
  • C. Use Cloud Storage for storage. Link as permanent tables in BigQuery for query.
  • D. Use Cloud Storage for storage. Link as temporary tables in BigQuery for query.

Answer: C
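
A hedged sketch with the google-cloud-bigquery client showing a permanent (external) table defined over CSV files in Cloud Storage, so the files stay in the bucket but can be queried from BigQuery; the project, dataset, and bucket names are assumed:

```python
# Sketch: define a permanent external table over CSV files in Cloud Storage.
from google.cloud import bigquery

client = bigquery.Client()

external_config = bigquery.ExternalConfig("CSV")
external_config.source_uris = ["gs://my-bucket/input/*.csv"]
external_config.autodetect = True

table = bigquery.Table("my-project.analytics.raw_events")
table.external_data_configuration = external_config
client.create_table(table)  # reusable by many users and query engines via BigQuery
```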

NEW QUESTION 16

You have a data pipeline with a Cloud Dataflow job that aggregates and writes time series metrics to Cloud Bigtable. This data feeds a dashboard used by thousands of users across the organization. You need to support additional concurrent users and reduce the amount of time required to write the data. Which two actions should you take? (Choose two.)

  • A. Configure your Cloud Dataflow pipeline to use local execution
  • B. Increase the maximum number of Cloud Dataflow workers by setting maxNumWorkers in PipelineOptions
  • C. Increase the number of nodes in the Cloud Bigtable cluster
  • D. Modify your Cloud Dataflow pipeline to use the Flatten transform before writing to Cloud Bigtable
  • E. Modify your Cloud Dataflow pipeline to use the CoGroupByKey transform before writing to Cloud Bigtable

Answer: BC
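
For illustration, a hedged sketch of both actions: raising the Dataflow worker cap through Python pipeline options, and adding Cloud Bigtable nodes with the admin client (all resource names and node counts are assumptions):

```python
# Sketch: more Dataflow workers to write faster, more Bigtable nodes to absorb the load.
from apache_beam.options.pipeline_options import PipelineOptions
from google.cloud import bigtable

# Equivalent to passing --max_num_workers on the command line; pass these options
# when constructing the Beam pipeline.
options = PipelineOptions(max_num_workers=50, runner="DataflowRunner")

# Resize the Bigtable cluster to increase read/write throughput.
admin = bigtable.Client(project="my-project", admin=True)
cluster = admin.instance("metrics-instance").cluster("metrics-cluster-c1")
cluster.reload()
cluster.serve_nodes = 10
cluster.update()
```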

NEW QUESTION 17

You want to analyze hundreds of thousands of social media posts daily at the lowest cost and with the fewest steps.
You have the following requirements:
  • You will batch-load the posts once per day and run them through the Cloud Natural Language API.
  • You will extract topics and sentiment from the posts.
  • You must store the raw posts for archiving and reprocessing.
  • You will create dashboards to be shared with people both inside and outside your organization.
You need to store both the data extracted from the API to perform analysis as well as the raw social media posts for historical archiving. What should you do?

  • A. Store the social media posts and the data extracted from the API in BigQuery.
  • B. Store the social media posts and the data extracted from the API in Cloud SQL.
  • C. Store the raw social media posts in Cloud Storage, and write the data extracted from the API into BigQuery.
  • D. Feed the social media posts into the API directly from the source, and write the extracted data from the API into BigQuery.

Answer: C
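
A hedged sketch of the batch flow implied here: the raw post file stays in Cloud Storage, the Cloud Natural Language API extracts sentiment, and only the extracted values are written to BigQuery (the bucket, object, and table names are assumed):

```python
# Sketch: raw posts remain archived in Cloud Storage; extracted results go to BigQuery.
from google.cloud import bigquery, language_v1, storage

nl = language_v1.LanguageServiceClient()
blob = storage.Client().bucket("raw-posts").blob("2024-05-01/posts.txt")
text = blob.download_as_text()

doc = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
sentiment = nl.analyze_sentiment(request={"document": doc}).document_sentiment

bigquery.Client().insert_rows_json(
    "my-project.social.post_sentiment",
    [{"object": blob.name, "score": sentiment.score, "magnitude": sentiment.magnitude}],
)
```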

NEW QUESTION 18

Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion service in the cloud. Transmitted data includes a payload of several fields and the timestamp of the transmission. If there are any concerns about a transmission, the system re-transmits the data. How should you deduplicate the data most efficiently?

  • A. Assign global unique identifiers (GUID) to each data entry.
  • B. Compute the hash value of each data entry, and compare it with all historical data.
  • C. Store each data entry as the primary key in a separate database and apply an index.
  • D. Maintain a database table to store the hash value and other metadata for each data entry.

Answer: D
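
A minimal, illustrative sketch of the hash-and-lookup idea (an in-memory dict stands in for the metadata table): hash the payload, and drop any entry whose hash has already been recorded.

```python
# Sketch: deduplicate re-transmissions by payload hash; the timestamp is kept as metadata.
import hashlib
import json

seen = {}  # stands in for a database table keyed by payload hash

def ingest(entry: dict) -> bool:
    payload = json.dumps(entry["payload"], sort_keys=True)
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    if digest in seen:
        return False                                  # duplicate re-transmission, drop it
    seen[digest] = {"first_seen": entry["timestamp"]}
    return True

print(ingest({"payload": {"sku": "A1", "qty": 3}, "timestamp": "2024-01-01T00:00:00Z"}))  # True
print(ingest({"payload": {"sku": "A1", "qty": 3}, "timestamp": "2024-01-01T06:00:00Z"}))  # False
```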

NEW QUESTION 19

You need to choose a database for a new project that has the following requirements:
  • Fully managed
  • Able to automatically scale up
  • Transactionally consistent
  • Able to scale up to 6 TB
  • Able to be queried using SQL
Which database do you choose?

  • A. Cloud SQL
  • B. Cloud Bigtable
  • C. Cloud Spanner
  • D. Cloud Datastore

Answer: C

NEW QUESTION 20

You are working on a niche product in the image recognition domain. Your team has developed a model that is dominated by custom C++ TensorFlow ops your team has implemented. These ops are used inside your main training loop and are performing bulky matrix multiplications. It currently takes up to several days to train a model. You want to decrease this time significantly and keep the cost low by using an accelerator on Google Cloud. What should you do?

  • A. Use Cloud TPUs without any additional adjustment to your code.
  • B. Use Cloud TPUs after implementing GPU kernel support for your custom ops.
  • C. Use Cloud GPUs after implementing GPU kernel support for your custom ops.
  • D. Stay on CPUs, and increase the size of the cluster you’re training your model on.

Answer: C

NEW QUESTION 21

You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an instant notification to be sent to your monitoring tool when new data is appended to a certain table using an insert job, but you do not want to receive notifications for other tables. What should you do?

  • A. Make a call to the Stackdriver API to list all logs, and apply an advanced filter.
  • B. In the Stackdriver logging admin interface, enable a log sink export to BigQuery.
  • C. In the Stackdriver logging admin interface, enable a log sink export to Google Cloud Pub/Sub, and subscribe to the topic from your monitoring tool.
  • D. Using the Stackdriver API, create a project sink with advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.

Answer: D
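
A hedged sketch with the google-cloud-logging client: a project sink with an advanced filter exports only the matching BigQuery job entries to Pub/Sub, which the monitoring tool subscribes to. The exact filter fields, topic, and sink names below are assumptions for illustration only:

```python
# Sketch: create a project log sink that exports matching entries to a Pub/Sub topic.
from google.cloud import logging

client = logging.Client(project="my-project")

table_filter = (
    'resource.type="bigquery_resource" '
    'AND protoPayload.methodName="jobservice.jobcompleted" '
    'AND protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration'
    '.load.destinationTable.tableId="my_table"'   # assumed field path and table id
)

sink = client.sink(
    "bq-insert-notifications",
    filter_=table_filter,
    destination="pubsub.googleapis.com/projects/my-project/topics/bq-inserts",
)
sink.create()  # the monitoring tool then subscribes to the Pub/Sub topic
```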

NEW QUESTION 22

The Dataflow SDKs have been recently transitioned into which Apache service?

  • A. Apache Spark
  • B. Apache Hadoop
  • C. Apache Kafka
  • D. Apache Beam

Answer: D

Explanation:
The Dataflow SDKs are being transitioned to Apache Beam, as per the latest Google directive.
Reference: https://cloud.google.com/dataflow/docs/

NEW QUESTION 23

You use a dataset in BigQuery for analysis. You want to provide third-party companies with access to the same dataset. You need to keep the costs of data sharing low and ensure that the data is current. Which solution should you choose?

  • A. Create an authorized view on the BigQuery table to control data access, and provide third-party companies with access to that view.
  • B. Use Cloud Scheduler to export the data on a regular basis to Cloud Storage, and provide third-party companies with access to the bucket.
  • C. Create a separate dataset in BigQuery that contains the relevant data to share, and provide third-party companies with access to the new dataset.
  • D. Create a Cloud Dataflow job that reads the data in frequent time intervals, and writes it to the relevant BigQuery dataset or Cloud Storage bucket for third-party companies to use.

Answer: A
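
A hedged sketch of the authorized-view pattern with the google-cloud-bigquery client (project, dataset, and view names are assumed): the view lives in a dataset shared with the third parties, and the view itself is granted access to the source dataset, so queries always see current data without copying it.

```python
# Sketch: create a view and authorize it against the source dataset.
from google.cloud import bigquery

client = bigquery.Client()

view = bigquery.Table("my-project.shared_views.partner_view")           # assumed ids
view.view_query = "SELECT order_id, total FROM `my-project.sales.orders`"
view = client.create_table(view)

source_dataset = client.get_dataset("my-project.sales")
entries = list(source_dataset.access_entries)
entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
source_dataset.access_entries = entries
client.update_dataset(source_dataset, ["access_entries"])
```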

NEW QUESTION 24

You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce the regional access policy to the data.
Which two actions should you take? (Choose two.)

  • A. Ensure all the tables are included in global dataset.
  • B. Ensure each table is included in a dataset for a region.
  • C. Adjust the settings for each table to allow a related region-based security group view access.
  • D. Adjust the settings for each view to allow a related region-based security group view access.
  • E. Adjust the settings for each dataset to allow a related region-based security group view access.

Answer: BD

NEW QUESTION 25
......

P.S. Thedumpscentre.com is now offering 100% pass-guaranteed Professional-Data-Engineer dumps! All Professional-Data-Engineer exam questions have been updated with correct answers: https://www.thedumpscentre.com/Professional-Data-Engineer-dumps/ (239 New Questions)