Getting Started with Cloud Data Fusion


Lab · 1 hour 30 minutes · 5 credits · Introductory
Note: This lab may incorporate AI tools to support your learning.


Overview

This lab teaches you how to create a Cloud Data Fusion instance and deploy a provided sample pipeline. The pipeline reads a JSON file containing NYT bestseller data from Cloud Storage, runs transformations to parse and clean the data, and finally loads a subset of the records into BigQuery.

Objectives

In this lab you will learn to:

  • Create a Data Fusion instance
  • Deploy a sample pipeline that runs transformations on a JSON file and filters matching records into BigQuery

Setup

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
    There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts.
    If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

Log in to Google Cloud Console

  1. Using the browser tab or window you are using for this lab session, copy the Username from the Connection Details panel and click the Open Google Console button.
Note: If you are asked to choose an account, click Use another account.
  2. Paste in the Username, and then the Password as prompted.
  3. Click Next.
  4. Accept the terms and conditions.

Since this is a temporary account, which will last only as long as this lab:

  • Do not add recovery options
  • Do not sign up for free trials
  5. Once the console opens, view the list of services by clicking the Navigation menu (Navigation menu icon) at the top-left.

Navigation menu

Activate Cloud Shell

Cloud Shell is a virtual machine that contains development tools. It offers a persistent 5-GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources. gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab completion.

  1. Click the Activate Cloud Shell button (Activate Cloud Shell icon) at the top right of the console.

  2. Click Continue.
    It takes a few moments to provision and connect to the environment. When you are connected, you are also authenticated, and the project is set to your PROJECT_ID.

Sample commands

  • List the active account name:
gcloud auth list

(Output)

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

(Example output)

Credentialed accounts:
 - google1623327_student@qwiklabs.net
  • List the project ID:
gcloud config list project

(Output)

[core]
project = <project_ID>

(Example output)

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: Full documentation of gcloud is available in the gcloud CLI overview guide.

Check project permissions

Before you begin working on Google Cloud, you must ensure that your project has the correct permissions within Identity and Access Management (IAM).

  1. In the Google Cloud console, on the Navigation menu (Navigation menu icon), click IAM & Admin > IAM.

  2. Confirm that the default compute Service Account {project-number}-compute@developer.gserviceaccount.com is present and has the editor role assigned. The account prefix is the project number, which you can find on Navigation menu > Cloud overview.

Default compute service account

If the account is not present in IAM or does not have the editor role, follow the steps below to assign the required role.

  1. In the Google Cloud console, on the Navigation menu, click Cloud overview.

  2. From the Project info card, copy the Project number.

  3. On the Navigation menu, click IAM & Admin > IAM.

  4. At the top of the IAM page, click Add.

  5. For New principals, type:

{project-number}-compute@developer.gserviceaccount.com

Replace {project-number} with your project number.

  6. For Select a role, select Basic (or Project) > Editor.

  7. Click Save.
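If you prefer the command line, the console steps above can be approximated from Cloud Shell. This is a sketch under the assumption that you are in an authenticated Cloud Shell session for the lab project; the role and service-account naming follow the steps above.

```shell
# Look up the project ID and number for the current Cloud Shell session
PROJECT_ID=$(gcloud config get-value project)
PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" --format='value(projectNumber)')

# Check whether the default compute service account already holds the Editor role
gcloud projects get-iam-policy "$PROJECT_ID" \
  --flatten='bindings[].members' \
  --filter="bindings.role:roles/editor AND bindings.members:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
  --format='value(bindings.members)'

# Grant the Editor role if the check above printed nothing
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
  --role='roles/editor'
```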

Task 1. Enable Cloud Data Fusion API

  1. In the Cloud console, from the Navigation menu, select APIs & Services > Library.

  2. In the search box, type fusion to find the Cloud Data Fusion API and click on the hyperlink.

  3. Click Enable if necessary.
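As an alternative to the console, the API can also be enabled with a single Cloud Shell command (a sketch; assumes the lab project is the active gcloud project):

```shell
# Enable the Cloud Data Fusion API for the active project
gcloud services enable datafusion.googleapis.com

# Confirm it shows up in the enabled-services list
gcloud services list --enabled --filter='name:datafusion.googleapis.com'
```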

Task 2. Create a Cloud Data Fusion instance

  1. In the Cloud console, from the Navigation menu select Data Fusion.

  2. Click the Create an Instance link at the top of the section to create a Cloud Data Fusion instance.

  3. In the Create Data Fusion instance page that loads:

a. Enter a name for your instance (like cdf-lab-instance). For region select us-central1.

b. Under Edition, select Basic.

c. Click Grant Permission if required.

d. Click the dropdown icon next to Advanced Options, and under Logging and monitoring, select the Enable Stackdriver logging service checkbox.

e. Leave all other fields as-is, then click Create.
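For reference, a roughly equivalent instance can be created from Cloud Shell with the gcloud beta surface. This is a sketch only: the Data Fusion commands are in beta and flag names can vary between gcloud versions, so the console flow above remains the authoritative path for this lab.

```shell
# Sketch: create a Basic-edition Data Fusion instance with Stackdriver logging
# enabled (beta surface; verify the flags with
# `gcloud beta data-fusion instances create --help` before running)
gcloud beta data-fusion instances create cdf-lab-instance \
  --location=us-central1 \
  --edition=basic \
  --enable_stackdriver_logging
```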

Click Check my progress to verify the objective. Create a cloud data fusion instance

Note: Creation of the instance takes around 10 minutes. While you're waiting, watch this presentation on Cloud Data Fusion from Next '19, starting at the 15:31 time stamp. Come back and check on your instance every now and then; you can finish watching the video after the lab is completed.

Note: Remember, this lab has a time limit, and you will lose your work when the time runs out.

Next, you will grant permissions to the service account associated with the instance, using the following steps.

  1. Click on the instance name. On the Instance details page copy the Dataproc Service Account to your clipboard.

Service Account highlighted on the Instance details page

  2. In the Cloud console, navigate to IAM & Admin > IAM.

  3. On the IAM Permissions page, click +Grant Access.

  4. In the New principals field, paste the service account.

  5. Click into the Select a role field and start typing Cloud Data Fusion API Service Agent, then select it.

  6. Click Save.
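The same grant can be made from Cloud Shell. In this sketch, SA_EMAIL is a placeholder you replace with the Dataproc Service Account value copied from the Instance details page; roles/datafusion.serviceAgent is the identifier behind the "Cloud Data Fusion API Service Agent" role shown in the console.

```shell
# Placeholder: replace with the Dataproc Service Account you copied
SA_EMAIL="service-account-you-copied@developer.gserviceaccount.com"

PROJECT_ID=$(gcloud config get-value project)

# Grant the Cloud Data Fusion API Service Agent role to that account
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role='roles/datafusion.serviceAgent'
```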

Click Check my progress to verify the objective. Add Cloud Data Fusion API Service Agent role to service account

Task 3. Navigate the Cloud Data Fusion UI

When using Cloud Data Fusion, you use both the Cloud console and the separate Cloud Data Fusion UI.

  • In the Cloud console, you can create and delete Cloud Data Fusion instances, and view Cloud Data Fusion instance details.

  • In the Cloud Data Fusion web UI, you can use the various pages, such as Pipeline Studio or Wrangler, to use Cloud Data Fusion functionality.

To navigate the Cloud Data Fusion UI, follow these steps:

  1. In the Cloud console return to Navigation menu > Data Fusion.

  2. Click the View Instance link next to your Data Fusion instance. Select your lab credentials to sign in, if required.

Highlighted View Instance link

  3. If prompted to take a tour of the service, click No, Thanks. You should now be in the Cloud Data Fusion UI.

  4. Notice that the Cloud Data Fusion web UI comes with its own navigation panel (on the left) to navigate to the page you need.

Task 4. Deploy a sample pipeline

Sample pipelines are available through the Cloud Data Fusion Hub, which allows you to share reusable Cloud Data Fusion pipelines, plugins, and solutions.

  1. In the Cloud Data Fusion web UI, click HUB on the top right.

Highlighted HUB link

  2. In the left panel, click Pipelines.

  3. Click the Cloud Data Fusion Quickstart pipeline, and then click Create on the popup that appears.

Highlighted Cloud Data Fusion Quickstart tile on the Pipelines page

  4. In the Cloud Data Fusion Quickstart configuration panel, click Finish.

  5. Click Customize Pipeline. A visual representation of your pipeline appears in the Pipeline Studio, which is a graphical interface for developing data integration pipelines. Available pipeline plugins are listed on the left, and your pipeline is displayed on the main canvas area. You can explore your pipeline by holding the pointer over each pipeline node and clicking the Properties button that appears. The Properties menu for each node allows you to view the objects and operations associated with the node.

Note: A node in a pipeline is an object that is connected in a sequence to produce a Directed Acyclic Graph. E.g. Source, Sink, Transform, Action, etc.

Pipeline Studio displaying a visual representation of the pipeline

  6. In the top right menu, click Deploy. This submits the pipeline to Cloud Data Fusion. You will execute the pipeline in the next section.

Deploy icon

Task 5. View your pipeline

The deployed pipeline appears in the pipeline details view, where you can do the following:

  • View the pipeline's structure and configuration.

  • Run the pipeline manually or set up a schedule or a trigger.

  • View a summary of the pipeline's historical runs, including execution times, logs, and metrics.

Pipeline details view

Task 6. Execute your pipeline

  1. In the pipeline details view, click on Run in the top center to execute your pipeline.
Note: When executing a pipeline, Cloud Data Fusion provisions an ephemeral Dataproc cluster, executes the pipeline on the cluster using Apache Hadoop MapReduce or Apache Spark, and then deletes the cluster. When the pipeline transitions to the Running state, you can monitor the Dataproc cluster creation and deletion. This cluster only exists for the duration of the pipeline.
  2. After a few minutes, the pipeline finishes. The pipeline Status changes to Succeeded, and the number of records processed by each node is displayed.

Finished pipeline with success status and number of records processed by each node
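If you are curious about the ephemeral cluster mentioned in the note above, you can watch it appear and disappear from Cloud Shell while the pipeline runs (assuming the cluster is provisioned in the instance's region, us-central1 in this lab):

```shell
# List Dataproc clusters in the instance's region; an ephemeral cluster
# appears while the pipeline is Running and is deleted when it finishes
gcloud dataproc clusters list --region=us-central1
```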

Click Check my progress to verify the objective. Deploy and execute a sample pipeline

Task 7. View the results

The pipeline writes output into a BigQuery table. You can verify that using the following steps.

  1. Open the BigQuery UI in the Cloud console: right-click the console tab, select Duplicate, and then use the Navigation menu to select BigQuery.

  2. In the left pane, click your Project ID (it will start with qwiklabs).

  3. Under the GCPQuickstart dataset in your project, click the top_rated_inexpensive table, then run a simple query, such as:

SELECT * FROM `<YOUR-PROJECT-ID>.GCPQuickStart.top_rated_inexpensive` LIMIT 10

Note: Make sure to replace <YOUR-PROJECT-ID> with your project ID to view a sample of the results.
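Alternatively, the same query can be run from Cloud Shell with the bq command-line tool (a sketch, assuming the dataset and table names shown above):

```shell
# Run the verification query with the bq CLI instead of the BigQuery UI
PROJECT_ID=$(gcloud config get-value project)
bq query --use_legacy_sql=false \
  "SELECT * FROM \`${PROJECT_ID}.GCPQuickStart.top_rated_inexpensive\` LIMIT 10"
```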

Query results

Click Check my progress to verify the objective. View the result

Congratulations!

In this lab, you learned how to create a Data Fusion instance and deploy a sample pipeline that reads an input file from Cloud Storage, transforms and filters the data to output a subset of the data into BigQuery.

End your lab

When you have completed your lab, click End Lab. Qwiklabs removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Manual Last Updated October 11, 2022

Lab Last Tested October 11, 2022

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
