
Building Batch Data Pipelines on Google Cloud


Serverless Data Analysis with Dataflow: A Simple Dataflow Pipeline (Java)

Lab · 1 hour 30 minutes · 5 credits · Advanced

Note: This lab may include AI tools to support your learning.

Overview

In this lab, you will open a Dataflow project, use pipeline filtering, and execute the pipeline locally and on the cloud.

Objectives

In this lab, you will learn how to write a simple Dataflow pipeline and run it both locally and on the cloud.

  • Set up a Java Dataflow project using Maven
  • Write a simple pipeline in Java
  • Execute the pipeline on the local machine
  • Execute the pipeline on the cloud

Setup and requirements

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
    There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts.
    If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

Activate Google Cloud Shell

Google Cloud Shell is a virtual machine loaded with development tools. It offers a persistent 5 GB home directory and runs on Google Cloud.

Google Cloud Shell provides command-line access to your Google Cloud resources.

  1. In Cloud console, on the top right toolbar, click the Open Cloud Shell button.

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID.

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  • You can list the active account name with this command:
gcloud auth list

Output:

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

Example output:

Credentialed accounts:
 - google1623327_student@qwiklabs.net
  • You can list the project ID with this command:
gcloud config list project

Output:

[core]
project = <project_ID>

Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: Full documentation of gcloud is available in the gcloud CLI overview guide.

Launch Google Cloud Shell Code Editor

Use the Google Cloud Shell Code Editor to easily create and edit directories and files in the Cloud Shell instance.

  • Once you activate the Google Cloud Shell, click Open editor to open the Cloud Shell Code Editor.

You now have three interfaces available:

  • The Cloud Shell Code Editor
  • The Console: you can switch back and forth between the Console and Cloud Shell by clicking its tab
  • The Cloud Shell command line: click Open Terminal in the Console

Check project permissions

Before you begin your work on Google Cloud, you need to ensure that your project has the correct permissions within Identity and Access Management (IAM).

  1. In the Google Cloud console, on the Navigation menu, select IAM & Admin > IAM.

  2. Confirm that the default compute Service Account {project-number}-compute@developer.gserviceaccount.com is present and has the editor role assigned. The account prefix is the project number, which you can find on Navigation menu > Cloud Overview > Dashboard.

Note: If the account is not present in IAM or does not have the editor role, follow the steps below to assign the required role.
  1. In the Google Cloud console, on the Navigation menu, click Cloud Overview > Dashboard.
  2. Copy the project number (e.g. 729328892908).
  3. On the Navigation menu, select IAM & Admin > IAM.
  4. At the top of the roles table, below View by Principals, click Grant Access.
  5. For New principals, type:
{project-number}-compute@developer.gserviceaccount.com
  6. Replace {project-number} with your project number.
  7. For Role, select Project (or Basic) > Editor.
  8. Click Save.

Task 1. Preparation

Verify Bucket and Download Lab Code

Specific steps must be completed to successfully execute this lab:

  1. Verify that you have a Cloud Storage bucket (one was created for you automatically when the lab environment started).

  2. On the Google Cloud Console title bar, click Activate Cloud Shell. If prompted, click Continue. Clone the lab code GitHub repository using the following command:

git clone https://github.com/GoogleCloudPlatform/training-data-analyst
  3. In Cloud Shell, enter the following to create two environment variables, one named BUCKET and one named REGION, and verify that each exists with the echo command:

BUCKET="{{{project_0.project_id|Project ID}}}-bucket"
echo $BUCKET
REGION="{{{project_0.default_region|Region}}}"
echo $REGION

Ensure that the Dataflow API is properly enabled

  1. In Cloud Shell, run the following commands to ensure that the Dataflow API is enabled cleanly in your project:

gcloud services disable dataflow.googleapis.com --force
gcloud services enable dataflow.googleapis.com

Task 2. Create a new Dataflow project

The goal of this lab is to become familiar with the structure of a Dataflow project and learn how to execute a Dataflow pipeline. You will use the powerful build tool Maven to create a new Dataflow project.

  1. Return to the browser tab containing Cloud Shell. In Cloud Shell, navigate to the directory for this lab:
cd ~/training-data-analyst/courses/data_analysis/lab2
  2. Copy and paste the following Maven command:

mvn archetype:generate \
  -DarchetypeArtifactId=google-cloud-dataflow-java-archetypes-starter \
  -DarchetypeGroupId=com.google.cloud.dataflow \
  -DgroupId=com.example.pipelinesrus.newidea \
  -DartifactId=newidea \
  -Dversion="[1.0.0,2.0.0]" \
  -DinteractiveMode=false
  • What directory has been created?
  • What package has been created inside the src directory?
  3. Examine the Maven command that was used to create the lab code:
cat ~/training-data-analyst/courses/data_analysis/lab2/create_mvn.sh
  • What directory will get created?
  • What package will get created inside the src directory?

Task 3. Pipeline filtering

  1. In the Cloud Shell code editor, navigate to the directory /training-data-analyst/courses/data_analysis/lab2.

  2. Then select the path javahelp/src/main/java/com/google/cloud/training/dataanalyst/javahelp/ and view the file Grep.java.

Alternatively, you could view the file with the nano editor. Do not make any changes to the code.

cd ~/training-data-analyst/courses/data_analysis/lab2/javahelp/src/main/java/com/google/cloud/training/dataanalyst/javahelp
nano Grep.java

Can you answer these questions about the file Grep.java?

  • What files are being read?
  • What is the search term?
  • Where does the output go?

There are three apply statements in the pipeline:

  • What does the first apply() do?
  • What does the second apply() do?
  • Where does its input come from?
  • What does it do with this input?
  • What does it write to its output?
  • Where does the output go to?
  • What does the third apply() do?
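If it helps to form a mental model before you open the file, a Grep-style Beam/Dataflow pipeline in Java generally has the shape sketched below: read text files, filter lines in a ParDo, and write the surviving lines. This is an illustrative reconstruction, not a copy of Grep.java; the class name, paths, and search term here are placeholders, so verify the real answers against the file itself.

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;

public class GrepSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Placeholders: the real values live in Grep.java.
    String input = "/tmp/input/*.java";
    String outputPrefix = "/tmp/output";
    final String searchTerm = "TODO"; // hypothetical search term

    p.apply("GetJava", TextIO.read().from(input))          // first apply: read the input files
     .apply("Grep", ParDo.of(new DoFn<String, String>() {  // second apply: keep only matching lines
        @ProcessElement
        public void processElement(ProcessContext c) {
          String line = c.element();
          if (line.contains(searchTerm)) {
            c.output(line);
          }
        }
      }))
     .apply(TextIO.write().to(outputPrefix).withSuffix(".txt")); // third apply: write the results

    p.run();
  }
}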

Task 4. Execute the pipeline locally

  1. In Cloud Shell, paste the following Maven command:

cd ~/training-data-analyst/courses/data_analysis/lab2
export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:$PATH
cd ~/training-data-analyst/courses/data_analysis/lab2/javahelp
mvn compile -e exec:java \
  -Dexec.mainClass=com.google.cloud.training.dataanalyst.javahelp.Grep
  2. The output file will be output.txt. If the output is large enough, it is sharded into separate parts with names like output-00000-of-00001. If necessary, you can locate the correct file by checking its timestamp:

ls -al /tmp

  3. Examine the output file:

cat /tmp/output*

Does the output seem logical?

Task 5. Execute the pipeline on the cloud

  1. Copy some Java files to the cloud:
gcloud storage cp ../javahelp/src/main/java/com/google/cloud/training/dataanalyst/javahelp/*.java gs://$BUCKET/javahelp
  2. In the Cloud Shell code editor, navigate to the directory /training-data-analyst/courses/data_analysis/lab2/javahelp/src/main/java/com/google/cloud/training/dataanalyst/javahelp.

  3. Edit the Dataflow pipeline in the file Grep.java:

cd ~/training-data-analyst/courses/data_analysis/lab2/javahelp/src/main/java/com/google/cloud/training/dataanalyst/javahelp
  4. Replace the Input and Output variables with your bucket name, as follows:

String input = "gs://{{{project_0.project_id|Project ID}}}-bucket/javahelp/*.java";
String outputPrefix = "gs://{{{project_0.project_id|Project ID}}}-bucket/javahelp/output";

Note: Make sure that you change the input and outputPrefix strings that are already present in the source code (do not copy and paste the entire lines above without removing the originals, or you will end up with duplicate variable definitions).

Example lines before:

String input = "src/main/java/com/google/cloud/training/dataanalyst/javahelp/*.java";
String outputPrefix = "/tmp/output";

Example lines after editing, containing your project's bucket name:

String input = "gs://qwiklabs-gcp-your-value-bucket/javahelp/*.java";
String outputPrefix = "gs://qwiklabs-gcp-your-value-bucket/javahelp/output";

  5. Examine the script that submits the Dataflow job to the cloud:

cd ~/training-data-analyst/courses/data_analysis/lab2/javahelp
cat run_oncloud1.sh

What is the difference between this Maven command and the one to run locally?
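As a hint: the pipeline source itself typically does not change between the local and cloud runs; the difference lies in the extra arguments the script passes through Maven, which Beam's option parsing turns into a runner choice. The sketch below illustrates this under stated assumptions (the class name and flag values are placeholders; check run_oncloud1.sh for the arguments it actually passes).

import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;

public class RunnerSelectionSketch {
  public static void main(String[] args) {
    // With no arguments, Beam falls back to the DirectRunner (local execution).
    // A cloud submission typically adds arguments such as (values illustrative):
    //   --runner=DataflowRunner --project=my-project --region=my-region
    //   --stagingLocation=gs://my-bucket/staging --tempLocation=gs://my-bucket/tmp
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    // The transform graph is identical either way; only the runner changes.
    p.apply(Create.of(Arrays.asList("the", "graph", "does", "not", "change")))
     .apply(TextIO.write().to("/tmp/runner_demo/output").withSuffix(".txt"));

    p.run().waitUntilFinish();
  }
}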

  6. Submit the Dataflow job to the cloud:

bash run_oncloud1.sh $DEVSHELL_PROJECT_ID $BUCKET Grep $REGION

Because this is such a small job, running it on the cloud takes significantly longer than running it locally (on the order of 2-3 minutes).

Example completion of the command line:

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:50 min
[INFO] Finished at: 2018-02-06T15:11:23-05:00
[INFO] Final Memory: 39M/206M
[INFO] ------------------------------------------------------------------------
  7. Return to the browser tab for the Console. On the Navigation menu, click Dataflow, and then click your job to monitor its progress.

  8. Wait for the job status to change to Succeeded. At this point, your Cloud Shell displays a command-line prompt.

    Note: If the Dataflow job fails the first time, re-run the previous command to submit a fresh Dataflow job to the cloud.

  9. Examine the output in the Cloud Storage bucket. On the Navigation menu, click Cloud Storage > Buckets, and then click your bucket.

  10. Click the javahelp directory. This job generates the file output.txt. If the file is large enough, it is sharded into multiple parts with names like output-0000x-of-000y. You can identify the most recent file by name or by the Last modified field. Click the file to view it.

Alternatively, you could download the file in Cloud Shell and view it:

gcloud storage cp gs://$BUCKET/javahelp/output* .
cat output*
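The output-0000x-of-000y names come from Beam's default sharded file writes: TextIO chooses the shard count on its own unless the pipeline pins it. The following is a minimal, self-contained sketch of that behavior (the class name and paths are mine, not the lab's):

import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;

public class ShardingDemo {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply(Create.of(Arrays.asList("alpha", "beta", "gamma")))
     // By default, Beam picks the shard count itself, producing
     // files named <prefix>-0000x-of-000y. Here we force a single
     // file instead (fine for small outputs, but it limits write
     // parallelism):
     .apply(TextIO.write().to("/tmp/sharding_demo/output")
                  .withSuffix(".txt")
                  .withNumShards(1));

    p.run().waitUntilFinish();
  }
}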

End your lab

When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
