Serverless Data Processing with Dataflow - Writing an ETL pipeline using Apache Beam and Dataflow (Java)


Lab: 1 hour 30 minutes, 5 Credits, Intermediate

Note: This lab may incorporate AI tools to support your learning.

Overview

In this lab, you:

  • Build a batch Extract-Transform-Load pipeline in Apache Beam, which takes raw data from Google Cloud Storage and writes it to BigQuery.
  • Run the Apache Beam pipeline on Dataflow.
  • Parameterize the execution of the pipeline.

Prerequisites:

  • Basic familiarity with Java.

Setup and requirements

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
    There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts.
    If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

Activate Google Cloud Shell

Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud.

Google Cloud Shell provides command-line access to your Google Cloud resources.

  1. In Cloud console, on the top right toolbar, click the Open Cloud Shell button.

    Highlighted Cloud Shell icon

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:

Project ID highlighted in the Cloud Shell Terminal

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  • You can list the active account name with this command:
gcloud auth list

Output:

Credentialed accounts:
- @.com (active)

Example output:

Credentialed accounts:
- google1623327_student@qwiklabs.net
  • You can list the project ID with this command:
gcloud config list project

Output:

[core]
project =

Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: Full documentation of gcloud is available in the gcloud CLI overview guide.

Check project permissions

Before you begin your work on Google Cloud, you need to ensure that your project has the correct permissions within Identity and Access Management (IAM).

  1. In the Google Cloud console, on the Navigation menu (Navigation menu icon), select IAM & Admin > IAM.

  2. Confirm that the default compute Service Account {project-number}-compute@developer.gserviceaccount.com is present and has the editor role assigned. The account prefix is the project number, which you can find on Navigation menu > Cloud Overview > Dashboard.

Compute Engine default service account name and editor status highlighted on the Permissions tabbed page

Note: If the account is not present in IAM or does not have the editor role, follow the steps below to assign the required role.
  1. In the Google Cloud console, on the Navigation menu, click Cloud Overview > Dashboard.
  2. Copy the project number (e.g. 729328892908).
  3. On the Navigation menu, select IAM & Admin > IAM.
  4. At the top of the roles table, below View by Principals, click Grant Access.
  5. For New principals, type:
{project-number}-compute@developer.gserviceaccount.com
  6. Replace {project-number} with your project number.
  7. For Role, select Project (or Basic) > Editor.
  8. Click Save.

Setting up your IDE

For the purposes of this lab, you will mainly be using a Theia Web IDE hosted on Google Compute Engine. It has the lab repo pre-cloned. There is Java language server support, as well as a terminal for programmatic access to Google Cloud APIs via the gcloud command-line tool, similar to Cloud Shell.

  1. To access your Theia IDE, copy and paste the link shown in Google Cloud Skills Boost to a new tab.
Note: You may need to wait 3-5 minutes for the environment to be fully provisioned, even after the URL appears. Until then, you will see an error in the browser.

Credentials pane displaying the ide_url

The lab repo has been cloned to your environment. Each lab is divided into a labs folder with code to be completed by you, and a solution folder with a fully workable example to reference if you get stuck.

  2. Click on the File Explorer button to view the lab folders:

Expanded File Explorer menu with the labs folder highlighted

You can also create multiple terminals in this environment, just as you would with Cloud Shell:

New Terminal option highlighted in the Terminal menu

You can see, by running gcloud auth list in the terminal, that you're logged in as a provided service account, which has the exact same permissions as your lab user account:

Terminal displaying the gcloud auth list command

If at any point your environment stops working, you can try resetting the VM hosting your IDE from the Compute Engine console, as shown below:

Both the Reset button and VM instance name highlighted on the VM instances page

Apache Beam and Dataflow

About 5 minutes

Dataflow is a fully-managed Google Cloud service for running batch and streaming Apache Beam data processing pipelines.

Apache Beam is an open source, advanced, unified and portable data processing programming model that allows end users to define both batch and streaming data-parallel processing pipelines using Java, Python, or Go. Apache Beam pipelines can be executed on your local development machine on small datasets, and at scale on Dataflow. However, because Apache Beam is open source, other runners exist — you can run Beam pipelines on Apache Flink and Apache Spark, among others.

Beam model architecture diagram.

Lab part 1. Writing an ETL pipeline from scratch

Introduction

In this section, you write an Apache Beam Extract-Transform-Load (ETL) pipeline from scratch.

Dataset and use case review

For each lab in this quest, the input data is intended to resemble web server logs in Common Log format along with other data that a web server might contain. For this first lab, the data is treated as a batch source; in later labs, the data will be treated as a streaming source. Your task is to read the data, parse it, and then write it to BigQuery, a serverless data warehouse, for later data analysis.

Open the appropriate lab

  • Create a new terminal in your IDE environment, if you haven't already, and copy and paste the following command:
# Change directory into the lab
cd 1_Basic_ETL/labs
export BASE_DIR=$(pwd)

Modifying the pom.xml file

Before you can begin editing the actual pipeline code, you need to add in necessary dependencies.

  1. Add the following dependencies to your pom.xml file, which is located in 1_Basic_ETL/labs, inside the dependencies tag:

<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-sdks-java-core</artifactId>
  <version>${beam.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-direct-java</artifactId>
  <version>${beam.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-sdks-java-io-google-cloud-platform</artifactId>
  <version>${beam.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-google-cloud-dataflow-java</artifactId>
  <version>${beam.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-sdks-java-extensions-google-cloud-platform-core</artifactId>
  <version>${beam.version}</version>
</dependency>
  2. A <beam.version> tag has already been added in the pom.xml to indicate which version of Beam to install.

  3. Save the file.

  4. Lastly, download these dependencies for use in your pipeline:

# Download dependencies listed in pom.xml
mvn clean dependency:resolve

Write your first pipeline

1 hour

Task 1. Generate synthetic data

  1. Run the following command in the shell to clone a repository containing scripts for generating synthetic web server logs:
# Change to the directory containing the relevant code
cd $BASE_DIR/../..
# Create GCS buckets and BQ dataset
source create_batch_sinks.sh
# Run a script to generate a batch of web server log events
bash generate_batch_events.sh
# Examine some sample events
head events.json

The script creates a file called events.json containing lines resembling the following:

{"user_id": "-6434255326544341291", "ip": "192.175.49.116", "timestamp": "2019-06-19T16:06:45.118306Z", "http_request": "\"GET eucharya.html HTTP/1.0\"", "lat": 37.751, "lng": -97.822, "http_response": 200, "user_agent": "Mozilla/5.0 (compatible; MSIE 7.0; Windows NT 5.01; Trident/5.1)", "num_bytes": 182}

It then automatically copies this file to your Google Cloud Storage bucket at gs://<YOUR-PROJECT-ID>/events.json.

  2. Navigate to Google Cloud Storage and confirm that your storage bucket contains a file called events.json.

Click Check my progress to verify the objective. Generate synthetic data

Task 2. Read data from your source

If you get stuck in this or later sections, refer to the solution.

  1. Open up MyPipeline.java in your IDE, which can be found in 1_Basic_ETL/labs/src/main/java/com/mypackage/pipeline. Make sure the following packages are imported:

import com.google.gson.Gson;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.schemas.JavaFieldSchema;
import org.apache.beam.sdk.schemas.annotations.DefaultSchema;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
  2. Scroll down to the run() method. This method currently contains a pipeline that doesn’t do anything; note how a Pipeline object is created using a PipelineOptions object and the final line of the method runs the pipeline.

Pipeline pipeline = Pipeline.create(options);
// Do stuff
pipeline.run();

All data in Apache Beam pipelines reside in PCollections. To create your pipeline’s initial PCollection, apply a root transform to your pipeline object. A root transform creates a PCollection from either an external data source or some local data you specify.

There are two kinds of root transforms in the Beam SDKs: Read and Create. Read transforms read data from an external source, such as a text file or a database table. Create transforms create a PCollection from an in-memory java.util.Collection and are especially useful for testing.
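For reference, a Create root transform builds a PCollection directly from in-memory values. Here is a minimal sketch (this snippet is illustrative, not part of the lab's code; Create lives in org.apache.beam.sdk.transforms):

import org.apache.beam.sdk.transforms.Create;

// Build a small PCollection from hard-coded strings -- handy for tests
PCollection<String> testLines = pipeline.apply("CreateLines",
    Create.of("first line", "second line", "third line"));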

The following example code shows how to apply a TextIO.Read root transform to read data from a text file. The transform is applied to a Pipeline object p, and returns a pipeline data set in the form of a PCollection<String>. "ReadLines" is your name for the transform, which will be helpful later when working with larger pipelines:

PCollection<String> lines = pipeline.apply("ReadLines", TextIO.read().from("gs://path/to/input.txt"));
  1. Inside the run() method, create a string constant called “input” and set its value to gs://<YOUR-PROJECT-ID>/events.json. In a future lab, you will use command-line parameters to pass this information.

  2. Create a PCollection of strings of all the events in events.json by calling the TextIO.read() transform.

  3. Add any appropriate import statements to the top of MyPipeline.java, in this case import org.apache.beam.sdk.values.PCollection;
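Taken together, a minimal sketch of steps 1-3 might look like the following (with <YOUR-PROJECT-ID> replaced by your actual project ID):

// Step 1: hard-code the input path for now; a later lab parameterizes this
String input = "gs://<YOUR-PROJECT-ID>/events.json";

// Step 2: read each line of events.json into a PCollection of JSON strings
PCollection<String> lines = pipeline.apply("ReadLines", TextIO.read().from(input));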

Task 3. Run your pipeline to verify that it works

  • Return to the terminal, change back to the $BASE_DIR folder, and execute the mvn compile exec:java command:

cd $BASE_DIR
# Set up environment variables
export MAIN_CLASS_NAME=com.mypackage.pipeline.MyPipeline
mvn compile exec:java \
-Dexec.mainClass=${MAIN_CLASS_NAME}

Note: In case the build fails, run the command: mvn clean install

At the moment, your pipeline doesn’t actually do anything; it simply reads in data. However, running it demonstrates a useful workflow, in which you verify the pipeline locally and cheaply using DirectRunner running on your local machine before doing more expensive computations. To run the pipeline using Dataflow, you can change the runner to DataflowRunner.
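In this lab the runner is chosen via the --runner command-line flag, but it can also be set in code. A hedged sketch (the runner classes come from the runner artifacts you added to pom.xml earlier):

import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.direct.DirectRunner;

// Fast, local feedback while developing:
options.setRunner(DirectRunner.class);
// Managed, scalable execution on Google Cloud:
// options.setRunner(DataflowRunner.class);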

Task 4. Adding in a transformation

If you get stuck, refer to the solution.

Transforms are what change your data. In Apache Beam, transforms are represented by the PTransform class. At runtime, these operations are performed on a number of independent workers. The input and output of every PTransform is a PCollection. In fact, though you may not have realized it, you have already used a PTransform when you read in data from Google Cloud Storage. Whether or not you assigned it to a variable, this created a PCollection of strings.

Because Beam uses a generic apply method for PCollections, you can chain transforms sequentially. For example, you can chain transforms to create a sequential pipeline, like this one:

[Final Output PCollection] = [Initial Input PCollection]
    .apply([First Transform])
    .apply([Second Transform])
    .apply([Third Transform]);

For this task, use a new sort of transform: a ParDo. ParDo is a Beam transform for generic parallel processing. The ParDo processing paradigm is similar to the “Map” phase of a Map/Shuffle/Reduce-style algorithm: a ParDo transform considers each element in the input PCollection, performs some processing function (your user code) on that element, and emits zero, one, or multiple elements to an output PCollection.

ParDo is useful for a variety of common data processing operations, including:

  • Filtering a data set. You can use ParDo to consider each element in a PCollection and either output that element to a new collection, or discard it (see the sketch after this list).
  • Formatting or type-converting each element in a data set. If your input PCollection contains elements that are of a different type or format than you want, you can use ParDo to perform a conversion on each element and output the result to a new PCollection.
  • Extracting parts of each element in a data set. If you have a PCollection of records with multiple fields, for example, you can use a ParDo to parse out just the fields you want to consider into a new PCollection.
  • Performing computations on each element in a data set. You can use ParDo to perform simple or complex computations on every element, or certain elements, of a PCollection and output the results as a new PCollection.
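For instance, a filtering ParDo might look like the following (a generic illustration, not part of this lab's task; lines is assumed to be a PCollection<String>, and note how emitting nothing simply drops an element):

// Keep only non-empty lines
PCollection<String> nonEmpty = lines.apply("FilterEmpty",
    ParDo.of(new DoFn<String, String>() {
        @ProcessElement
        public void processElement(@Element String line, OutputReceiver<String> r) {
            if (!line.isEmpty()) {
                r.output(line);
            }
        }
    }));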
  1. To complete this task, write a ParDo transform that reads in a JSON string representing a single event, parses it using Gson, and outputs the custom object returned by Gson.

ParDo functions can be implemented in two ways, either inline or as a static class. Write inline ParDo functions like this:

pCollection.apply(ParDo.of(new DoFn<T1, T2>() {
    @ProcessElement
    public void processElement(@Element T1 i, OutputReceiver<T2> r) {
        // Do something, then emit zero or more elements of type T2
        r.output(/* an element of type T2 */);
    }
}));

Alternatively, they can be implemented as static classes that extend DoFn. This allows them to be more easily integrated with testing frameworks:

static class MyDoFn extends DoFn<T1, T2> {
    @ProcessElement
    public void processElement(@Element T1 json, OutputReceiver<T2> r) throws Exception {
        // Do something, then emit zero or more elements of type T2
        r.output(/* an element of type T2 */);
    }
}

And then, within the pipeline itself:

[Initial Input PCollection].apply(ParDo.of(new MyDoFn()));
  2. In order to use Gson, you will need to create an inner class inside of MyPipeline. To take advantage of Beam Schemas, add the @DefaultSchema annotation. More on that later. Here’s an example of how to use Gson:

// Elsewhere
@DefaultSchema(JavaFieldSchema.class)
class MyClass {
    int field1;
    String field2;
}

// Within the DoFn
Gson gson = new Gson();
MyClass myClass = gson.fromJson(jsonString, MyClass.class);
  3. Name your inner class CommonLog. To construct this inner class with the right state variables, refer to the sample JSON above: the class should have one state variable for every key in the incoming JSON, and each variable should agree in type and name with its corresponding value and key.

  4. Use String for "timestamp" for now, Long for BigQuery INTEGER columns (BigQuery INTEGER is INT64), and Double for BigQuery FLOAT columns (BigQuery FLOAT is FLOAT64), and match the following BigQuery schema:

CommonLog Schema tabbed page, which includes the log information such as user_id, timestamp, and num_bytes.

Remember, if you get stuck, refer to the solution.
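If it helps to see the overall shape, here is a minimal sketch of what the inner class and DoFn might look like, based on the sample JSON above (the field types are inferred from the sample values; check them against the schema shown in the screenshot, and against the solution):

@DefaultSchema(JavaFieldSchema.class)
static class CommonLog {
    String user_id;
    String ip;
    Double lat;
    Double lng;
    String timestamp;
    String http_request;
    String user_agent;
    Long http_response;
    Long num_bytes;
}

static class JsonToCommonLog extends DoFn<String, CommonLog> {
    @ProcessElement
    public void processElement(@Element String json, OutputReceiver<CommonLog> r) {
        // Parse one JSON event into a CommonLog object and emit it
        Gson gson = new Gson();
        CommonLog commonLog = gson.fromJson(json, CommonLog.class);
        r.output(commonLog);
    }
}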

Task 5. Writing to a sink

At this point, your pipeline reads a file from Google Cloud Storage, parses each line, and emits a CommonLog for each element. The next step is to write these CommonLog objects into a BigQuery table.

While you can instruct your pipeline to create a BigQuery table if needed, you will need to create the dataset ahead of time. This has already been done by the generate_batch_events.sh script.

You can examine the dataset:

# Examine dataset
bq ls
# No tables yet
bq ls logs

To output your pipeline’s final PCollections, you apply a Write transform to that PCollection. Write transforms can output the elements of a PCollection to an external data sink, such as a database table. You can use Write to output a PCollection at any time in your pipeline, although you’ll typically write out data at the end of your pipeline.

The following example code shows how to apply a TextIO.Write transform to write a PCollection of String to a text file:

PCollection<String> pCollection = ...;
pCollection.apply("WriteMyFile", TextIO.write().to("gs://path/to/output"));
  1. In this case, instead of using TextIO.write(), use BigQueryIO.write().

This function requires a number of things to be specified, including the specific table to write to and the schema of this table. You can optionally specify whether to append to an existing table, recreate existing tables (helpful in early pipeline iteration), or create the table if it doesn't exist. By default, this transform will create tables that don't exist and won't write to a non-empty table.

Since the addition of Beam Schemas to the SDK, you can instruct the transform to infer the table schema from the object passed to it by using .useBeamSchema() and marking the input type. Alternatively, you can explicitly provide the schema with .withSchema(), but you would then need to build a BigQuery TableSchema object to pass. Because you annotated the CommonLog class with @DefaultSchema(JavaFieldSchema.class), each transform is aware of the names and types of the fields in the object, including BigQueryIO.write().

  2. Examine the various alternatives in the 'Writing' section of BigQueryIO. In this case, since you annotated your CommonLog object, utilize .useBeamSchema(), and target the <YOUR-PROJECT-ID>:logs.logs table like this:

pCollection.apply(BigQueryIO.<MyObject>write()
    .to("my-project:output_dataset.output_table")
    .useBeamSchema()
    .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
);

Note: WRITE_TRUNCATE will delete and recreate your table each and every time. This is helpful in early pipeline iteration, especially as you are iterating on your schema, but can easily cause unintended issues in production. WRITE_APPEND or WRITE_EMPTY are safer.

The set of all available types in Beam Schemas can be found in the Schema.FieldType documentation. All the BigQuery Standard SQL data types that can be used are listed in the setType documentation, and if you're curious, you can inspect the Beam Schema to BigQuery conversion code.

Task 6. Run your pipeline

Return to the terminal, set the RUNNER environment variable to DataflowRunner, and run your pipeline using the command below. Once it has started, navigate to the Dataflow product page and note the arrangement of your pipeline. If you gave your transforms names, they will be displayed. Clicking on each one reveals, in real time, the number of elements being processed each second.

The overall shape should be a single path, starting with the Read transform and ending with the Write transform. As your pipeline runs, workers will be added automatically, as the service determines the needs of your pipeline, and then disappear when they are no longer needed. You can observe this by navigating to Compute Engine, where you should see virtual machines created by the Dataflow service.

Note: If your pipeline is building successfully, but you're seeing a lot of errors due to code or misconfiguration in the Dataflow service, you can set RUNNER back to 'DirectRunner' to run it locally and receive faster feedback. This approach works in this case because the dataset is small and you are not using any features that aren't supported by DirectRunner.

# Set up environment variables
export PROJECT_ID=$(gcloud config get-value project)
export REGION='{{{project_0.default_region|Region}}}'
export PIPELINE_FOLDER=gs://${PROJECT_ID}
export MAIN_CLASS_NAME=com.mypackage.pipeline.MyPipeline
export RUNNER=DataflowRunner
cd $BASE_DIR
mvn compile exec:java \
-Dexec.mainClass=${MAIN_CLASS_NAME} \
-Dexec.cleanupDaemonThreads=false \
-Dexec.args=" \
--project=${PROJECT_ID} \
--region=${REGION} \
--stagingLocation=${PIPELINE_FOLDER}/staging \
--tempLocation=${PIPELINE_FOLDER}/temp \
--runner=${RUNNER}"
  • Once your pipeline has finished, return to the BigQuery browser window and query your table.
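For example, you could also query the table from the terminal with something like this (a sketch; the table and column names assume the logs.logs table and CommonLog schema used in this lab):

bq query --use_legacy_sql=false \
  'SELECT user_id, http_response, num_bytes FROM logs.logs LIMIT 10'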

If your code isn’t performing as expected and you don’t know what to do, check out the solution.

Click Check my progress to verify the objective. Run your pipeline

Lab part 2. Parameterizing basic ETL

Approximately 20 minutes

Much of the work of data engineers is either predictable, like recurring jobs, or it’s similar to other work. However, the process for running pipelines requires engineering expertise. Think back to the steps that you just completed:

  1. You created a development environment and developed a pipeline. The environment included the Apache Beam SDK and other dependencies.
  2. You executed the pipeline from the development environment. The Apache Beam SDK staged files in Cloud Storage, created a job request file, and submitted the file to the Dataflow service.

It would be much better if there were a way to initiate a job through an API call, or without having to set up a development environment (which non-technical users would be unable to do). This would also allow you to schedule pipelines to run.

Dataflow Templates seek to solve this problem by changing the representation that is created when a pipeline is compiled so that it is parameterizable. Unfortunately, it is not as simple as exposing command-line parameters, although that is something you do in a later lab. With Dataflow Templates, the workflow above becomes:

  1. Developers create a development environment and develop their pipeline. The environment includes the Apache Beam SDK and other dependencies.
  2. Developers execute the pipeline and create a template. The Apache Beam SDK stages files in Cloud Storage, creates a template file (similar to job request), and saves the template file in Cloud Storage.
  3. Non-developer users and other workflow tools like Airflow can then easily execute jobs using the Google Cloud console, the gcloud command-line tool, or the REST API to submit template execution requests to the Dataflow service, as sketched below.
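As a rough illustration of that last step, a templated job can be submitted from the command line along these lines (a sketch only; the template location and parameter names here are assumptions that should be checked against the template's documentation, and in Task 3 you will run this same template through the console instead):

gcloud dataflow jobs run my-etl-job \
  --gcs-location=gs://dataflow-templates/latest/GCS_Text_to_BigQuery \
  --region=us-central1 \
  --parameters=\
inputFilePattern=gs://<YOUR-PROJECT-ID>/events.json,\
JSONPath=gs://<YOUR-PROJECT-ID>/schema.json,\
javascriptTextTransformGcsPath=gs://<YOUR-PROJECT-ID>/transform.js,\
javascriptTextTransformFunctionName=transform,\
outputTable=<YOUR-PROJECT-ID>:logs.logs,\
bigQueryLoadingTemporaryDirectory=gs://<YOUR-PROJECT-ID>/bq_temp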

In this lab, you will practice using one of the many Google-created Dataflow Templates to accomplish the same task as the pipeline that you built in Part 1.

Task 1. Creating a JSON schema file

While you didn't have to pass a TableSchema object to the BigQueryIO.write() transform, since you utilized .useBeamSchema(), you must pass the Dataflow Template a JSON file representing the schema in this example.

  1. Open up the terminal and navigate back to the main directory. Run the following command to grab the schema from your existing logs.logs table:
cd $BASE_DIR/../..
bq show --schema --format=prettyjson logs.logs
  2. Now, capture this output in a file and upload it to GCS. The extra sed commands build the full JSON object that Dataflow will expect.

bq show --schema --format=prettyjson logs.logs | sed '1s/^/{"BigQuery Schema":/' | sed '$s/$/}/' > schema.json
cat schema.json
export PROJECT_ID=$(gcloud config get-value project)
gsutil cp schema.json gs://${PROJECT_ID}/
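The resulting schema.json should have roughly this shape (a truncated illustration; the exact field list and types come from your logs.logs table):

{"BigQuery Schema": [
  {"name": "user_id", "type": "STRING"},
  {"name": "num_bytes", "type": "INTEGER"},
  ...
]}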

Click Check my progress to verify the objective. Creating a JSON schema file

Task 2. Writing a JavaScript user-defined function

The Cloud Storage to BigQuery Dataflow Template requires a JavaScript function to convert the raw text into valid JSON. In this case, each line of text is valid JSON, so the function is somewhat trivial.

To complete this task, use the IDE to create a .js file with the contents below and then copy it to Google Cloud Storage.

  1. Copy the function below to its own transform.js file in the 1_Basic_ETL/ folder:
function transform(line) {
  return line;
}
  2. Then run the following to copy the file to Google Cloud Storage:

cd 1_Basic_ETL/
export PROJECT_ID=$(gcloud config get-value project)
gsutil cp *.js gs://${PROJECT_ID}/

Click Check my progress to verify the objective. Writing a JavaScript User-Defined Function

Task 3. Running a Dataflow Template

  1. Go to the Dataflow Web UI.

  2. Click CREATE JOB FROM TEMPLATE.

  3. Enter a job name for your Dataflow job.

  4. For Regional endpoint, select your region.

  5. Under Dataflow template, select the Text Files on Cloud Storage to BigQuery template under the Process Data in Bulk (batch) section - NOT the Streaming section.

  6. For Cloud Storage input path, enter the path to events.json in the form gs://<YOUR-PROJECT-ID>/events.json

  7. For Cloud Storage location of your BigQuery schema file, write the path to your schema.json file, in the form gs://<YOUR-PROJECT-ID>/schema.json

  8. For BigQuery output table, enter <YOUR-PROJECT-ID>:logs.logs

  9. For Temporary BigQuery directory, enter a new folder within this same bucket. The job will create it for you.

  10. For Temporary location, enter a second new folder within this same bucket.

  11. Leave Encryption at Google-managed key.

  12. Click to open Optional Parameters.

  13. For JavaScript UDF path in Cloud Storage, enter the path to your .js file, in the form gs://<YOUR-PROJECT-ID>/transform.js

  14. For Javascript UDF name, enter transform

  15. Click the Run job button.

While your job is running, you may inspect it from within the Dataflow Web UI.

Click Check my progress to verify the objective. Running a Dataflow Template

Task 4. Inspect the Dataflow Template code

  1. Examine the code for the Dataflow Template you just used.

  2. Scroll down to the main method. The code should look similar to the pipeline you authored!

  • It begins with a Pipeline object, created using a PipelineOptions object.
  • It consists of a chain of PTransforms, beginning with a TextIO.read() transform.
  • The PTransform after the read transform is a bit different; it allows you to use JavaScript to transform the input strings if, for example, the source format doesn’t align well with the BigQuery table format; for documentation on how to use this feature, see this page.
  • The PTransform after the JavaScript UDF uses a library function to convert the JSON into a TableRow; you can inspect that code here.
  • The write PTransform looks a bit different because, instead of making use of a schema that is known at graph compile-time, the code is intended to accept parameters that will only be known at run-time. The NestedValueProvider class is what makes this possible (see the sketch below). It also uses an explicitly set schema, rather than one inferred from the Beam schema using .useBeamSchema() as you did.
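To see the run-time parameter pattern in miniature, such values are typically declared as ValueProviders on a custom options interface. A minimal, hypothetical sketch (the interface and getter names here are illustrative, not the template's actual code):

import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.ValueProvider;

public interface TemplateOptions extends PipelineOptions {
    // The value is not resolved until the templated job actually runs
    ValueProvider<String> getInputFilePattern();
    void setInputFilePattern(ValueProvider<String> value);
}

IO transforms such as TextIO.read().from(...) accept a ValueProvider<String> directly, which is what lets a template defer the value until job submission.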
  3. Make sure to check out the next lab, which will cover making pipelines that are not simply chains of PTransforms, and how you can adapt a pipeline you’ve built to be a custom Dataflow Template.

End your lab

When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
