
Configuring MongoDB Atlas with BigQuery Dataflow Templates


Lab · 1 hour 30 minutes · 5 Credits · Intermediate
Note: This lab may incorporate AI tools to support your learning.

This lab was developed with our partner, MongoDB. Your personal information may be shared with MongoDB, the lab sponsor, if you have opted in to receive product updates, announcements, and offers in your Account Profile.

Note: This lab requires a partner account. Please follow the lab instructions to create your account before starting the lab.

GSP1105

Overview

In this lab, you configure MongoDB Atlas, create an M0 cluster, make the cluster accessible from the outside world, and create an admin user. You use the sample_mflix database and its movies collection on MongoDB Atlas, which contains details about movies such as cast and release date. You then create a Dataflow pipeline to load the data from the MongoDB cluster into Google BigQuery. Finally, you query the data in BigQuery using the Google Cloud console and create an ML model from the data.

Objectives

In this lab, you learn how to:

  • Create a BigQuery dataset
  • Create a Dataflow pipeline using the Dataflow UI
  • Create an ML model from the BigQuery dataset

Prerequisites

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This Qwiklabs hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

What you need

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
  • Time to complete the lab.

Note: If you already have your own personal Google Cloud account or project, do not use it for this lab.

Note: If you are using a Pixelbook, open an Incognito window to run this lab.

How to start your lab and sign in to the Google Cloud Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is a panel populated with the temporary credentials that you must use for this lab.

  2. Copy the username, and then click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Open the tabs in separate windows, side-by-side.

  3. In the Sign in page, paste the username that you copied from the Connection Details panel. Then copy and paste the password.

    Important: You must use the credentials from the Connection Details panel. Do not use your Qwiklabs credentials. If you have your own Google Cloud account, do not use it for this lab (avoids incurring charges).

  4. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Cloud Console opens in this tab.

Task 1: Create a MongoDB Atlas database deployment and load sample data

  1. Log in to your MongoDB Atlas account.

  2. You can deploy only one free tier cluster per project. If you already have a free cluster, you will need to create a new project to deploy an additional free cluster. To do this, open the dropdown menu located in the top left corner, just below the Atlas logo, and click New Project. Then, enter a project name, click Next and then Create Project.

  3. Click Create in the Create a cluster section.

    • Select M0 as the cluster tier.
    • Change the name of the cluster to Sandbox.
    • Select Google Cloud and any region of your choice.
    • Check the Preload sample dataset checkbox.
    • Uncheck the Automate security setup checkbox.
    • And finally, click Create Deployment.
  4. You will be prompted to complete the security setup of your deployment.

    • Click Allow Access from Anywhere and then Add IP Address. This will allow you to connect to the cluster from the Google Cloud Shell development environment. You should never do this in a production environment.
    • Fill the Username and Password fields with your desired credentials and click Create Database User.
    • Click Choose connection method, then select Drivers.
    • Wait for Atlas to complete provisioning your database deployment. This may take a few minutes.
    • Once provisioning is complete, copy the connection string.
    • Finally, close the dialog window.
Note: You should never allow access from everywhere in a production environment. Instead, use VPC peering or a private endpoint.
  5. Loading the sample dataset into your database may take a few minutes. Once complete, a Browse collections button will appear under your newly created cluster. Click it to explore the dataset. In this lab, you will configure a Dataflow pipeline to transfer the sample_mflix database from MongoDB to BigQuery.
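The connection string you copied should look roughly like the following; the cluster hostname (sandbox.xxxxx.mongodb.net here) is only an illustrative placeholder and will differ for your deployment:

mongodb+srv://<username>:<password>@sandbox.xxxxx.mongodb.net/

Keep this string handy. In Task 3 you will paste it into the Dataflow template, replacing <password> with the database user password you created above.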

Task 2. Create BigQuery table

Before you run the Dataflow pipeline, you need a BigQuery dataset to store the data. Open the Google Cloud Console and sign in with the lab credentials if you haven't already.

Create BigQuery dataset

  1. Open BigQuery from the Google Cloud console, either by searching for BigQuery in the search bar at the top or from the Navigation menu by navigating to BigQuery under the Analytics section.

  2. In the Explorer panel, select the project where you want to create the dataset.

  3. Click the three vertical dots next to your project ID in the Explorer section, click View actions, and then click Create dataset.

  4. On the Create dataset page:

  • For Dataset ID, enter sample_mflix

  • For Location type, select Region, and then select the region for the dataset. After a dataset is created, the location can't be changed.

  • For Default table expiration, leave the selection empty.

  • Click Create dataset.

  • Verify the dataset is listed under your project name on BigQuery explorer.

  5. Expand the dataset options and click Create table.
  • Name the table movies and click Create table.
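If you prefer working from Cloud Shell, the console steps above can be approximated with the bq command-line tool. This is only a sketch; the REGION value is a placeholder for the region assigned to your lab, and the console flow remains the reference for this lab.

# Create the sample_mflix dataset in your lab project (REGION is a placeholder)
bq mk --location=REGION --dataset sample_mflix

# Create an empty movies table inside the dataset (no schema specified here,
# matching the console step above)
bq mk --table sample_mflix.movies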
Check your progress: Create BigQuery dataset

Task 3. Dataflow Pipeline creation

Now, create a Dataflow pipeline from the Dataflow UI.

Run Dataflow pipeline for MongoDB to BigQuery

  1. To create a Dataflow pipeline, go to the Dataflow Create job from template page.

  2. In the Job name field, enter a unique job name: mongodb-to-bigquery-batch.

  3. For Regional endpoint, select .

  4. From the Dataflow template drop-down menu, select the MongoDB to BigQuery template under Process Data in Bulk (batch).

    Note: Do not select MongoDB to BigQuery (CDC) under Process Data Continuously (stream).

  5. Under Required parameters, enter the following:

  • MongoDB Connection URI

    • Enter the MongoDB connection string you copied earlier. If you haven't copied it, navigate back to MongoDB Atlas and click the Connect button on your Atlas database. Then, click Drivers and copy the connection string.

    • Paste the URI into the MongoDB Connection URI field in the Dataflow UI. Replace <password> with the password you set when creating the database user.

  • MongoDB Database: sample_mflix

    You are using the sample_mflix database from the sample dataset for this lab.

  • MongoDB Collection: movies

  • BigQuery output table:

    • Click the BROWSE button, select movies. Your output table should look like - :sample_mflix.movies
  • Leave User option set to NONE.

  6. Expand Optional Parameters and de-select Use default machine type. Choose E2 from the Series dropdown. Then, choose e2-standard-2 from the Machine type dropdown.

  7. Click Run Job.

View the Dataflow job running on the Dataflow page in the Cloud Console.

Once the job execution is complete, all nodes in the job graph will turn green.

For more details on the values passed, refer to the MongoDB Dataflow Templates documentation.
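For reference only, here is a command-line sketch of launching the same template with gcloud. It assumes the template is run as a Flex Template from the public dataflow-templates bucket; PROJECT_ID, REGION, and the connection URI are placeholders you would replace with your lab values, and the parameter names are taken from the template documentation referenced above.

# Sketch: run the MongoDB to BigQuery batch template from the command line.
# PROJECT_ID, REGION, and the mongoDbUri value are placeholders.
gcloud dataflow flex-template run mongodb-to-bigquery-batch \
  --project=PROJECT_ID \
  --region=REGION \
  --template-file-gcs-location=gs://dataflow-templates-REGION/latest/flex/MongoDB_to_BigQuery \
  --parameters mongoDbUri="mongodb+srv://<username>:<password>@sandbox.xxxxx.mongodb.net/",database=sample_mflix,collection=movies,outputTableSpec=PROJECT_ID:sample_mflix.movies,userOption=NONE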

Note: This job takes 5-10 minutes. Check your progress once your job has fully completed: Create Dataflow Pipeline from MongoDB to BigQuery

Task 4. Build Model using BQML

  1. Once the pipeline runs successfully, open the BigQuery Explorer.

  2. Expand the project, click the sample_mflix dataset, and then double-click the movies table.

  3. Run the following queries one at a time.

    a. Query the first 1000 rows from the table:

SELECT * FROM `{{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies` LIMIT 1000;

    b. Create a movies-unpacked table from the parsed imported data:

# Parse source_data and create movies-unpacked
CREATE OR REPLACE TABLE `{{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies-unpacked` AS (
  SELECT
    timestamp,
    JSON_EXTRACT(source_data, "$.year") AS year,
    JSON_EXTRACT(source_data, "$.cast") AS movie_cast,
    SAFE_CAST(JSON_EXTRACT(source_data, "$.imdb.rating") AS FLOAT64) AS imdb_rating,
    JSON_EXTRACT(source_data, "$.title") AS title,
    JSON_EXTRACT(source_data, "$.imdb.votes") AS imdb_votes,
    JSON_EXTRACT(source_data, "$.type") AS type,
    JSON_EXTRACT(source_data, "$.rated") AS rated,
    SAFE_CAST(JSON_EXTRACT(source_data, "$.tomatoes.viewer.rating") AS FLOAT64) AS tomato_rating,
    JSON_EXTRACT(source_data, "$.tomatoes.viewer.numReviews") AS tomato_numReviews,
    JSON_EXTRACT(source_data, "$.tomatoes.viewer.meter") AS tomato_meter
  FROM `{{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies`
);

    c. Query and visualize the movies-unpacked table:

# Query newly created table
SELECT * FROM `{{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies-unpacked` LIMIT 1000;

    d. Create the sample_mflix.ratings model using the k-means clustering algorithm:

# Clustering based on imdb rating and tomato rating
CREATE OR REPLACE MODEL `sample_mflix.ratings`
OPTIONS(
  MODEL_TYPE = 'KMEANS',
  NUM_CLUSTERS = 5,
  KMEANS_INIT_METHOD = 'KMEANS++',
  DISTANCE_TYPE = 'EUCLIDEAN'
) AS
SELECT
  AVG(imdb_rating) AS avg_imdb_rating,
  AVG(tomato_rating) AS avg_tomato_rating,
  rated
FROM `{{{ project_0.project_id | "PROJECT_ID" }}}.sample_mflix.movies-unpacked`
GROUP BY rated;
  4. Finally, navigate to the model created under the Models section of your dataset.
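Optionally, once the model appears, you can inspect the clusters it learned directly with SQL. The following is a minimal sketch using BigQuery ML's ML.CENTROIDS function against the model created above:

# Inspect the centroids learned by the k-means model
SELECT centroid_id, feature, numerical_value
FROM ML.CENTROIDS(MODEL `sample_mflix.ratings`)
ORDER BY centroid_id;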

Check your progress: Create the BQML model

Congratulations!

In this lab, you provisioned your first MongoDB Atlas cluster, loaded a sample dataset, and set up a BigQuery dataset. You also used the Dataflow UI to build a MongoDB to BigQuery pipeline and performed clustering on the data using the k-means algorithm.

You can deploy this model to Vertex AI to manage and scale it, but that is out of scope for this lab.

Next steps/Learn more

Get $500 in free credits for MongoDB on Google Cloud Marketplace (applicable only to new customers).

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated July 18, 2024

Lab Last Tested July 18, 2024

Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
