
Before you begin
- Labs create a Google Cloud project and resources for a fixed time
- Labs have a time limit and no pause feature. If you end the lab, you'll have to restart from the beginning.
- On the top left of your screen, click Start lab to begin.
In this lab, you learn how to use Vertex AI Pipelines to execute a simple ML pipeline built with the Kubeflow Pipelines SDK.

You perform the following tasks:
- Configure the environment
- Deploy the pipeline
This Qwiklabs hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
- Access to a standard internet browser (Chrome browser recommended).
- Time to complete the lab.
Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is a panel populated with the temporary credentials that you must use for this lab.
Copy the username, and then click Open Google Console. The lab spins up resources, and then opens another tab that shows the Choose an account page.
On the Choose an account page, click Use Another Account. The Sign in page opens.
Paste the username that you copied from the Connection Details panel. Then copy and paste the password.
After a few moments, the Cloud console opens in this tab.
Before you begin your work on Google Cloud, you need to ensure that your project has the correct permissions within Identity and Access Management (IAM).
In the Google Cloud console, on the Navigation menu, select IAM & Admin > IAM.
Confirm that the default compute service account {project-number}-compute@developer.gserviceaccount.com is present and has the editor role assigned. The account prefix is the project number, which you can find on Navigation menu > Cloud Overview > Dashboard.

If the account is not present in IAM or does not have the editor role, follow the steps below to assign the required role.

On the Navigation menu, click Cloud Overview > Dashboard, and copy the project number (e.g. 729328892908).
On the Navigation menu, select IAM & Admin > IAM.
At the top of the roles table, click Grant Access.
For New principals, type {project-number}-compute@developer.gserviceaccount.com, replacing {project-number} with your project number.
For Role, select Basic > Editor, and click Save.

Vertex AI Pipelines run in a serverless framework in which pre-compiled pipelines are deployed on demand or on a schedule. To facilitate smooth execution, some environment configuration is required.
For seamless execution of pipeline code in a Qwiklabs environment, the compute service account needs elevated privileges on Cloud Storage.
In the Google Cloud console, on the Navigation menu, click IAM & Admin > IAM.
Click the pencil icon for the default compute service account {project-number}-compute@developer.gserviceaccount.com to assign the Storage Admin role.
On the slide-out window, click Add Another Role. Type Storage Admin in the search box. Select Storage Admin with Grants full control of buckets and objects from the results list.
Click Save to assign the role to the Compute Service Account.
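If you prefer to work in Cloud Shell, the same grant can be made with gcloud. This is a sketch; replace {project-number} with your project number as before.

```bash
# Sketch: grant Storage Admin to the default compute service account.
# Replace {project-number} with your project number.
gcloud projects add-iam-policy-binding $(gcloud config get-value project) \
    --member="serviceAccount:{project-number}-compute@developer.gserviceaccount.com" \
    --role="roles/storage.admin"
```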
Artifacts in Cloud Storage are read on ingest and written on export as the pipeline executes.
The Pipeline has already been created for you and simply requires a few minor adjustments to allow it to run in your Qwiklabs project.
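Those adjustments are made for you, but for reference, staging a compiled pipeline in Cloud Storage typically looks like the following Cloud Shell sketch. The bucket name here is an assumption, not a lab-provided value; use the bucket in your own project.

```bash
# Sketch only: create a bucket and stage the pipeline file in pipeline-input.
# Naming the bucket after the project ID is an assumption for illustration.
export PROJECT_ID=$(gcloud config get-value project)

gsutil mb gs://$PROJECT_ID                                     # skip if the bucket already exists
gsutil cp basic_pipeline.json gs://$PROJECT_ID/pipeline-input/
```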
Click Check my progress to verify the objective.
The pipeline code is a composition of two AI operations written in Python. The example is very simple, but it demonstrates how easy it is to orchestrate ML procedures written in a variety of languages (TensorFlow, Python, Java, etc.) into an easy-to-deploy AI pipeline. The lab example performs two operations, concatenation and reverse, on two string values.
The key sections of code in basic_pipeline.json are the deploymentSpec and command blocks. Below is the first command block, the job that concatenates the input strings. This is Kubeflow Pipelines SDK (kfp) code that is designated to be executed by the Python 3.7 engine. You will not change any code; the section is shown here for your reference:
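The excerpt below is a trimmed, illustrative reconstruction of that block; the exact contents of your basic_pipeline.json will differ.

```json
"deploymentSpec": {
  "executors": {
    "exec-concat": {
      "container": {
        "image": "python:3.7",
        "command": [
          "sh", "-c",
          "python3 -m pip install --quiet kfp && \"$0\" \"$@\"",
          "sh", "-ec",
          "program_path=$(mktemp)\nprintf \"%s\" \"$0\" > \"$program_path\"\npython3 -u \"$program_path\" \"$@\"",
          "\ndef concat(first: str, second: str) -> str:\n    return first + second\n"
        ]
      }
    }
  }
}
```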
Click Check my progress to verify the objective.
From the console, open the Navigation menu, and under Artificial Intelligence, click Vertex AI.
Click the blue Enable all recommended APIs button.
Once the API is enabled, click Pipelines in the left menu.
Click Create Run on the top menu.
From Run detail, select Import from Cloud Storage. For Cloud Storage URL, browse to the pipeline-input folder you created inside your project's Cloud Storage bucket. Select the basic_pipeline.json file.
Click Select.
For Region, select the region assigned to your lab.
Leave the remaining default values, and click Continue.
You may leave the default values for Runtime configuration. Notice that the Cloud Storage Output Directory is set to the bucket folder created in an earlier step. The Pipeline Parameters are pre-filled from the values in the basic_pipeline.json file but you have the option of changing those at runtime via this wizard.
Click Submit to start the Pipeline execution.
You will be returned to the Pipeline dashboard and your run will progress from Pending to Running to Succeeded. The entire run will take between 3 and 6 minutes.
Once the status reaches Succeeded, click the run name to see the execution graph and details.
A graph element exists for each step. Click the concat object to see the details for the job.
Click the View Job button. A new tab opens with the Vertex AI Custom Job that was submitted to the backend to satisfy the pipeline request.
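You can also list these backend jobs from Cloud Shell. In the sketch below, REGION is a placeholder for the region you selected for the run.

```bash
# List the Vertex AI Custom Jobs created to run the pipeline steps.
gcloud ai custom-jobs list --region=REGION
```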
Feel free to explore more details on the Pipeline execution.
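As a further reference, a run like the one you just created can also be submitted programmatically with the Vertex AI SDK instead of the console wizard. This is a minimal sketch, assuming the google-cloud-aiplatform package; PROJECT_ID, REGION, and BUCKET are placeholders, not lab-provided values.

```python
# Sketch: submit the compiled pipeline with the Vertex AI SDK.
# PROJECT_ID, REGION, and BUCKET are placeholders for your own values.
from google.cloud import aiplatform

aiplatform.init(project="PROJECT_ID", location="REGION")

job = aiplatform.PipelineJob(
    display_name="basic-pipeline-run",
    template_path="gs://BUCKET/pipeline-input/basic_pipeline.json",
    pipeline_root="gs://BUCKET/pipeline-output",
)

job.run(sync=False)  # returns after submission; monitor progress in the console
```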
You have successfully used Vertex AI Pipelines to execute a simple ML pipeline built with the Kubeflow Pipelines SDK.
Manual Last Updated April 26, 2024
Lab Last Tested April 26, 2024
Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.