
Before you begin
- Labs create a Google Cloud project and resources for a fixed time.
- Labs have a time limit and no pause feature. If you end the lab, you'll have to restart from the beginning.
- On the top left of your screen, click Start lab to begin.
This lab includes the following scored tasks:
- Enable the Vertex AI API (10 points)
- Create a Vertex AI Search app (30 points)
- Build and deploy your app on Cloud Run (25 points)
- Make changes to the website's content (35 points)
Generative AI is a technology that can be used to create content such as text, video, images, and code. Google Cloud offers a variety of large language models (LLMs) and tools to help you get started with GenAI, such as Gemini and Vertex AI. You can use LLM-powered tools to create and enhance content for your websites, and add conversational search experiences. You can also promote web page discovery and enhance website navigation.
In this lab, you implement a website modernization solution to:
- Add search capability to a website by creating a Vertex AI Search app.
- Build and deploy the website application on Cloud Run.
- Update the website's text and image content using generative AI tools.
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Sign in to Qwiklabs using an incognito window.
Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
There is no pause feature. You can restart if needed, but you have to start at the beginning.
When ready, click Start lab.
Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.
Click Open Google Console.
Click Use another account and copy/paste credentials for this lab into the prompts.
If you use other credentials, you'll receive errors or incur charges.
Accept the terms and skip the recovery resource page.
Cloud Shell is a virtual machine that contains development tools. It offers a persistent 5-GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources. gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab completion.
Click the Activate Cloud Shell button () at the top right of the console.
Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected, you are also authenticated, and the project is set to your PROJECT_ID.
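(Optional) Once connected, you can confirm the active account and project with the following standard gcloud commands:

```bash
# List the account that Cloud Shell authenticated with.
gcloud auth list

# Show the project that the environment is set to.
gcloud config list project
```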
Before you can use Vertex AI, you must enable the Vertex AI API.
To enable the API, run the command in Cloud Shell:
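The Vertex AI API corresponds to the aiplatform.googleapis.com service, so the command is typically the following:

```bash
# Enable the Vertex AI API in the current project.
gcloud services enable aiplatform.googleapis.com
```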
If prompted, click Authorize.
To verify the objective, click Check my progress.
This lab uses a pre-built website app that consists of backend APIs built using FastAPI, and a frontend built using HTML, CSS, and JavaScript. In this task, you download the website code and review the code and file structure.
In Cloud Shell, download the website code archive from Cloud Storage:
Extract the contents of the archive:
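The exact archive location is provided in the lab; as a sketch, the two steps above look something like the following, where the bucket path and archive name are placeholders rather than the lab's actual values:

```bash
# Placeholder bucket path and archive name -- use the values given in the lab.
gsutil cp gs://<lab-provided-bucket>/genai-website-mod-app.tgz .
tar -xzvf genai-website-mod-app.tgz
```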
To view the website code and file structure, in the Cloud Shell menu bar, click Open Editor.
In the navigation bar of the Cloud Shell Editor, expand the genai-website-mod-app folder.
This folder contains all the files needed to build and deploy the website app. Here is an overview of the files and their purpose:
File/Folder | Description |
---|---|
Dockerfile | Dockerfile used to build the website application container that is deployed to Cloud Run |
config.toml | Configuration file that supplies variables to the application |
main.py | Main FastAPI entry point to the website application |
models/ | Folder containing the data models used by the website application |
routers/ | FastAPI API routers for different application functionality |
static/ | Static website assets, such as CSS, images, and JavaScript files |
templates/ | Jinja templates for the website application pages |
utils/ | Utility modules for the website application |
views/ | View implementations of the website application |
In this task, you implement search capability for your website by creating a search application in Vertex AI to search unstructured data such as blog posts.
In the Google Cloud console, click the Navigation menu (), and then select AI Applications.
Click Continue and activate the API.
If you are automatically redirected to the Create App page, go to the next step. Otherwise, click New App.
On the Create App page, under Search for your website, click Create.
On the Configuration page, configure a generic search app according to these settings, leaving the remaining settings as their defaults:
Property | Value (type or select) |
---|---|
Your app name | my-search-app |
External name of your company or organization | my-company |
Location of your app | global (Global) |
Click Continue.
Click Create Data Store.
On this page, you configure your search app with your own data source to be used in your website search results.
Select Cloud Storage.
With Folder selected as the default, click Browse.
To view the contents of the Cloud Storage bucket, click the bucket name.
Select the blog_posts folder, and then click Select.
The gs:// URI to the folder is populated.
Click CONTINUE.
For the data store name, type my-data-store.
Click Create.
Click CREATE to create a search application.
To navigate to the data page for the app, click Data in the AI Applications navigation menu.
The details of your app's data store are displayed.
AI Applications now starts ingesting the blog post HTML data from your Cloud Storage bucket for your search app.
To view the status of the data ingestion, on the Data page, click the Activity tab.
The Status column indicates the current status. Once the import process is completed, the column indicates Import completed.
To verify that the documents were imported successfully, click the Documents tab.
You can preview the search app by testing its functionality in AI Applications.
In the AI Applications navigation menu, click Preview.
In the search box, type What is dollar cost averaging and how can it help me? and press Enter.
The app generates a response explaining dollar cost averaging and provides excerpts and links to the relevant files that were imported from Cloud Storage.
To verify the objective, click Check my progress.
With the search app created, you can now integrate the app with your website or application. This lab uses the search API to make calls and receive responses which are displayed on the site. You can also embed a search widget into your website that automatically provides a search bar and an expandable search interface. To learn more about this option, follow the links at the end of the lab to view the documentation.
In this task, you configure the website code to integrate with the search app that you created in the previous task. You then deploy the website application to Cloud Run for testing.
The website application is built using FastAPI, which is a web framework for building APIs in Python. The genai-website-mod-app/routers folder contains the router API implementations for various website functionalities such as search.
In the Cloud Shell Editor, navigate to the genai-website-mod-app/routers folder, and open the file vertex_search.py.
This file contains the code that implements the search API calls using the discoveryengine module from the Google Cloud client library for Python. The code also uses tomllib, the Python standard library module for parsing TOML configuration files.
View the code in the function trigger_first_search().
This function sets up the call to the Discovery Engine API using the path projects/{project_id}/locations/{datastore_location}/collections/default_collection/dataStores/{datastore_id}, which contains path parameters.
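As a minimal sketch (not the lab's exact code), a search call that uses the discoveryengine client with these path parameters looks roughly like the following; the serving config name default_config and the page size are assumptions:

```python
# Minimal sketch of a Vertex AI Search (Discovery Engine) query.
# The serving config name and page size are assumptions, not the lab's values.
from google.cloud import discoveryengine_v1 as discoveryengine


def search_datastore(project_id: str, datastore_location: str, datastore_id: str, query: str):
    client = discoveryengine.SearchServiceClient()
    # Build the serving config path from the data store path shown above.
    serving_config = (
        f"projects/{project_id}/locations/{datastore_location}"
        f"/collections/default_collection/dataStores/{datastore_id}"
        "/servingConfigs/default_config"
    )
    request = discoveryengine.SearchRequest(
        serving_config=serving_config,
        query=query,
        page_size=5,
    )
    return client.search(request=request)
```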
To provide values for the path parameters, in the genai-website-mod-app folder, edit the file config.toml.
In the [global] section, replace the values of the configuration properties as indicated:
Section | Property | Value |
---|---|---|
global | project_id | |
global | location | |
global | datastore_id | See next step |
Replace the value of the datastore_id configuration property with the value of your search app's data store ID:
a. To obtain the value of datastore_id, navigate to AI Applications in the Google Cloud console, and select Data.
b. Copy and paste the Data store ID value of my-data-store into the config.toml file.
Replace additional configuration properties in the relevant config sections as indicated:
Section | Property | Value |
---|---|---|
imagen | bucket_name | |
blog | image_bucket | |
blog | blog_bucket | |
Save your changes to the file.
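After these edits, the [global] section looks something like the following sketch; the values shown are placeholders, and your project ID, location, and data store ID will differ:

```toml
[global]
project_id = "your-lab-project-id"          # placeholder: use your lab's project ID
location = "us-central1"                    # placeholder: use the location provided in the lab
datastore_id = "my-data-store_1234567890"   # placeholder: copy the Data store ID from AI Applications > Data
```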
Cloud Run is a managed compute platform that lets you run application containers on top of Google's scalable infrastructure.
In this task, you build the website application and deploy it to Cloud Run. You also test the search functionality that you integrated into the website.
Make sure you are in the website application directory:
Set environment variables for the project ID, region, and website application service:
To build and deploy your app to Cloud Run, run the command:
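The lab provides the exact commands for these steps; a representative sketch of the directory change, environment variables, and deploy command is shown below, where the region and service name are assumptions:

```bash
# Placeholder values -- use the project ID, region, and service name from the lab.
cd ~/genai-website-mod-app

export PROJECT_ID=$(gcloud config get-value project)
export REGION=us-central1
export SERVICE_NAME=genai-website-mod

# Build the container from the Dockerfile and deploy it as a Cloud Run service.
gcloud run deploy $SERVICE_NAME \
  --source=. \
  --project=$PROJECT_ID \
  --region=$REGION \
  --allow-unauthenticated
```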
To create the Artifact Registry Docker repository, type Y when prompted.
After the service is deployed, a URL to the service is generated in the command output.
To test your app on Cloud Run, navigate to the website application's Cloud Run service URL in a separate browser tab or window.
In the search box, type What is dollar cost averaging and how can it help me? and press Enter.
Verify that the search results are returned and displayed on the website.
Ask a follow-up question by typing Can you use dollar cost averaging with ETFs? in the search box and press Enter.
Verify that the search results include an answer to the follow-up question, along with relevant links to the blog posts.
To verify the objective, click Check my progress.
Google's generative AI tools can be used to create and edit website copy or content. In this task, acting as a website content editor, you use these tools to update the text and image content on the website used in this lab.
The website application uses the Imagen API in Vertex AI to generate and update images.
In the Cloud Shell Editor, open the config.toml configuration file.
Review the configuration properties in the [imagen] section. This section defines properties for the model to use for image generation and captioning, along with some additional properties.
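The property names below are illustrative only (check config.toml for the actual names); this sketch simply shows the kinds of settings that the [imagen] section holds:

```toml
[imagen]
# Illustrative property names and values -- the lab's config.toml may differ.
image_generation_model = "imagegeneration@006"   # model used to generate images
image_caption_model = "imagetext@001"            # model used to caption images
bucket_name = "your-images-bucket"               # Cloud Storage bucket for generated images
```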
In the Cloud Shell Editor, open the file routers/vertex_imagen.py.
The different image captioning and modification functions, along with their API routes, are defined in this file.
The functions are implemented in the utils/imagen.py file. Open this file in the Cloud Shell Editor.
View the generate_image() function:
This function uses the vertexai.preview.vision_models.ImageGenerationModel class from the vertexai package in the Python SDK.
This function first loads the image generation model, and then generates an image by invoking the generate_images() function on the model, passing in your text prompt and other parameters.
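A minimal sketch of this pattern (not the lab's exact implementation; the model name and parameter values are assumptions) looks like this:

```python
# Sketch of an image-generation helper using the Imagen model in the Vertex AI SDK.
# The model name and parameters are assumptions, not the lab's exact values.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel


def generate_image(prompt: str, project_id: str, location: str):
    vertexai.init(project=project_id, location=location)
    # Load the image generation model named in the [imagen] config section.
    model = ImageGenerationModel.from_pretrained("imagegeneration@006")
    # Generate a single image from the text prompt.
    images = model.generate_images(prompt=prompt, number_of_images=1)
    return images[0]
```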
Let's update an image on one of the blog posts on the website.
In the top-right corner of your Cymbal Investments website, click All Blogs.
A page with six blog posts is displayed on the website.
Click the link to view the first blog post: Unleashing the Techie Within: A Journey to FIRE.
The blog post page contains a header, image, and paragraphs of text.
To edit the page contents, in the bottom right, click Edit ().
Hover over the image. Then, to the left of the image, click Click to tune ().
From the Click to tune menu, select Generate.
Under the image caption, for Prompt, type An image of a retired man and woman sitting on a beach enjoying the sunset. Click Generate.
Scroll to the top of the page, and wait for the image to be generated.
A new image is generated, which is then uploaded to your Cloud Storage images bucket. The blog post page is updated with this new image.
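The upload step is handled by the application; as a sketch (not the lab's exact code), writing a generated image file to the images bucket with the Cloud Storage client looks like this:

```python
# Sketch of uploading a generated image file to the Cloud Storage images bucket.
from google.cloud import storage


def upload_image(local_path: str, bucket_name: str, blob_name: str) -> str:
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(blob_name)
    blob.upload_from_filename(local_path)
    # Return the gs:// URI that the blog page can reference.
    return f"gs://{bucket_name}/{blob_name}"
```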
You can translate entire web pages or just pieces of inline text using text generation models. In this subtask, you translate inline text on the website blog page.
In the Cloud Shell Editor, open the file routers/vertex_llm.py.
The different web page editing and translation functions, along with their API routes, are defined in this file.
Scroll to the bottom of the source file and view the code for the ai_translate_inline() function.
This function builds a prompt using the ai_translate_inline_prompt configuration property, the user-selected text, and the user-specified target language. It then invokes the llm_generate_gemini() function to generate a response from the model.
Here is the value of the ai_translate_inline_prompt configuration property from the config.toml file:
To view the llm_generate_gemini() function, open the file utils/vertex_llm_utils.py.
This function uses the vertexai.generative_models.GenerativeModel class from the vertexai package in the Python SDK.
This function first loads the gemini-2.0-flash model, and then generates a response by invoking the generate_content() function on the model, passing in your text prompt and other parameters.
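A minimal sketch of such a helper (not the lab's exact implementation; the generation parameters shown are assumptions) is:

```python
# Sketch of a Gemini text-generation helper using the Vertex AI SDK.
# The generation parameters are assumptions; the lab reads its values from config.toml.
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel


def llm_generate_gemini(prompt: str, project_id: str, location: str) -> str:
    vertexai.init(project=project_id, location=location)
    model = GenerativeModel("gemini-2.0-flash")
    response = model.generate_content(
        prompt,
        generation_config=GenerationConfig(temperature=0.2, max_output_tokens=1024),
    )
    return response.text
```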
Make sure you are in the edit mode of your website blog page. If not, in the bottom right, click Edit ().
Select any of the text paragraphs, and then click the Translate tool:
In the language prompt field, type French, and then click Send.
After a few seconds, the paragraph text is translated into the language you specified and replaced inline on the page.
You can also use a generative model to refine your website text content.
In the file routers/vertex_llm.py, view the ai_refine_text() function.
This function builds a prompt using the ai_refine_prompt configuration property, the user-selected text from the website content, and user-supplied instructions.
Here is the value of the ai_refine_prompt configuration property from the config.toml file:
Make sure you are in the edit mode of your website blog page. If not, in the bottom right, click Edit ().
Select any of the text paragraphs, and then click the Refine Text tool:
To change the tone of the selected text to be more formal and impactful, in the style box, type formal impactful. This value is appended to the prompt as the REFINE_PROMPT string before invoking the model.
Click Send.
After a few seconds, a response is generated from the model and displayed on the page in an enclosed box below the original text.
View the updated paragraph text and click Replace.
To view the final value of the prompt, in the Google Cloud console Navigation menu (), select Logging > Logs Explorer.
To highlight the relevant log entries, in the results menu bar, click Actions > Highlight in results.
For Highlight in results, type REFINE_PROMPT.
To verify the objective, click Check my progress.
When you have completed your lab, click End Lab. Qwiklabs removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
In this lab, you:
- Created a search app in Vertex AI Search and integrated it with a website.
- Built and deployed the website application on Cloud Run.
- Used generative AI tools to update the website's text and image content.
With these capabilities, you can build a process to update your website content, and use external storage such as Cloud Storage for reviewing and publishing your content.
Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.