Integrate an AI Agent with a Flutter App Using Vertex AI Agent Builder
- Overview
- Objectives
- Setup
- Task 1. Create a Vertex AI Search data store and search app
- Task 2. Build and deploy the backend
- Task 3. Configure Firebase
- Task 4. Create a Reasoning Engine AI agent
- Task 5. Enhance the backend app
- Task 6. Test drive Gemini 1.5 Pro
- Task 7. Deploy the Flutter front end app
- Task 8. Test the Flutter app
- End your lab
- Congratulations!
Overview
In this lab, you integrate a Vertex AI Agent with a Flutter app. Flutter is used as the client app framework, Vertex AI Search is used as a vector DB, and Reasoning Engine is used to build and deploy an agent with LangChain on Vertex AI. The agent uses Gemini, a family of highly capable large language models (LLMs), to generate AI responses to text and image prompts.
The lab is pre-provisioned with VSCode as the IDE using code-server, along with the Flutter and Dart extensions that are required to run the Flutter app. The lab also includes fwr, a tool that you use to serve the Flutter app as a web application which you access using a browser.
This lab is intended for developers of any experience level who contribute to building apps but might not be familiar with cloud application development. It helps to have some experience in Python and to be familiar with the Flutter framework. You don't need to know Flutter to perform the tasks in this lab, though you will review some of the Flutter app code and test the app's functionality.
Objectives
In this lab, you perform the following tasks:
- Create a search data store and search app using Vertex AI Agent Builder in the Google Cloud console.
- Deploy a Reasoning Engine agent using Vertex AI Workbench.
- Use a Python app that integrates with Vertex AI Search and the Reasoning Engine agent.
- Deploy the app to Cloud Run and use it as the backend for a Flutter app.
- Set up a Firebase project and connect it to the Flutter app.
The main components used in this lab are the Flutter client app, Vertex AI Search, the Reasoning Engine agent with Gemini, a backend service on Cloud Run, and Firebase.
Setup
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
1. Sign in to Qwiklabs using an incognito window.
2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time. There is no pause feature. You can restart if needed, but you have to start at the beginning.
3. When ready, click Start lab.
4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.
5. Click Open Google Console.
6. Click Use another account and copy/paste credentials for this lab into the prompts. If you use other credentials, you'll receive errors or incur charges.
7. Accept the terms and skip the recovery resource page.
Task 1. Create a Vertex AI Search data store and search app
In this task, you implement search capability for your Flutter app by creating a search data store and search app in Vertex AI Agent Builder.
Create a search data store
1. In the Google Cloud console, click the Navigation menu (), and then select Agent Builder.
   If Agent Builder is not listed in the navigation menu, click View All Products. In the All Products page, scroll to the section on Artificial Intelligence, and then click Agent Builder.
2. Click Continue and activate the API.
3. In the left pane, click Data Stores.
4. On the Data Stores page, click Create Data Store.
   On this page, you configure your data source to be used for search results. The lab is pre-provisioned with a Cloud Storage bucket that contains a .csv file which has data about items from the Google merchandise shop.
5. To select Cloud Storage as the source of data, under Cloud Storage, click Select.
6. For the kind of data being imported, select Structured FAQ data for a chat application (CSV).
7. To select a folder or file to import, click File.
8. To provide the Cloud Storage URL to the CSV file, click Browse.
9. To view the contents of the Cloud Storage bucket, click ().
10. Select the goog_merch.csv file, and then click Select.
    The gs:// URI to the folder is populated.
11. Click Continue.
12. For the data store name, type goog-merch-ds.
13. Click Create.
    Agent Builder creates the data store and starts ingesting the data from the CSV file.
14. On the Data Stores page, click the name of your newly created data store.
15. The Documents tab displays a list of documents that were imported. To view the data associated with a document, click View Document.
To verify the objective, click Check my progress.
Create a search app
To use the search data store, you connect it to a search app in Vertex AI Agent Builder.
1. In the Cloud console, click Agent Builder, then click Apps.
2. Click Create App.
3. For the App type, select Search.
4. On the Search app configuration page, configure a generic search app with these settings, leaving the remaining settings as their defaults:
   Property | Value (type or select)
   Content | Generic
   Enterprise edition features | Disable
   Your app name | gms-search-app
   External name of your company or organization | gms-company
   Location of your app | global (Global)
5. Click Continue.
6. A list of existing data stores is displayed. Select the goog-merch-ds data store that you created in the previous subtask.
7. To create the search app, click Create.
   The search app is created and the data store is connected to the app.
8. To test the search app, in the Agent Builder navigation menu, click Preview.
9. In the Search field, type dino.
   A list of related search results is displayed from the documents that were imported into the data store.
   If you see an error indicating that Search preview isn't ready yet, wait a few minutes before retrying. If you do not want to wait, you can continue with the next task in the lab.
To verify the objective, click Check my progress.
Task 2. Build and deploy the backend
The backend of our Flutter app will run as a Cloud Run service on Google Cloud. The backend service integrates with the Vertex AI search app that you created in the previous step to generate search responses from the merchandise shop data. The backend also integrates with a Reasoning Engine agent to access Gemini for generative AI content in response to queries from the Flutter app.
In this task, you build and deploy the backend Python app to Cloud Run.
Configure and review the backend app
1. An IDE based on VS Code is pre-provisioned for this lab. To access the IDE, copy the IDE Service URL from the lab's Qwiklabs credentials panel and paste it into a new incognito browser window.
   The IDE service URL is the endpoint of a Cloud Run service that is pre-provisioned for this lab and proxies requests to the code-server VM. The IDE is built using Code Server and includes the Flutter, Dart, and Vim extensions.
2. Open a terminal in the IDE. In the IDE navigation menu (), click Terminal > New Terminal.
   Note: Run the commands in the steps below in the IDE terminal window.
3. The initial version of the backend app and related files are pre-provisioned for the lab. Copy the initial version of the backend app folder and its contents from Cloud Storage:
   gcloud storage cp -r gs://cloud-training/OCBL453/photo-discovery/ag-web ~
   If prompted, click Allow to paste text and images from the clipboard, and press Enter.
4. To list the contents of the folder, in the IDE terminal window, run:
   ls ~/ag-web/app
   The ag-web/app folder contains the application source code and other files needed to build and deploy the backend app to Cloud Run.
5. Set the PROJECT_ID, LOCATION, and STAGING_BUCKET configuration for the app:
   sed -i 's/GCP_PROJECT_ID/{{{project_0.project_id|set at lab start}}}/' ~/ag-web/app/app.py
   sed -i 's/GCP_REGION/{{{project_0.default_region|set at lab start}}}/' ~/ag-web/app/app.py
   sed -i 's/GCS_BUCKET/{{{project_0.startup_script.lab_bucket_name|set at lab start}}}/' ~/ag-web/app/app.py
6. Configure the backend app to use the Vertex AI Search data store that you created earlier.
   In the command below, replace the string {YOUR_SEARCH_DATA_STORE_ID} with the value of your Vertex AI Search data store ID. Make sure to remove the curly braces in the sed command.
   To get the value of the search data store ID, in the Cloud console, navigate to Agent Builder > Data Stores, and then click the name of the search data store that you created earlier. Copy the value of the Data store ID, which is the same as the search engine ID.
   sed -i 's/SEARCH_ENGINE_ID/{YOUR_SEARCH_DATA_STORE_ID}/' ~/ag-web/app/app.py
7. To view the code in the IDE, in the IDE navigation menu, click (), and then click Open Folder.
8. Select the IDE-DEV/ag-web/ folder from the list, and then click Ok.
9. To trust the authors of the code, click Yes, I trust the authors.
10. In the Explorer pane, expand the app folder, and then click app.py to open the file in the editor.
    The backend app code does the following:
    - Initializes Vertex AI using your lab Google Cloud project ID, region, and Cloud Storage bucket.
    - The search_gms() function uses the discoveryengine.googleapis.com API data stores endpoint to initiate a search request. The data store ID that you created earlier is used in the URL. The function uses the user-supplied search query to perform the search on the contents of the data store, and formats the results into a JSON response.
    - The app uses flask for routing calls to the individual functions. The / endpoint is the default page that is used to verify that the app loads successfully, while the /ask_gms endpoint invokes the search_gms() function that uses Vertex AI Search. (A simplified sketch of this flow is shown after this list.)
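For orientation before you open app.py, here is a minimal sketch of how such a Flask route can call the Vertex AI Search (Discovery Engine) data store search endpoint. This is an illustrative approximation, not the lab's exact code: the placeholder values and the get_access_token() helper are assumptions, and the pre-provisioned app.py may structure things differently.

import google.auth
import google.auth.transport.requests
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholders; in the lab these values are set by the sed commands above.
PROJECT_ID = "your-project-id"
DATASTORE_ID = "your-search-data-store-id"

SEARCH_URL = (
    "https://discoveryengine.googleapis.com/v1/projects/"
    f"{PROJECT_ID}/locations/global/collections/default_collection/"
    f"dataStores/{DATASTORE_ID}/servingConfigs/default_search:search"
)

def get_access_token() -> str:
    # Authorize the REST call with Application Default Credentials.
    creds, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"])
    creds.refresh(google.auth.transport.requests.Request())
    return creds.token

@app.route("/ask_gms", methods=["GET"])
def ask_gms():
    # Pass the user-supplied query to the data store search endpoint.
    query = request.args.get("query", "")
    resp = requests.post(
        SEARCH_URL,
        headers={"Authorization": f"Bearer {get_access_token()}"},
        json={"query": query, "pageSize": 5},
    )
    # Return the search results to the caller as JSON.
    return jsonify(resp.json())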
Build and deploy the app to Cloud Run
A deploy script is available to build and deploy the backend app to Cloud Run.
1. Open a terminal window in the IDE, and change to the backend app folder:
   cd ~/ag-web/app
2. Authenticate to Google Cloud from the gcloud CLI:
   gcloud auth login
3. To continue, type Y.
4. To launch the sign-in flow, press Control (for Windows and Linux) or Command (for macOS) and click the link in the terminal.
5. If you are asked to confirm the opening of the external website, click Open.
6. Click the lab student email address.
7. When you're prompted to continue, click Continue.
8. To let the Google Cloud SDK access your Google Account and agree to the terms, click Allow.
   Your verification code is displayed in the browser tab.
9. Click Copy.
10. Back in the IDE terminal window, where it says Enter authorization code, paste the code and press Enter.
    You're now signed in to Google Cloud.
11. Make the deploy.sh script executable, and then run the script to deploy the app in the specified Cloud Run region:
    chmod a+x deploy.sh; ./deploy.sh {{{project_0.project_id|set at lab start}}} {{{project_0.default_region|set at lab start}}}
    After the app is built and deployed to Cloud Run successfully, the app's endpoint URL is displayed at the end of the script output. The URL is generated by the gcloud run deploy command that is executed in the script.
    This command takes time to execute, so wait until it completes before proceeding to the next task.
Test the backend app
You test the app functionality by accessing its Cloud Run endpoint.
1. Copy the app's endpoint URL that was generated in the previous step, and navigate to that URL in a new browser tab.
2. When the app loads, the home page displays: Welcome to the ag-web backend app.
3. In the browser URL bar, append the path below to the end of the URL:
   /ask_gms?query=dino
4. Verify that the app responds with results from the Vertex AI Search data store that you created earlier. (An optional scripted version of this check is shown after these steps.)
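If you prefer to script this check instead of using the browser, a minimal sketch using the Python requests library is shown below. The BACKEND_URL value is a placeholder; substitute the endpoint URL printed by the deploy script.

import requests

# Placeholder; use your own Cloud Run endpoint URL from the deploy script output.
BACKEND_URL = "https://ag-web-xxxxxxxxxx-uc.a.run.app"

resp = requests.get(f"{BACKEND_URL}/ask_gms", params={"query": "dino"}, timeout=60)
print(resp.status_code)   # expect 200
print(resp.text)          # JSON results from the Vertex AI Search data store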
To verify the objective, click Check my progress.
Task 3. Configure Firebase
To invoke the Gemini API, the front end Flutter app uses the Vertex AI for Firebase Dart SDK. This SDK is available as the firebase_vertexai package on pub.dev, the official package repository for Dart and Flutter apps.
Firebase is a backend app development platform from Google that provides services for various platforms that include Android, iOS, and others.
In this task, you set up Firebase, and sign in to the Firebase CLI. Later in the lab you'll connect your Flutter app to Firebase.
Set up Firebase
To use the SDK, you first set up a Firebase project.
1. In an incognito window tab, access the Firebase console at https://console.firebase.google.com.
2. Click Create a project.
3. To add Firebase to your lab's existing Google Cloud project, at the bottom of the page, click Add Firebase to Google Cloud project.
4. Click Select a Google Cloud project, and then select the lab project from the list.
5. Accept the terms, and then click Continue.
6. To use the default billing plan, click Confirm plan.
7. Click Continue.
8. Disable Google Analytics for this project, and then click Add Firebase.
9. Wait until your Firebase project is ready, and then click Continue.
Firebase is now ready to use with your lab Google Cloud project.
Sign in to Firebase
1. In your IDE terminal window, sign in to the Firebase CLI:
   firebase login --no-localhost
2. To allow Firebase to collect CLI and Emulator reporting information, type Y, and press Enter.
3. Note your session ID, and then follow the steps to launch the sign-in flow. Press Control (for Windows and Linux) or Command (for macOS) and click the URL in the terminal.
4. When prompted to open the external website, click Open.
5. Click your lab student email ID.
6. Click Continue.
7. To grant relevant permissions to the Firebase CLI, click Allow.
8. Click Yes, I just ran this command.
9. Verify your session ID, and then click Yes, this is my session ID.
10. To copy the authorization code, click Copy.
11. Paste the authorization code where requested in the IDE terminal window, and press Enter.
You are now signed in to the Firebase CLI.
Task 4. Create a Reasoning Engine AI agent
An AI Agent is an application that uses the power of large language models (LLMs) for reasoning and orchestration with external tools to achieve its goal. Vertex AI Agent Builder is a suite of products and tools from Google that helps you build AI agents by connecting them to your trusted data sources.
Vertex AI Reasoning Engine (also known as LangChain on Vertex AI) helps you build and deploy a reasoning agent with Vertex AI Agent Builder. LangChain is a popular OSS tool used to build chatbots and RAG systems.
Reasoning Engine lets developers use Function Calling to map the output from LLMs (Gemini) to Python functions. Reasoning Engine integrates closely with the Python SDK for the Gemini model in Vertex AI, and is compatible with LangChain and other Python frameworks.
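To make the pattern concrete, here is a minimal, illustrative sketch of a LangChain agent on Vertex AI that uses a plain Python function as a tool. It is not the lab notebook's code; the function name, docstring, and project values are assumptions, and the notebook's tools instead call your backend's Vertex AI Search endpoint and Wikipedia.

import vertexai
from vertexai.preview import reasoning_engines

# Placeholder project settings; the notebook sets these from your lab values.
vertexai.init(project="your-project-id", location="us-central1")

def get_product_info(query: str) -> str:
    """Look up Google merchandise shop products for the given query."""
    # Illustrative stub; the lab's tool calls the backend app's /ask_gms endpoint.
    return f"(search results for: {query})"

# Gemini function calling maps the model's tool requests to the Python function.
agent = reasoning_engines.LangchainAgent(
    model="gemini-1.5-pro",
    tools=[get_product_info],
)

# Query the agent locally before deploying it to the Reasoning Engine runtime.
response = agent.query(input="Where can I buy a dino pin?")
print(response["output"])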
In this task, you use Vertex AI Workbench with a Jupyter notebook to deploy a Reasoning Engine agent with Python functions. The agent will be used in our generative AI backend application in the next task in this lab.
Create a Vertex AI Workbench instance
Vertex AI Workbench is a Jupyter notebook-based development environment for the entire data science and machine learning workflow. You can interact with Vertex AI and other Google Cloud services from within a Vertex AI Workbench instance's Jupyter notebook. For example, Vertex AI Workbench lets you access and explore your data from within a Jupyter notebook by using BigQuery and Cloud Storage integrations.
Vertex AI Workbench instances come with a preinstalled suite of deep learning packages, including support for the TensorFlow and PyTorch frameworks.
Vertex AI Workbench notebooks provide a flexible and scalable solution for developing and deploying ML models on Google Cloud.
1. To create a Workbench instance, in the Cloud console, click the Navigation menu (), and then select Vertex AI > Workbench.
2. If prompted in the Cloud console, click to enable the Notebooks API.
3. Make sure that Instances is selected in the Instances tab, and then click Create New.
4. In the New instance page, for Name, type my-instance.
5. Leave the remaining settings as their defaults, and then click Create.
   Your new instance spins up in the instances section. Wait for the instance to be created before proceeding to the next step. A green check appears next to the instance when it is ready to use.
6. Once the instance is ready, click Open Jupyterlab.
7. To launch a Python 3 Jupyter notebook, click the Python 3 notebook.
Copy a notebook file
A notebook that builds and deploys a Reasoning Engine agent has been pre-provisioned for this lab.
1. To copy the notebook to your JupyterLab instance, copy this code into the first cell in your new notebook:
   !gcloud storage cp gs://cloud-training/OCBL453/photo-discovery/ag-web/ag_setup_re.ipynb .
2. Select the cell and run it by clicking Run in the cell menu.
   The command copies the notebook from Cloud Storage to your JupyterLab instance. After the command completes, the notebook file is listed under the top-level root folder.
3. To open the notebook, double-click the notebook file in the folder listing. A separate tab is created with the content of the notebook.
Build and deploy a Reasoning Engine agent
1. Run all the code cells in the notebook in order from the beginning. To run a cell, select it by clicking anywhere in the cell, and then click Run in the cell menu or in the top notebook menubar.
   Note: When running the command in a cell, wait for the command to complete before moving on to the next cell. When a command completes execution, the asterisk (*) in the cell number field is replaced by the number of the cell.
   Note: Before running a cell, check the notes below for any cell-specific instructions or steps to follow. These instructions are marked with the CELL INSTR prefix.
   a. [CELL INSTR] Restart current runtime:
   If prompted, in the Kernel Restarting dialog, click Ok.
   b. [CELL INSTR] Set Google Cloud project information and initialize Vertex AI SDK:
   Before running this cell to initialize the Vertex AI SDK, update the configuration values for your lab project_id, location, and staging bucket:
   PROJECT_ID = "{{{project_0.project_id|set at lab start}}}"
   LOCATION = "{{{project_0.default_region|set at lab start}}}"
   STAGING_BUCKET = "gs://{{{project_0.startup_script.lab_bucket_name}}}"
   c. [CELL INSTR] Define Tool function for Vertex AI Search:
   Before running this cell, replace the value of GOOGLE_SHOP_VERTEXAI_SEARCH_URL with the Cloud Run endpoint URL of your backend app.
   To fetch the endpoint URL, in the Cloud console, navigate to Cloud Run, and then click the name of the backend app: ag-web. Copy the value of the endpoint URL and replace it in the cell.
2. After running the cell Deploy the Agent to the Reasoning Engine runtime, wait for the command to complete and create the Reasoning Engine agent. Then, copy the Reasoning Engine ID.
   You will use this ID to configure and re-deploy the backend app in the next task.
   Note: This cell can take up to 10 minutes to complete. Wait for the final output indicating that the ReasoningEngine is created before continuing to the next step.
3. To test the successful operation of the Reasoning Engine agent, run the next two cells and observe the output.
   The Reasoning Engine agent invokes either the Wikipedia API or the Vertex AI Search data store, based on the query input and the output from the Gemini model. (A simplified sketch of the deployment and query calls follows these steps.)
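For reference, the deployment and test cells you just ran follow roughly the shape below. This is a simplified sketch under assumptions: the agent variable is the LangChain agent defined in earlier cells, and the requirements list and display name shown here are illustrative, so the notebook's exact cell may differ.

from vertexai.preview import reasoning_engines

# Deploy the locally defined agent to the managed Reasoning Engine runtime.
remote_agent = reasoning_engines.ReasoningEngine.create(
    agent,  # the LangChain agent object defined in earlier notebook cells
    requirements=[
        "google-cloud-aiplatform[langchain,reasoningengine]",
        "requests",
        "wikipedia",
    ],
    display_name="photo-discovery-agent",  # illustrative name
)

# The resource name ends in the Reasoning Engine ID used to configure the backend app.
print(remote_agent.resource_name)
# projects/PROJECT_NUMBER/locations/REGION/reasoningEngines/REASONING_ENGINE_ID

# Query the deployed agent, as the test cells do.
print(remote_agent.query(input="what is fallingwater")["output"])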
To verify the objective, click Check my progress.
Task 5. Enhance the backend app
Let's now enhance the backend app to invoke the Reasoning Engine agent that you deployed in the previous task.
The initial version of the backend app only fetched results directly from Vertex AI Search. In the new version, the app will invoke the Reasoning Engine agent, which uses output from Gemini and the agent's tools to generate a response from Vertex AI Search or Wikipedia based on the input prompt.
In this task, you update the backend app code to add an additional entry point that invokes the Reasoning Engine agent with a request query and returns the agent's response.
Update the backend app
1. In the IDE terminal window, append the new entry point code to the app.py file by running this command:

cat << EOF >> ~/ag-web/app/app.py
#
# Reasoning Engine
#
NB_R_ENGINE_ID = "REASONING_ENGINE_ID"

from vertexai.preview import reasoning_engines
remote_agent = reasoning_engines.ReasoningEngine(
    f"projects/{PROJECT_ID}/locations/{LOCATION}/reasoningEngines/{NB_R_ENGINE_ID}"
)

# Endpoint for the Flask app to call the Agent
@app.route("/ask_gemini", methods=["GET"])
def ask_gemini():
    query = request.args.get("query")
    log.info("[ask_gemini] query: " + query)
    retries = 0
    resp = None
    while retries < MAX_RETRIES:
        try:
            retries += 1
            resp = remote_agent.query(input=query)
            if (resp == None) or (len(resp["output"].strip()) == 0):
                raise ValueError("Empty response.")
            break
        except Exception as e:
            log.error("[ask_gemini] error: " + str(e))
    if (resp == None) or (len(resp["output"].strip()) == 0):
        # Too many retries; return an error message to the caller.
        return "No response received from Reasoning Engine."
    else:
        return resp["output"]
EOF
2. Configure the backend app to use the Reasoning Engine agent that you created earlier.
   Replace the string {YOUR_REASONING_ENGINE_ID} in the command below with the value of your Reasoning Engine ID that you copied from the notebook cell in the previous task, and run the command in the IDE terminal window. Make sure to remove the curly braces in the sed command.
   sed -i 's/REASONING_ENGINE_ID/{YOUR_REASONING_ENGINE_ID}/' ~/ag-web/app/app.py
3. To view the code in the IDE, in the IDE navigation menu, click (), and select the IDE-DEV/ag-web/app/app.py file.
   In the enhanced version of the backend app:
   - A handle on the remote_agent is retrieved from the Reasoning Engine runtime using the REASONING_ENGINE_ID of the agent that you created in the previous task.
   - A new /ask_gemini endpoint is defined, which invokes the ask_gemini() function.
   - The function passes the user-supplied query parameter from the request to the Reasoning Engine (remote_agent), and returns the response from the agent.
Build and re-deploy the backend app to Cloud Run
1. Change to the backend app folder:
   cd ~/ag-web/app
2. Run the script to re-deploy the app in the specified Cloud Run region:
   ./deploy.sh {{{project_0.project_id|set at lab start}}} {{{project_0.default_region|set at lab start}}}
Test the backend app
You test the app functionality by accessing its Cloud Run endpoint.
1. Copy the app's endpoint URL that was generated in the previous step, and navigate to that URL in a new browser tab.
2. When the app loads, the home page displays: Welcome to the ag-web backend app.
3. In the browser URL bar, append the path below to the end of the URL:
   /ask_gemini?query=where can I buy the dino pin
   The app responds with results from the agent, which fetched results from the Vertex AI Search data store that you created earlier:
   Chrome Dino Enamel Pin is a product sold at Google Merch Shop. The price is 7.00 USD. You can buy the product at their web site: https://shop.merch.google/product/chrome-dino-enamel-pin-ggoegcbb203299/.
4. In the browser URL bar, replace the path with:
   /ask_gemini?query=what is fallingwater
   The app responds with results from the agent, which fetched a response from the Wikipedia API that you configured in the notebook:
   Fallingwater was designed by architect Frank Lloyd Wright in 1935. It is located in southwest Pennsylvania, about 70 miles southeast of Pittsburgh. The house is famous because it was built partly over a waterfall.
Task 6. Test drive Gemini 1.5 Pro
Now that you've developed and deployed your backend app, Vertex AI Search components, and a Reasoning Engine agent, you can now begin to design and build the Flutter front end.
The Flutter app enables users to discover more information about photos which they upload in the app. The app integrates with Gemini to generate responses about the photos, and uses an AI agent to provide responses using a chat interface.
Before you develop the front end app, you need to define the structure of the responses that are returned from the Gemini 1.5 Pro model. This model is used by the AI agent that you deployed earlier in the lab.
In this task, you prompt the Gemini 1.5 Pro model, and view the model response in JSON format.
1. Gemini 1.5 Pro is available in Vertex AI Studio in the Cloud console. To access Vertex AI Studio, click the Navigation menu (), and then select Vertex AI > Overview (under Vertex AI Studio).
2. To enable the APIs that are required to use Vertex AI Studio, click Agree and Continue.
   Vertex AI Studio is a tool in the Google Cloud console that lets you quickly test and prototype generative AI models so you can leverage their capabilities in your applications.
3. On the Vertex AI Studio overview page, click Open Freeform.
4. In the configuration section, for Model, select gemini-1.5-pro-001.
5. For the Prompt, click Insert Media, and then select Import from Cloud Storage.
6. Click > next to the name of the lab bucket.
7. Select the image file fallingwater.jpg, and then click Select.
   The image is uploaded into the Prompt box.
8. To prompt the model, in the prompt box below the image, type What is this?, and then click Send ().
   The model provides a brief response correctly describing the object in the image.
9. To fetch more information from the model in JSON format, modify the text prompt:
   What is this? Provide the name and description of the object, and be as specific as possible. Also include a list of 3 follow-up questions that I can ask for more information about this object. Generate the response in JSON format with the keys "name", "description", and "suggestedQuestions".
10. Click Send ().
11. Disable the Response Markdown setting, and view the JSON response that is generated by the model:
    {
      "name": "Fallingwater",
      "description": "This is Fallingwater, a house designed by renowned architect Frank Lloyd Wright. It's built over a waterfall on Bear Run in rural southwestern Pennsylvania, and is considered a masterpiece of organic architecture, seamlessly blending with its natural surroundings.",
      "suggestedQuestions": [
        "Who commissioned Frank Lloyd Wright to design Fallingwater?",
        "What are some of the key architectural features of Fallingwater?",
        "Is Fallingwater open to the public, and if so, how can I visit?"
      ]
    }
    You now have a prompt and model response in JSON format that you can use to build your Flutter app. (A sketch of sending this same request programmatically follows the note below.)
Note: The model response you receive might differ from the response displayed above.
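If you want to reproduce this structured-JSON behavior programmatically (the Flutter app does the equivalent later through the Vertex AI for Firebase SDK), here is a minimal sketch using the Vertex AI Python SDK. The project, location, and bucket path are placeholders for your lab values.

import json
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel, Part

# Placeholders; use your lab project, region, and bucket.
vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel(
    "gemini-1.5-pro-001",
    # Ask the model to return JSON rather than Markdown text.
    generation_config=GenerationConfig(response_mime_type="application/json"),
)

prompt = (
    'What is this? Provide the name and description of the object, and be as '
    'specific as possible. Also include a list of 3 follow-up questions that I can '
    'ask for more information about this object. Generate the response in JSON '
    'format with the keys "name", "description", and "suggestedQuestions".'
)
image = Part.from_uri("gs://your-lab-bucket/fallingwater.jpg", mime_type="image/jpeg")

response = model.generate_content([image, prompt])
metadata = json.loads(response.text)  # keys: name, description, suggestedQuestions
print(metadata["name"])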
Task 7. Deploy the Flutter front end app
Now that you have the structure of the data that is returned by the model, you can build the front end app.
Flutter is an open-source multi-platform app development framework. It lets you write one codebase that runs on Android, iOS, Web, macOS, Windows, and Linux. Flutter apps are developed in the Dart programming language.
For the purposes of this lab, you will use web as the target platform, so that the Flutter app can be deployed as a web application and accessed using a browser.
In this task, you will review the Flutter app code, and build and run the app as a web application.
Review the Flutter app codebase
One of the app functions lets users upload a photo of a tourist attraction, and then queries Gemini to fetch information about the photo. For the purpose of this lab, the Flutter app has been developed and made available in Cloud Storage.
1. In your IDE terminal, download the Flutter app code:
   gcloud storage cp -r gs://cloud-training/OCBL453/photo-discovery/app ~
2. To fetch the app project dependencies, in the IDE terminal window, run:
   cd ~/app; flutter pub get
3. To view the code in the IDE, in its navigation menu (), click File, and then select Open Folder.
4. Select the /home/ide-dev/app/ folder from the list, and then click Ok.
5. To trust the authors of the files in the folder, click Yes, I trust the authors.
   Here is a high-level overview of the app folder contents:
   Folder/File | Description
   app | Project root folder that contains the sub-folders and files that make up the Flutter app.
   android/ios/linux/macos/web/windows | Contains platform-specific files that are needed to run the Flutter app on each supported platform.
   lib | Contains the core application Dart files, including the functionality, routing, data models, and user interface.
   test | Contains Dart tests that are used to test the app's widgets.
   pubspec.yaml | Contains the app dependencies, Flutter version, and other configuration settings.
   analysis_options.yaml | Contains configuration settings for static analysis of the app.
   Let's review some of the files in these folders.
6. In the Explorer menu, click the /app/lib/models folder, then click the metadata.dart file to open it.
   The metadata.dart file contains the data model that is used by the Flutter app:
   class Metadata {
     String name = '';
     String description = '';
     List suggestedQuestions = [];
     // more code follows ...
   }
   The Metadata object attributes map to the JSON attributes that are returned in the response from the Gemini API. The fromJson method initializes the object attributes when it is invoked.
7. In the Explorer menu, click the /app/lib/main.dart file to open it.
   This file is the application entry point for the Flutter app. After performing initialization tasks, the code in this file builds the app's UI using Material components, and sets the app's title, theme, and routing configuration.
   A Material app starts with the MaterialApp widget, which builds a number of useful widgets at the root of your app, including a Navigator that manages routing to the various screens in your app.
8. To view the app's routing configuration, in the Explorer menu, click the /app/lib/functionality/routing.dart file to open it.
   The code in this file defines the /, /chat, and /settings routes for the app.
   Flutter uses the concept of widgets, which are declarative classes used to build the app. In Flutter, you create a layout by composing widgets to build more complex widgets that form a widget tree. To learn more about building UIs in Flutter, view the documentation.
9. Let's review the code that is responsible for some of the app's UI screens and functionality. In the Explorer menu, click the /app/lib/ui/screens/quick_id.dart file to open it.
   This file contains the classes that build the app's widgets. It includes the GenerateMetadataScreen class that builds the app's main page and is invoked from the / path that is defined in the routing.dart file. The UI lets the user upload an image from their computer or mobile device, or take a photo with the device camera.
   Other UI screens used in the app, for settings and the chat page, are implemented in the settings.dart and chat.dart files in the app/lib/ui/screens/ folder.
   During app initialization, the Vertex AI for Firebase Dart SDK is used to fetch an instance of the Gemini generative AI model that the app will use. This is implemented in the _GenerateMetadataScreenState class:
   void initState() {
     super.initState();
     model = FirebaseVertexAI.instance.generativeModel(
       model: geminiModel,
       generationConfig: GenerationConfig(
         temperature: 0,
         responseMimeType: 'application/json',
       ),
     );
   }
   After the user selects an image, the app invokes the Vertex AI Gemini API with the image and a text prompt. The text prompt is the same one that you tested in an earlier task when test driving the Gemini 1.5 Pro model.
   The _sendVertexMessage() method contains the code that sends the prompt. Here is partial code from the method:
   Future _sendVertexMessage() async {
     ...
     final messageContents = Content.multi(
       [
         TextPart(
             'What is the subject in this photo? Provide the name of the photo subject, and description as specific as possible, and 3 suggested questions that I can ask for more information about this object. Answer in JSON format with the keys "name", "description" and "suggestedQuestions".'),
         DataPart('image/jpeg', _image!),
       ],
     );
     var response = await model.generateContent([messageContents]);
     var text = response.text;
     if (text == null) {
       _showError('No response from API.');
       return;
     } else {
       var jsonMap = json.decode(text);
       if (mounted) {
         context.read().updateMetadata(Metadata.fromJson(jsonMap));
       }
     }
     ...
   }
   The JSON response from the model is then decoded to extract the name, description, and suggestedQuestions, which are saved locally and displayed in the app UI.
Connect the Flutter app to Firebase
1. Change to the app root project folder, and activate the flutterfire_cli package:
   dart pub global activate flutterfire_cli
2. To register your Flutter app with Firebase, run:
   flutterfire configure --project={{{project_0.project_id|set at lab start}}}
   After executing the command, a firebase_options.dart configuration file is added to the lib folder of your Flutter project.
3. To configure the supported platforms for the app, press space to deselect all platforms except web. Scroll through the platforms using the arrow key. Then press Enter.
Your Flutter web app is now registered for use with Firebase.
Configure the Flutter app chat feature
The Flutter app has a chat feature that lets users find out more information about the image that they uploaded in the app, or request information from other external data sources that might have more updated information. To do this, we need a way to connect or ground the Gemini model to those data sources.
To implement this feature, the Flutter app integrates with the Reasoning Engine agent that you deployed earlier in this lab. Integration is achieved by connecting the Flutter app to the endpoint of the backend app that you also built and deployed to Cloud Run.
1. First, fetch the Cloud Run endpoint hostname of the backend app, and store the value in an environment variable. In the IDE terminal window, run:
   BACKEND_APP_HOST=$(gcloud run services describe ag-web --region {{{project_0.default_region|set at lab start}}} --format 'value(status.url)' | cut -d'/' -f3); echo $BACKEND_APP_HOST
2. For the Flutter app, the Gemini model and backend app endpoint are configured in the ~/app/lib/config.dart file. Update the configuration entry in the file:
   sed -i "s/cloudRunHost = .*/cloudRunHost = '$BACKEND_APP_HOST';/" ~/app/lib/config.dart
3. To verify the change, in the IDE Explorer menu, click the IDE-DEV/app/lib/config.dart file to open it, and verify that the cloudRunHost config entry was updated.
   Note: The value should match the value of the BACKEND_APP_HOST environment variable that you set in the previous step.
Review the chat feature code
Let's review the code in the Flutter app that implements the chat feature.
1. In the IDE Explorer menu, click the /app/lib/ui/screens/chat.dart file to open it.
   The ChatPage class builds the Flutter widget for the chat page in the UI. This page is displayed in the UI when the Tell me more button is pressed.
   To build the chat UI, the app uses the flutter_chat_ui package, which implements most of the chat boilerplate code that you can customize. The build() method uses the Chat class to construct the widget:
   Widget build(BuildContext context) {
     ...
     Chat(
       typingIndicatorOptions: TypingIndicatorOptions(
         typingUsers: typingUsers,
       ),
       listBottomWidget: suggestionsWidget,
       messages: _messages,
       onSendPressed: _handleSendPressed,
       showUserAvatars: true,
       showUserNames: true,
       user: _user,
       theme: DefaultChatTheme(
         receivedMessageBodyTextStyle: TextStyle(
           color: Theme.of(context).colorScheme.onSurface,
         ),
         sentMessageBodyTextStyle: TextStyle(
           color: Theme.of(context).colorScheme.onSecondary,
         ),
         userAvatarNameColors: [
           Theme.of(context).colorScheme.primary,
         ],
         backgroundColor: Theme.of(context).colorScheme.surfaceContainerHigh,
         primaryColor: Theme.of(context).colorScheme.primary,
         secondaryColor: Theme.of(context).colorScheme.surface,
       ),
     ),
     ...
   }
   The listBottomWidget is used to display the list of suggested questions that were returned in the response from the previous Gemini call, letting the user select a question as the chat message.
   The _handleSendPressed() callback method is invoked when the user clicks Send in the chat window. The method constructs a new message, adds the message to the message list, and sends the message to the backend using the askAgent() method.
2. Scroll the code to find the askAgent() method.
   To invoke Gemini using the Reasoning Engine agent, the askAgent() method sends a request to the /ask_gemini URL at the Cloud Run endpoint of the backend app. The request query parameters include the image name and description that were returned in the previous call to Gemini, and the user's message:
   Future askAgent(String name, String description, String question) async {
     var query = 'The photo is $name. $description. $question.';
     var endpoint = Uri.https(cloudRunHost, '/ask_gemini', {'query': query});
     var response = await http.get(endpoint);
     if (response.statusCode == 200) {
       var responseText = convert.utf8.decode(response.bodyBytes);
       return responseText.replaceAll(RegExp(r'\*'), '');
     }
     return 'Sorry I can\'t answer that.';
   }
   The response from the backend is then added to the list of messages in the chat window by the calling function _sendMessageToAgent().
Deploy the Flutter app
Now that the Flutter app configuration is complete, you can build and deploy the app. For the purpose of this lab, to run the app as a web application, we use fwr, a development server for Flutter web.
1. Make sure you are in the Flutter app folder. In the IDE terminal window, run:
   cd ~/app
2. To fetch the app project dependencies, run:
   flutter pub get
3. To build the project and start the web server, run:
   fwr
   Wait for the server to start and serve the Flutter app.
To verify the objective, click Check my progress.
Task 8. Test the Flutter app
Let's test the Flutter app's functionality.
Download test images
1. To test the app, first download and save an image to your computer. Copy the URL to view the image in a separate browser tab:
2. Right-click the image and save it to your computer.
3. Repeat these steps to download an image of the Google Chrome Dino pin from the URL below:
Test the app
1. To access the Flutter app, copy the Live Server URL from the Qwiklabs credentials panel, and paste it in a new browser tab.
2. Wait a few seconds for the app to load. Then, to upload the Fallingwater image, click Choose from Library.
   After you upload the image, the app invokes the Vertex AI Gemini API to generate a response that contains the name and description of the image, which are displayed in the app's UI.
   It might take a few seconds for the name and description fields in the app UI to populate from the Gemini model response.
   Note: If you receive a permission denied error, ignore it, and click OK to proceed. Click Remove image, and then re-upload the image.
3. Click Tell me more.
   The chat page loads and displays the Suggested Questions that were returned in the response from the previous call to the Gemini API.
   Note: If suggested questions are not displayed, refresh the page, and then repeat the previous two steps to upload the image.
4. Click one of the suggested questions and view the response from the chat agent.
   The agent makes an HTTPS call to the Cloud Run endpoint of the backend app, which then invokes the Reasoning Engine agent that uses Gemini to return a response from Wikipedia.
Test the app with merchandise data
1. In the chat UI, click Remove image.
2. To upload the Google Chrome Dino pin image, click Choose from Library, and upload the image you saved earlier.
   After you upload the image, the app invokes the Vertex AI Gemini API to generate a response that contains the name and description of the image, which are displayed in the app's UI.
3. Click Tell me more.
4. In the chat message box, type:
   Where can I buy this pin?
   The agent makes an HTTPS call to the Cloud Run endpoint of the backend app, which then invokes the Reasoning Engine agent that uses Gemini to return a response from the Vertex AI Search data store.
5. (Optional) To see the Flutter app snap to a mobile layout, resize your browser window to roughly the size of a mobile device.
   This indicates that the Flutter app is responsive to changes based on the screen or window size. Flutter has widgets and packages that help make your app responsive and adaptive to changes in device configuration.
End your lab
When you have completed your lab, click End Lab. Qwiklabs removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
- 1 star = Very dissatisfied
- 2 stars = Dissatisfied
- 3 stars = Neutral
- 4 stars = Satisfied
- 5 stars = Very satisfied
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
Congratulations!
In this lab, you:
- Created a search data store and search app using Vertex AI Agent Builder in the Google Cloud console.
- Deployed a Reasoning Engine agent using Vertex AI Workbench.
- Built a Python app that integrates with Vertex AI Search and the Reasoning Engine agent.
- Deployed the app to Cloud Run and used it as the backend for a Flutter app.
- Set up a Firebase project and connected it to the Flutter app.
- Viewed the Flutter app code to understand some of the functionality.
- Ran and tested the Flutter app.
Next steps/learn more
Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.