
Using Gemini for Multimodal Retail Recommendations


Lab · 1 hour · 5 Credits · Intermediate
info This lab may incorporate AI tools to support your learning.

GSP1230


Overview

Gemini is a family of generative AI models developed by Google DeepMind that is designed for multimodal use cases. The Gemini API gives you access to the Gemini Pro Vision and Gemini Pro models.

For retail companies, recommendation systems improve customer experience and can thus increase sales. In this lab, you will learn how to use the Gemini Pro Vision model to rapidly create a multimodal recommendation system. Because it is multimodal, the model can provide both recommendations and the reasoning behind them.

In this lab, you will begin with a scene (e.g. a living room) and use the Gemini Pro Vision model to perform visual understanding. You will also investigate how the Gemini Pro Vision model can be used to recommend an item (e.g. a chair) from a list of furniture items as input.

Vertex AI Gemini API

The Vertex AI Gemini API provides a unified interface for interacting with Gemini models. There are currently two models available in the Gemini API:

  1. Gemini Pro model (gemini-pro): Designed to handle natural language tasks, multiturn text and code chat, and code generation.
  2. Gemini Pro Vision model (gemini-pro-vision): Supports multimodal prompts. You can include text, images, and video in your prompt requests and get text or code responses.
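
As a minimal, hedged sketch (model names taken from the list above; `PROJECT_ID` and `LOCATION` are placeholders for your lab values, not real identifiers), both models are created the same way with the Vertex AI SDK for Python:

```python
# Both Gemini models are created the same way via the Vertex AI SDK for
# Python. PROJECT_ID and LOCATION below are placeholders for your lab values.
MODEL_NAMES = {
    "text": "gemini-pro",               # natural language, chat, and code tasks
    "multimodal": "gemini-pro-vision",  # text + image/video prompts
}

def load_model(kind: str = "multimodal"):
    # Lazy imports so the sketch reads without the SDK installed.
    import vertexai
    from vertexai.generative_models import GenerativeModel

    vertexai.init(project="PROJECT_ID", location="LOCATION")
    return GenerativeModel(MODEL_NAMES[kind])
```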

You can interact with the Gemini API using the following methods:

  • Use the Vertex AI Studio for quick testing and command generation
  • Use cURL commands
  • Use the Vertex AI SDK

This lab focuses on the multimodal capabilities of the Gemini Pro Vision model.

For more information, see the Generative AI on Vertex AI documentation.

Objectives

In this lab, you will learn how to:

  • Use the Gemini Pro Vision model (gemini-pro-vision) to perform visual understanding
  • Take multimodality into consideration in prompting for the Gemini Pro Vision model
  • Create a retail recommendation application using the Gemini Pro Vision model

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents any conflicts between your personal account and the Student account, which could cause extra charges to be incurred on your personal account.
  • Time to complete the lab---remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Cloud console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).

    The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username below and paste it into the Sign in dialog.

    {{{user_0.username | "Username"}}}

    You can also find the Username in the Lab Details panel.

  4. Click Next.

  5. Copy the Password below and paste it into the Welcome dialog.

    {{{user_0.password | "Password"}}}

    You can also find the Password in the Lab Details panel.

  6. Click Next.

    Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  7. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Google Cloud console opens in this tab.

Note: To view a menu with a list of Google Cloud products and services, click the Navigation menu at the top-left.

Task 1. Open the notebook in Vertex AI Workbench

  1. In the Google Cloud Console, on the Navigation menu, click Vertex AI > Workbench.

  2. On the User-Managed Notebooks page, find the generative-ai-jupyterlab notebook and click on the Open JupyterLab button.

The JupyterLab interface opens in a new browser tab.

Task 2. Open the generative-ai folder

  1. Navigate to the generative-ai folder on the left-hand side of the notebook.

  2. Navigate to the /gemini/use-cases/retail folder.

  3. Open the multimodal_retail_recommendations.ipynb file.

  4. Run through the Getting Started and Import libraries sections of the notebook.

    • For Project ID, use , and for the Location, use .
Note: You can skip any notebook cells that are marked Colab only.

Click Check my progress to verify the objective.

Install Vertex AI SDK for Python and import libraries.

In the following sections, you will run through the notebook cells to see how to use the multimodal capabilities of the Gemini Pro Vision model.

Task 3. Use the Gemini Pro Vision model

The Gemini Pro Vision model (gemini-pro-vision) is a multimodal model that supports adding images and video to text or chat prompts and returns a text response.

  1. In this task, run through the notebook cells to see how to use the Gemini Pro Vision model to describe a room in detail from its image, combining text and image in a single prompt.
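
The cells for this step follow roughly the shape below. This is an illustrative sketch, not the notebook's exact code: the `gs://` URI and the prompt wording are placeholders, and the real image is supplied by the lab.

```python
# Illustrative sketch of combining a room image and a text instruction in a
# single multimodal prompt. The URI and prompt text are placeholders, not the
# notebook's actual assets.
ROOM_PROMPT = (
    "Describe this room in detail, including its style, "
    "color palette, and the furniture already in it."
)

def describe_room(image_uri: str) -> str:
    # Lazy imports so the sketch reads without the SDK installed.
    from vertexai.generative_models import GenerativeModel, Part

    model = GenerativeModel("gemini-pro-vision")
    image = Part.from_uri(image_uri, mime_type="image/jpeg")
    # A list mixing Part objects and strings forms one multimodal prompt.
    response = model.generate_content([image, ROOM_PROMPT])
    return response.text
```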

Click Check my progress to verify the objective.

Use Gemini Pro Vision model to describe a room.

Task 4. Generate open recommendations based on built-in knowledge

Using the same image, you can ask the model to recommend a piece of furniture that would fit in it, along with a description of the room. Note that the model can recommend any piece of furniture in this case, drawing solely on its built-in knowledge.

  1. Using the same image, run through the notebook cells to see how to use the Gemini Pro Vision model to recommend a piece of furniture that would fit in the room, along with a description of the room.
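
A hedged sketch of this open-ended step: no candidate items are supplied, so the model draws only on its built-in knowledge. The prompt wording and URI are placeholders, not the notebook's exact text.

```python
# Illustrative sketch of an open recommendation: the prompt constrains the
# task (describe, then recommend and justify) but not the candidate items.
# Prompt wording and the image URI are placeholders.
OPEN_PROMPT = (
    "Describe this room, then recommend one new piece of furniture for it "
    "and explain why it fits the room's style."
)

def recommend_furniture(image_uri: str) -> str:
    # Lazy imports so the sketch reads without the SDK installed.
    from vertexai.generative_models import GenerativeModel, Part

    model = GenerativeModel("gemini-pro-vision")
    image = Part.from_uri(image_uri, mime_type="image/jpeg")
    return model.generate_content([image, OPEN_PROMPT]).text
```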

Click Check my progress to verify the objective.

Use Gemini Pro Vision model to recommend a piece of furniture.

Task 5. Generate recommendations based on provided images

Instead of keeping the recommendation open, you can also provide a list of items for the model to choose from. In this section, you will download a few chair images and set them as options for the Gemini model to recommend from. This is particularly useful for retail companies that want to provide recommendations to users based on the kind of room they have and the items that the store offers.

  1. In this task, run through the notebook cells to see how to use the Gemini Pro Vision model to recommend a piece of furniture from a list of items.
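
The constrained step can be sketched as below: chair images are interleaved with numbered labels so the model can answer by item number. All URIs, labels, and prompt text are placeholders rather than the notebook's exact code.

```python
# Illustrative sketch of a constrained recommendation: candidate chair images
# are interleaved with numbered labels, and the prompt asks the model to pick
# one by number. URIs and wording are placeholders.
def chair_labels(n: int) -> list:
    """Numbered labels ('chair 1', 'chair 2', ...) for the candidate images."""
    return [f"chair {i}" for i in range(1, n + 1)]

def build_contents(room_uri: str, chair_uris: list) -> list:
    # Lazy import so the sketch reads without the SDK installed.
    from vertexai.generative_models import Part

    contents = [
        "Consider the following room:",
        Part.from_uri(room_uri, mime_type="image/jpeg"),
    ]
    for label, uri in zip(chair_labels(len(chair_uris)), chair_uris):
        contents += [f"{label}:", Part.from_uri(uri, mime_type="image/jpeg")]
    contents.append(
        "Which chair best fits this room? Answer with the chair number "
        "and explain your reasoning."
    )
    # Pass this list to GenerativeModel("gemini-pro-vision").generate_content().
    return contents
```

Because the prompt names each candidate, the model's text response can point at a specific catalog item, which is what makes this pattern useful for retail.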

Click Check my progress to verify the objective.

Use Gemini Pro Vision model to recommend an item from a selection.

Congratulations!

Congratulations! In this lab, you successfully explored how to build a multimodal recommendation system for furniture using Gemini. You learned how to use the Gemini Pro Vision model to perform visual understanding and how to take multimodality into account when prompting it. While this lab focused on furniture, you can apply a similar approach to:

  • Recommending clothes based on an occasion or an image of the venue
  • Recommending wallpaper based on the room and settings

Next steps / learn more

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated August 21, 2024

Lab Last Tested August 21, 2024

Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
