Build an AI Image Recognition app using Gemini on Vertex AI
bb-ide-genai-001
Overview
- Labs are timed and cannot be paused. The timer starts when you click Start Lab.
- The included cloud terminal is preconfigured with the gcloud SDK.
- Use the terminal to execute commands and then click Check my progress to verify your work.
Objective
Generative AI on Vertex AI (also known as genAI or gen AI) gives you access to Google's large generative AI models so you can test, tune, and deploy them for use in your AI-powered applications. In this lab, you will:
- Connect to Vertex AI (Google Cloud AI platform): Learn how to establish a connection to Google's AI services using the Vertex AI SDK (see the short sketch after this list).
- Load a pre-trained generative AI model (Gemini): Discover how to use a powerful, pre-trained AI model without building one from scratch.
- Send image + text questions to the AI model: Understand how to provide input for the AI to process.
- Extract text-based answers from the AI: Learn to handle and interpret the text responses generated by the AI model.
- Understand the basics of building AI applications: Gain insights into the core concepts of integrating AI into software projects.
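As a preview of the first objective, establishing the connection is a single initialization call with the Vertex AI SDK. The following is a minimal sketch, assuming the google-cloud-aiplatform package is available in the lab environment; the project ID and region are placeholders.

    import vertexai

    # Initialize the Vertex AI SDK with a project ID and region.
    # These values are placeholders; the lab supplies the real ones.
    vertexai.init(project="your-project-id", location="us-central1")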
Working with Vertex AI Python SDK
After starting the lab, you will see a split-pane view with the Code Editor on the left and the lab instructions on the right. Follow these steps to interact with the Generative AI APIs using the Vertex AI Python SDK.
- Click File->New File to open a new file within the Code Editor.
- Copy and paste the provided code snippet into your file (a representative sketch follows these steps).
- Click File->Save, enter genai.py in the Name field, and click Save.
- Execute the Python file by clicking the triangle icon in the Code Editor, or by invoking the command noted after the snippet below from the terminal within the Code Editor pane, to view the output.
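The snippet itself is pasted from the lab's instruction pane and is not reproduced here; the following is a minimal sketch consistent with the Code Explanation below, assuming the Vertex AI SDK (the google-cloud-aiplatform package) is pre-installed in the lab environment. The project ID, region, sample image URI, and question are illustrative placeholders, not values from the lab.

    import vertexai
    from vertexai.generative_models import GenerativeModel, Part

    def generate_text(project_id: str, location: str) -> str:
        # Connect to Vertex AI in the given project and region.
        vertexai.init(project=project_id, location=location)

        # Load the pre-trained Gemini multimodal model.
        multimodal_model = GenerativeModel("gemini-1.0-pro-vision")

        # Send an image (referenced by its Cloud Storage URI) together with a text question.
        response = multimodal_model.generate_content(
            [
                Part.from_uri(
                    "gs://generativeai-downloads/images/scones.jpg",  # illustrative sample image
                    mime_type="image/jpeg",
                ),
                "What is shown in this image?",
            ]
        )

        # The model's answer comes back as plain text.
        return response.text

    # Replace these placeholders with the project ID and region from your lab.
    print(generate_text("your-project-id", "us-central1"))

With the file saved as genai.py, the run command would be along the lines of python3 genai.py (the exact interpreter path may differ in the lab environment); the model's text answer is printed to the terminal.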
Code Explanation
- The code snippet loads a pre-trained AI model called Gemini (gemini-1.0-pro-vision) on Vertex AI.
- The code calls the generate_content method of the loaded Gemini model.
- The input to the method is an image URI and a prompt containing a question about the image.
- The code draws on Gemini's ability to understand images and text together: the model answers the question in the prompt by describing the contents of the image.
Try it yourself! Experiment with different image URIs and prompt questions to explore Gemini's capabilities.
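For example, continuing from the sketch above, swapping in a different image URI, MIME type, and question is all that is needed; the bucket and file below are hypothetical.

    # Hypothetical image URI and a different question; the model object is unchanged.
    response = multimodal_model.generate_content(
        [
            Part.from_uri("gs://your-bucket/your-image.png", mime_type="image/png"),
            "List the objects you can see in this image.",
        ]
    )
    print(response.text)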
Click Check my progress to verify the objective.
Congratulations!
You have completed the lab!
Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.