Labs are timed and cannot be paused. The timer starts when you click Start Lab.
The included cloud terminal is preconfigured with the gcloud SDK.
Use the terminal to execute commands and then click Check my progress to verify your work.
Objective
Generative AI on Vertex AI (also known as genAI or gen AI) gives you access to Google's large generative AI models so you can test, tune, and deploy them for use in your AI-powered applications. In this lab, you will:
Connect to Vertex AI (Google Cloud AI platform): Learn how to establish a connection to Google's AI services using the Vertex AI SDK.
Load a pre-trained generative AI model (Gemini): Discover how to use a powerful, pre-trained AI model without building one from scratch.
Send text to the AI model: Understand how to provide input for the AI to process.
Extract chat responses from the AI: Learn how to handle and interpret the chat responses generated by the AI model.
Understand the basics of building AI applications: Gain insights into the core concepts of integrating AI into software projects.
Working with Generative AI
After starting the lab, you will get a split pane view consisting of the Code Editor on the left side and the lab instructions on the right side. Follow these steps to interact with the Generative AI APIs using Vertex AI Python SDK.
Chat responses without streaming:
Streaming involves receiving responses to prompts as they are generated; output tokens are sent as soon as the model produces them. A non-streaming response is sent only after all of the output tokens have been generated.
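The difference can be sketched in plain Python, without calling the API. The mock generator below is purely illustrative (it is not part of the Vertex AI SDK): the non-streaming function returns the whole text at once, while the streaming function yields each chunk as it becomes available.

```python
# Hypothetical mock, not the Vertex AI SDK: contrast non-streaming
# vs. streaming delivery of a model response.

def mock_generate_chunks(prompt):
    """Stand-in for a model that produces output tokens in chunks."""
    for chunk in ["A rainbow ", "has seven ", "colors."]:
        yield chunk

def send_message(prompt):
    """Non-streaming: wait for every chunk, then return the whole text."""
    return "".join(mock_generate_chunks(prompt))

def send_message_stream(prompt):
    """Streaming: hand each chunk to the caller as soon as it is produced."""
    yield from mock_generate_chunks(prompt)

full = send_message("What are all the colors in a rainbow?")
print(full)  # complete text, delivered once

streamed = ""
for chunk in send_message_stream("What are all the colors in a rainbow?"):
    streamed += chunk  # each piece arrives as it is generated
print()
```

Either way the final text is the same; streaming simply lets your application start displaying output before generation finishes.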
First, we'll explore chat responses without streaming.
Create a new file to get chat responses without streaming:
Click File > New File to open a new file within the Code Editor.
Copy and paste the provided code snippet into your file.
from google import genai
from google.genai.types import HttpOptions, ModelContent, Part, UserContent
import logging
from google.cloud import logging as gcp_logging
# ------ Below cloud logging code is for Qwiklab's internal use, do not edit/remove it. --------
# Initialize GCP logging
gcp_logging_client = gcp_logging.Client()
gcp_logging_client.setup_logging()
client = genai.Client(
    vertexai=True,
    project='{{{ project_0.project_id | "project-id" }}}',
    location='{{{ project_0.default_region | "REGION" }}}',
    http_options=HttpOptions(api_version="v1")
)
chat = client.chats.create(
    model="gemini-2.0-flash-001",
    history=[
        UserContent(parts=[Part(text="Hello")]),
        ModelContent(
            parts=[Part(text="Great to meet you. What would you like to know?")],
        ),
    ],
)
response = chat.send_message("What are all the colors in a rainbow?")
print(response.text)
response = chat.send_message("Why does it appear when it rains?")
print(response.text)
Click File > Save, enter SendChatwithoutStream.py in the Name field, and click Save.
Execute the Python file by running the command below in the terminal within the Code Editor pane to view the output.
/usr/bin/python3 /SendChatwithoutStream.py
Code Explanation
The code snippet loads a pre-trained AI model called Gemini (gemini-2.0-flash-001) on Vertex AI.
The code calls the send_message method on the chat session created from the Gemini model.
The code uses Gemini's ability to chat: the conversation is seeded with an initial history, and the model responds to the text provided in each prompt.
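The stateful part of this pattern can be illustrated without the SDK. In the plain-Python sketch below (a hypothetical stand-in, with canned replies instead of real model output), each call appends a user turn and a model turn to the history, which is why a follow-up like "Why does it appear when it rains?" can resolve the pronoun "it" from the earlier rainbow exchange.

```python
# Illustrative sketch only (no API calls): how a chat session carries
# history so follow-up prompts can rely on earlier context.

history = [
    {"role": "user", "text": "Hello"},
    {"role": "model", "text": "Great to meet you. What would you like to know?"},
]

def send_message(text, reply):
    """Hypothetical stand-in: record a user turn and a canned model reply."""
    history.append({"role": "user", "text": text})
    history.append({"role": "model", "text": reply})
    return reply

send_message("What are all the colors in a rainbow?",
             "Red, orange, yellow, green, blue, indigo, and violet.")
# By the second question, the history already contains the rainbow
# exchange, so a real model could resolve what "it" refers to.
send_message("Why does it appear when it rains?",
             "Sunlight refracts and reflects inside raindrops.")

print(len(history))  # 6 turns: 2 seeded + 2 questions + 2 replies
```

In the real snippet, the SDK's chat object maintains this history for you; each send_message call grows the conversation the same way.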
Chat responses with streaming:
Now we'll explore chat responses using streaming.
Create a new file to get chat responses with streaming:
Click File > New File to open a new file within the Code Editor.
Copy and paste the provided code snippet into your file.
from google import genai
from google.genai.types import HttpOptions
import logging
from google.cloud import logging as gcp_logging
# ------ Below cloud logging code is for Qwiklab's internal use, do not edit/remove it. --------
# Initialize GCP logging
gcp_logging_client = gcp_logging.Client()
gcp_logging_client.setup_logging()
client = genai.Client(
    vertexai=True,
    project='{{{ project_0.project_id | "project-id" }}}',
    location='{{{ project_0.default_region | "REGION" }}}',
    http_options=HttpOptions(api_version="v1")
)
chat = client.chats.create(model="gemini-2.0-flash-001")
response_text = ""
for chunk in chat.send_message_stream("What are all the colors in a rainbow?"):
    print(chunk.text, end="")
    response_text += chunk.text
Click File > Save, enter SendChatwithStream.py in the Name field, and click Save.
Execute the Python file by running the command below in the terminal within the Code Editor pane to view the output.
/usr/bin/python3 /SendChatwithStream.py
Code Explanation
The code snippet loads a pre-trained AI model called Gemini (gemini-2.0-flash-001) on Vertex AI.
The code calls the send_message_stream method on the chat session created from the Gemini model.
The code uses Gemini's ability to understand prompts and hold a stateful chat conversation, printing each chunk of the response as it arrives.
Try it yourself! Experiment with different prompts to explore Gemini's capabilities.
Click Check my progress to verify the objective.
Send the text prompt requests to Gen AI and receive a chat response
Congratulations!
You have completed the lab!
Copyright 2025 Google LLC. All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
In this lab, you learned how to use Google's Vertex AI SDK to interact with the powerful Gemini generative AI model, sending text-based chat prompts as input and receiving personalized streaming and non-streaming chat responses.