On-demand activities

Google Cloud offers a comprehensive catalog of more than 980 learning activities in a variety of formats, so you can choose what fits your needs. Pick short, standalone labs, or multi-module courses that combine videos, documents, labs, and quizzes. In labs, you get hands-on experience with Google Cloud by working with real cloud resources through temporary credentials. Completing activities earns you badges, making it easy to track and measure your Google Cloud learning progress!

1201 results
  1. Lab · Featured

    Navigate Dataplex

    Use Dataplex to identify data sources in BigQuery and Dataproc.

  2. Lab · Featured

    HTTP Google Cloud Functions in Go

    In this lab you'll build an HTTP Cloud Function in Go.

  3. Lab · Featured

    Stream Processing with Cloud Pub/Sub and Dataflow: Qwik Start

    This quickstart shows you how to use Dataflow to read messages published to a Pub/Sub topic, window (or group) the messages by timestamp, and write the messages to Cloud Storage.

  4. Lab · Featured

    Prepare Data for ML APIs on Google Cloud: Challenge Lab

    After completing the labs in the "Prepare Data for ML APIs on Google Cloud" course, you can test the skills and knowledge you learned with this challenge lab. Familiarize yourself with the lab content before you begin.

  5. Lab · Featured

    Use Vertex AI Studio for Healthcare

    In this lab, you will learn how to use Vertex AI Studio to create prompts and conversations with Gemini's multimodal capabilities in a healthcare context.

  6. Lab · Featured

    Analyze Customer Reviews with Gemini Using SQL

    Learn how to use BigQuery ML with remote models (Gemini) to analyze customer reviews in SQL.

  7. Lab · Featured

    Mitigate Bias with MinDiff in TensorFlow

    This lab shows you how to use the MinDiff technique to mitigate bias with the TensorFlow Model Remediation library.

  8. Lab · Featured

    Fraud Detection on Financial Transactions with Machine Learning on Google Cloud

    Explore financial transaction data for fraud analysis, and apply feature engineering and machine learning techniques to detect fraudulent activity using BigQuery ML.

  9. Lab · Featured

    Build an LLM and RAG-based Chat Application with AlloyDB and Vertex AI

    In this lab, you create a chat application that uses Retrieval Augmented Generation, or RAG, to augment prompts with data retrieved from AlloyDB.

  10. Lab · Featured

    Create Text Embeddings for a Vector Store using LangChain

    In this lab, you learn how to use LangChain to store documents as embeddings in a vector store. You will use the LangChain framework to split a set of documents into chunks, vectorize (embed) each chunk and then store the embeddings in a vector database.
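The split-embed-store workflow described in the last lab can be sketched in plain Python. This is a conceptual illustration only: the word-based chunker, toy bag-of-words "embedding", and in-memory store below are stand-ins for LangChain's text splitters, a Vertex AI embedding model, and a real vector database, and all names in it are hypothetical.

```python
import math
from collections import Counter

def split_into_chunks(text: str, chunk_size: int = 40) -> list[str]:
    """Split text into fixed-size word chunks (stand-in for a LangChain text splitter)."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real setup would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class InMemoryVectorStore:
    """Minimal stand-in for a vector database: stores (embedding, chunk) pairs."""
    def __init__(self):
        self.entries = []

    def add(self, chunk: str) -> None:
        self.entries.append((embed(chunk), chunk))

    def search(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [chunk for _, chunk in ranked[:k]]

# Chunk a document, "embed" each chunk, store it, then retrieve by similarity.
store = InMemoryVectorStore()
doc = "BigQuery is a serverless data warehouse. Dataflow runs stream and batch pipelines."
for chunk in split_into_chunks(doc, chunk_size=6):
    store.add(chunk)

print(store.search("serverless warehouse", k=1))
```

Swapping each stand-in for its real counterpart (splitter, embedding model, vector database) keeps the same three-step shape: split, embed, store, then query by vector similarity.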