Markos Muche
Member since 2022
This course introduces you to the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You will learn how attention works and how it can be used to improve the performance of a variety of machine learning tasks, including machine translation, text summarization, and question answering.
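For readers who want a concrete picture of the mechanism this course describes, here is a minimal NumPy sketch of scaled dot-product attention (not course material; the shapes and random inputs are purely illustrative):

```python
# Illustrative sketch of scaled dot-product attention, the core idea behind
# "focusing on specific parts of an input sequence". Not course code.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (seq_len, d). Returns the attended values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over input positions
    return weights @ V                                 # weighted mix of the values

# Three input positions with four-dimensional embeddings.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)     # (3, 4)
```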
This course introduces you to diffusion models, a family of machine learning models that have recently shown great promise in image generation. Diffusion models draw inspiration from physics, specifically thermodynamics. Over the past few years, diffusion models have become a popular research topic and gained traction across the industry. Many state-of-the-art image generation models and tools on Google Cloud are built on diffusion models. This course covers the theory behind diffusion models and how to train and deploy them on Vertex AI.
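As a rough illustration of the physics-inspired idea behind these models (not taken from the course), the toy snippet below implements only the forward noising process of a DDPM-style diffusion model with a linear variance schedule; a real model is trained to reverse this process:

```python
# Toy forward ("noising") process of a diffusion model: data is gradually mixed
# with Gaussian noise according to a variance schedule. Illustrative only.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)            # linear variance schedule
alphas_cumprod = np.cumprod(1.0 - betas)      # cumulative product of (1 - beta_t)

def q_sample(x0, t, rng):
    """Sample x_t ~ N(sqrt(a_bar_t) * x0, (1 - a_bar_t) * I)."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alphas_cumprod[t]) * x0 + np.sqrt(1.0 - alphas_cumprod[t]) * noise

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))                  # a stand-in for an image
x_noisy = q_sample(x0, t=500, rng=rng)        # halfway through the schedule
print(x_noisy.shape)
```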
This introductory course explores the basics of data analysis, including collection, storage, exploration, visualization, and sharing. This course also introduces Google Cloud's data analytics tools and services. Through video lectures, demos, quizzes, and hands-on labs, this course demonstrates how to go from raw data to impactful visualizations and dashboards. Whether you already work with data and want to learn how to be successful on Google Cloud, or you’re looking to progress in your career, this course will help you get started.
Complete the introductory Build a Data Mesh with Dataplex skill badge to demonstrate skills in the following: building a data mesh with Dataplex to facilitate data security, governance, and discovery on Google Cloud. You practice and test your skills in tagging assets, assigning IAM roles, and assessing data quality in Dataplex. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course, and the final assessment challenge lab, to receive a digital badge that you can share with your network.
In this second installment of the Dataflow course series, we are going to be diving deeper on developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Towards the end of the course, we introduce SQL and DataFrames as ways to represent your business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
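As a taste of the windowing, watermark, and trigger concepts covered here, the following is a minimal sketch using the Beam Python SDK; it assumes an `events` PCollection of (key, value) pairs that already carries event-time timestamps, and is not code from the course labs:

```python
# Minimal windowing-and-triggering sketch with the Beam Python SDK.
import apache_beam as beam
from apache_beam import window
from apache_beam.transforms.trigger import (
    AccumulationMode, AfterProcessingTime, AfterWatermark)

def add_windowing(events):
    """Group events into 60-second fixed windows with early firings."""
    return (
        events
        | "FixedWindows" >> beam.WindowInto(
            window.FixedWindows(60),                               # 1-minute event-time windows
            trigger=AfterWatermark(early=AfterProcessingTime(10)), # early results every ~10s
            accumulation_mode=AccumulationMode.DISCARDING,
            allowed_lateness=120,                                  # tolerate data up to 2 minutes late
        )
        | "SumPerKey" >> beam.CombinePerKey(sum)
    )
```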
This course helps learners create a study plan for the PDE (Professional Data Engineer) certification exam. Learners explore the breadth and scope of the domains covered in the exam. Learners assess their exam readiness and create their individual study plan.
Enterprise data sharing made easy with Dataplex and Analytics Hub: Learn how to share data securely in your lakehouse with minimized data duplication and more data governance through Dataplex and Analytics Hub - enterprise data management made easy.
Creating Data Pipelines with Data Fusion: In this session, we will explore using Data Fusion to create code-free, point-and-click pipelines that can ETL high volumes of data with support for popular data sources, including file systems and object stores, relational and NoSQL databases, and SaaS systems.
The third course in this course series is Achieving Advanced Insights with BigQuery. Here we will build on your growing knowledge of SQL as we dive into advanced functions and how to break apart a complex query into manageable steps. We will cover the internal architecture of BigQuery (column-based sharded storage) and advanced SQL topics like nested and repeated fields through the use of Arrays and Structs. Lastly, we will dive into optimizing your queries for performance and securing your data through authorized views. After completing this course, enroll in the Applying Machine Learning to your Data with Google Cloud course.
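For a flavor of the nested and repeated fields mentioned above, here is a small, hypothetical example run through the google-cloud-bigquery Python client; the inline table and column names are placeholders, not course material:

```python
# Build an ARRAY of STRUCTs inline, then flatten it with UNNEST (standard SQL).
from google.cloud import bigquery

client = bigquery.Client()

query = """
WITH orders AS (
  SELECT 'order-1' AS order_id,
         [STRUCT('pencil' AS sku, 3 AS qty),
          STRUCT('eraser' AS sku, 1 AS qty)] AS items
)
SELECT order_id, item.sku, item.qty
FROM orders, UNNEST(items) AS item
"""

for row in client.query(query).result():
    print(row.order_id, row.sku, row.qty)
```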
This course covers BigQuery fundamentals for professionals who are familiar with SQL-based cloud data warehouses in Redshift and want to begin working in BigQuery. Through interactive lecture content and hands-on labs, you learn how to provision resources, create and share data assets, ingest data, and optimize query performance in BigQuery. Drawing upon your knowledge of Redshift, you also learn about similarities and differences between Redshift and BigQuery to help you get started with data warehouses in BigQuery. After this course, you can continue your BigQuery journey by completing the skill badge quest titled Build and Optimize Data Warehouses with BigQuery.
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
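To illustrate the portability point, here is a sketch of a trivial Beam pipeline that can run locally or on Dataflow simply by changing its options; the project, region, and bucket values are placeholders, and this is not code from the course:

```python
# The same Beam pipeline runs on different backends by swapping the runner.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DirectRunner",            # swap to "DataflowRunner" to execute on Dataflow
    # project="my-project",           # required for DataflowRunner (placeholder)
    # region="us-central1",
    # temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Create" >> beam.Create(["alpha", "beta", "gamma"])
        | "Lengths" >> beam.Map(len)
        | "Print" >> beam.Map(print)
    )
```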
Complete the intermediate Engineer Data for Predictive Modeling with BigQuery ML skill badge course to demonstrate skills in the following: building data transformation pipelines to BigQuery using Dataprep by Trifacta; building extract, transform, and load (ETL) workflows using Cloud Storage, Dataflow, and BigQuery; building machine learning models using BigQuery ML; and copying data across multiple locations using Cloud Composer. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course and the final assessment challenge lab to receive a digital badge that you can share with your network.
Complete the intermediate Build a Data Warehouse with BigQuery skill badge course to demonstrate skills in the following: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course and the final assessment challenge lab to receive a digital badge that you can share with your network.
Complete the introductory Prepare Data for ML APIs on Google Cloud skill badge course to demonstrate skills in the following: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course and the final assessment challenge lab to receive a skill badge that you can share with your network.
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, this course covers AutoML. For more tailored machine learning capabilities, this course introduces Notebooks and BigQuery machine learning (BigQuery ML). Also, this course covers how to productionalize machine learning solutions by using Vertex AI.
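As a hedged illustration of the BigQuery ML workflow mentioned above (not course material; dataset, table, and column names are placeholders), a model can be trained and used for prediction with plain SQL submitted through the Python client:

```python
# Train and use a BigQuery ML model entirely with SQL via the BigQuery client.
from google.cloud import bigquery

client = bigquery.Client()

create_model_sql = """
CREATE OR REPLACE MODEL `my_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT churned, tenure_months, monthly_spend, support_tickets
FROM `my_dataset.customers`
"""
client.query(create_model_sql).result()   # trains the model inside BigQuery

predict_sql = """
SELECT *
FROM ML.PREDICT(MODEL `my_dataset.churn_model`,
                (SELECT tenure_months, monthly_spend, support_tickets
                 FROM `my_dataset.new_customers`))
"""
for row in client.query(predict_sql).result():
    print(dict(row))
```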
Processing streaming data is becoming increasingly popular as streaming enables businesses to get real-time metrics on business operations. This course covers how to build streaming data pipelines on Google Cloud. Pub/Sub is described for handling incoming streaming data. The course also covers how to apply aggregations and transformations to streaming data using Dataflow, and how to store processed records in BigQuery or Bigtable for analysis. Learners get hands-on experience building streaming data pipeline components on Google Cloud by using Qwiklabs.
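The shape of such a streaming pipeline might look like the sketch below, which reads from Pub/Sub, windows and aggregates with Beam code that Dataflow can run, and writes to BigQuery; the topic, project, and table names are placeholders, and this is not the course's lab code:

```python
# Streaming pipeline shape: Pub/Sub -> window/aggregate -> BigQuery.
import json

import apache_beam as beam
from apache_beam import window
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # streaming mode is required for Pub/Sub sources

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadPubSub" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
        | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "Window" >> beam.WindowInto(window.FixedWindows(60))
        | "KeyByPage" >> beam.Map(lambda e: (e["page"], 1))
        | "CountPerPage" >> beam.CombinePerKey(sum)
        | "ToRow" >> beam.Map(lambda kv: {"page": kv[0], "views": kv[1]})
        | "WriteBQ" >> beam.io.WriteToBigQuery(
            "my-project:web.page_views_per_minute",
            schema="page:STRING,views:INTEGER",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```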
Data pipelines typically fall under one of the Extract and Load (EL), Extract, Load and Transform (ELT) or Extract, Transform and Load (ETL) paradigms. This course describes which paradigm should be used and when for batch data. Furthermore, this course covers several technologies on Google Cloud for data transformation including BigQuery, executing Spark on Dataproc, pipeline graphs in Cloud Data Fusion and serverless data processing with Dataflow. Learners get hands-on experience building data pipeline components on Google Cloud using Qwiklabs.
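To make the EL/ELT distinction concrete, here is a hedged sketch using the BigQuery Python client: the raw file is loaded as-is (EL), then transformed inside the warehouse with SQL (the "T" of ELT). Bucket, dataset, and table names are placeholders, not course material:

```python
# EL: load raw data untouched; ELT: transform it afterwards inside BigQuery.
from google.cloud import bigquery

client = bigquery.Client()

# EL: load the raw CSV from Cloud Storage without transformation.
load_job = client.load_table_from_uri(
    "gs://my-bucket/raw/sales_2024.csv",
    "my_dataset.raw_sales",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    ),
)
load_job.result()  # wait for the load to finish

# ELT: transform inside the warehouse after loading.
client.query("""
CREATE OR REPLACE TABLE my_dataset.sales_by_region AS
SELECT region, SUM(amount) AS total_amount
FROM my_dataset.raw_sales
GROUP BY region
""").result()
```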
Earn a skill badge by completing the Set Up a Google Cloud Network course, where you will learn how to perform basic networking tasks on Google Cloud Platform - create a custom network, add subnets and firewall rules, then create VMs and test the latency when they communicate with each other. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete the skill badge course, and the final assessment challenge lab, to receive a digital badge that you can share with your network.
The two key components of any data pipeline are data lakes and warehouses. This course highlights use cases for each type of storage and dives into the available data lake and warehouse solutions on Google Cloud in technical detail. Also, this course describes the role of a data engineer, the benefits of a successful data pipeline to business operations, and examines why data engineering should be done in a cloud environment. This is the first course of the Data Engineering on Google Cloud series. After completing this course, enroll in the Building Batch Data Pipelines on Google Cloud course.
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.
This course helps learners create a study plan for the PCA (Professional Cloud Architect) certification exam. Learners explore the breadth and scope of the domains covered in the exam. Learners assess their exam readiness and create their individual study plan.
Want to scale your data analysis efforts without managing database hardware? Learn the best practices for querying and getting insights from your data warehouse with this interactive series of BigQuery labs. BigQuery is Google's fully managed, NoOps, low-cost analytics database. With BigQuery you can query terabytes and terabytes of data without having any infrastructure to manage or needing a database administrator. BigQuery uses SQL and can take advantage of the pay-as-you-go model. BigQuery allows you to focus on analyzing data to find meaningful insights.
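As a brief, hypothetical example of the pay-as-you-go model mentioned above (not part of the labs), a dry run estimates how many bytes a query would scan before you run and pay for it:

```python
# Estimate query cost with a dry run, then execute the query for real.
from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT name, SUM(number) AS total
FROM `bigquery-public-data.usa_names.usa_1910_2013`
GROUP BY name
ORDER BY total DESC
LIMIT 10
"""

# Dry run: no bytes are billed, but BigQuery reports how many would be processed.
dry = client.query(query, job_config=bigquery.QueryJobConfig(dry_run=True))
print(f"Query would process {dry.total_bytes_processed / 1e9:.2f} GB")

# Run it for real and print the results.
for row in client.query(query).result():
    print(row.name, row.total)
```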
Welcome to Getting Started with Google Kubernetes Engine. If you're interested in Kubernetes, a software layer that sits between your applications and your hardware infrastructure, you're in the right place! Google Kubernetes Engine brings you Kubernetes as a managed service on Google Cloud. The goal of this course is to introduce the fundamentals of Google Kubernetes Engine, commonly called GKE, and how to containerize applications and run them in Google Cloud. The course starts with the basics of Google Cloud, followed by an overview of containers, Kubernetes, Kubernetes architecture, and Kubernetes operations.
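Once a GKE cluster exists and credentials are configured, running a containerized application comes down to creating Kubernetes objects; the sketch below uses the official Kubernetes Python client with a sample image and placeholder names, and is not code from the course:

```python
# Create a simple Deployment on an existing cluster with the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # assumes cluster credentials are already set up locally

container = client.V1Container(
    name="hello-app",
    image="us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0",  # sample image
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-app"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```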
This course teaches participants to build highly reliable and efficient solutions on Google Cloud using proven design patterns. It is a continuation of the Architecting with Google Compute Engine or Architecting with Google Kubernetes Engine courses and assumes hands-on experience with the technologies covered in either of those courses. Through a combination of presentations, design activities, and hands-on labs, participants learn to define and balance business and technical requirements to design Google Cloud deployments that are highly reliable, highly available, secure, and cost-effective.
This self-paced accelerated course introduces participants to the comprehensive and flexible infrastructure and platform services provided by Google Cloud, with a focus on Compute Engine. Through a combination of video lectures, demos, and hands-on labs, participants explore and deploy solution elements, including infrastructure components such as networks, systems, and application services. The course also covers how to deploy practical solutions, including customer-supplied encryption keys, security and access management, quotas and billing, and resource monitoring.
This self-paced accelerated course introduces participants to the comprehensive and flexible infrastructure and platform services provided by Google Cloud, with a focus on Compute Engine. Through a combination of video lectures, demos, and hands-on labs, participants explore and deploy solution elements, including infrastructure components such as networks, virtual machines, and application services. You will learn how to use Google Cloud through the console and Cloud Shell. You will also learn about the role of a cloud architect, approaches to infrastructure design, and virtual networking configuration with Virtual Private Cloud (VPC), projects, networks, subnetworks, IP addresses, routes, and firewall rules.
Google Cloud Fundamentals: Core Infrastructure introduces important concepts and terminology for working with Google Cloud. Through videos and hands-on labs, this course presents and compares many of Google Cloud's computing and storage services, along with important resource and policy management tools.
This course offers hands-on practice with migrating MySQL data to Cloud SQL using Database Migration Service. You start with an introductory lab that briefly reviews how to get started with Cloud SQL for MySQL, including how to connect to Cloud SQL instances using the Cloud Console. Then, you continue with two labs focused on migrating MySQL databases to Cloud SQL using different job types and connectivity options available in Database Migration Service. The course ends with a lab on migrating MySQL user data when running Database Migration Service jobs.
Complete the intermediate Build Infrastructure with Terraform on Google Cloud skill badge to demonstrate skills in the following: Infrastructure as Code (IaC) principles using Terraform, provisioning and managing Google Cloud resources with Terraform configurations, effective state management (local and remote), and modularizing Terraform code for reusability and organization. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course and the final assessment challenge lab to receive a skill badge that you can share with your network.
Earn a skill badge by completing the Develop your Google Cloud Network course, where you learn multiple ways to deploy and monitor applications, including how to: explore IAM roles and add or remove project access, create VPC networks, deploy and monitor Compute Engine VMs, write SQL queries, and deploy applications using Kubernetes with multiple deployment approaches. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge, and the final assessment challenge lab, to receive a skill badge that you can share with your network.
This course introduces the AI and machine learning (ML) services on Google Cloud that can be used to build both predictive and generative AI projects. It explores the technologies, products, and tools available across the data-to-AI lifecycle, including AI foundations, development, and solutions. Through engaging learning experiences and hands-on practice, it helps data scientists, AI developers, and ML engineers enhance their skills and knowledge.
If you are a new cloud developer looking for hands-on practice beyond GCP Essentials, this quest is for you. You will gain practical experience through labs that dive into Cloud Storage and other key application services such as Stackdriver and Cloud Functions. By completing this quest, you will develop valuable skills that apply to any GCP project. A 1-minute video introduces the key concepts covered in these labs.
Complete the introductory Implement Load Balancing on Compute Engine skill badge course to demonstrate skills in the following: writing gcloud commands and using Cloud Shell, creating and deploying virtual machines in Compute Engine, and configuring network and HTTP load balancers. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course and the final assessment challenge lab to receive a skill badge that you can share with your network.
In this introductory-level quest, you get hands-on practice with the fundamental tools and services of Google Cloud Platform. GCP Essentials is the recommended first quest for Google Cloud learners. Little to no prior cloud knowledge? No problem! This quest provides the hands-on experience you need to get up and running on any GCP project. From writing Cloud Shell commands and deploying your first virtual machine, to running applications behind a load balancer or on Kubernetes Engine, GCP Essentials covers the essential basics of the platform. A 1-minute video introduces the key concepts covered in each lab.