Nguyen Minh Nhat
Member since 2022
This introductory microlearning course explains what generative AI is, how it is used, and how it differs from traditional machine learning methods. It also introduces the Google tools that can help you develop your own generative AI applications.
In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
In this second installment of the Dataflow course series, we dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Toward the end of the course, we introduce SQL and DataFrames to represent your business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
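To make the windowing and trigger concepts above concrete, here is a minimal sketch (not taken from the course materials) using the Apache Beam Python SDK; the sample data, 60-second window, and early-firing trigger are arbitrary illustrative choices.

```python
# Minimal sketch: fixed windows with an early-firing trigger in Apache Beam (Python SDK).
import apache_beam as beam
from apache_beam.transforms import window
from apache_beam.transforms.trigger import (AccumulationMode, AfterProcessingTime,
                                             AfterWatermark)

with beam.Pipeline() as pipeline:
    (pipeline
     | "Create" >> beam.Create([("user1", 1), ("user2", 1), ("user1", 1)])
     | "Window" >> beam.WindowInto(
         window.FixedWindows(60),                          # 60-second fixed windows
         trigger=AfterWatermark(early=AfterProcessingTime(30)),
         accumulation_mode=AccumulationMode.DISCARDING)
     | "CountPerKey" >> beam.CombinePerKey(sum)            # per-key count within each window
     | "Print" >> beam.Map(print))
```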
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, the course covers AutoML. For more tailored machine learning capabilities, it introduces Notebooks and BigQuery machine learning (BigQuery ML). The course also covers how to productionize machine learning solutions by using Vertex AI.
Processing streaming data is becoming increasingly popular, as streaming enables businesses to get real-time metrics on business operations. This course covers how to build streaming data pipelines on Google Cloud. It describes how Pub/Sub handles incoming streaming data, how to apply aggregations and transformations to streaming data using Dataflow, and how to store processed records in BigQuery or Bigtable for analysis. Learners get hands-on experience building streaming data pipeline components on Google Cloud by using Qwiklabs.
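As a small illustration of the ingestion step described above, the sketch below publishes a JSON event to a Pub/Sub topic with the google-cloud-pubsub client library; the project and topic names are hypothetical.

```python
# Minimal sketch: publishing an event to a Pub/Sub topic that feeds a streaming pipeline.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "sales-events")  # hypothetical names

event = {"order_id": "A-1001", "amount": 42.50}
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print("Published message ID:", future.result())  # blocks until the publish completes
```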
Data pipelines typically fall under one of the Extract and Load (EL); Extract, Load, and Transform (ELT); or Extract, Transform, and Load (ETL) paradigms. This course describes which paradigm should be used, and when, for batch data. It also covers several Google Cloud technologies for data transformation, including BigQuery, executing Spark on Dataproc, pipeline graphs in Cloud Data Fusion, and serverless data processing with Dataflow. Learners get hands-on experience building data pipeline components on Google Cloud using Qwiklabs.
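For a sense of the ELT pattern mentioned above, here is a minimal sketch that runs a transformation entirely inside BigQuery with the google-cloud-bigquery client library; the project, dataset, and table names are hypothetical.

```python
# Minimal sketch: an ELT-style transformation executed inside BigQuery.
from google.cloud import bigquery

client = bigquery.Client()  # uses the active project and credentials by default
job_config = bigquery.QueryJobConfig(
    destination="my-project.analytics.daily_totals",  # hypothetical destination table
    write_disposition="WRITE_TRUNCATE")

sql = """
    SELECT order_date, SUM(amount) AS total_amount
    FROM `my-project.raw.orders`
    GROUP BY order_date
"""
client.query(sql, job_config=job_config).result()  # waits for the query job to finish
```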
The two key components of any data pipeline are data lakes and warehouses. This course highlights use cases for each type of storage and dives into the data lake and warehouse solutions available on Google Cloud in technical detail. It also describes the role of a data engineer and the benefits of a successful data pipeline to business operations, and examines why data engineering should be done in a cloud environment. This is the first course of the Data Engineering on Google Cloud series. After completing this course, enroll in the Building Batch Data Pipelines on Google Cloud course.
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity and access management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
This course helps learners create a study plan for the Professional Data Engineer (PDE) certification exam. Learners explore the breadth and scope of the domains covered in the exam, assess their exam readiness, and create an individual study plan.
The Google Cloud Computing Foundations courses are for individuals with little to no background or experience in cloud computing. They provide an overview of concepts central to cloud basics, big data, and machine learning, and where and how Google Cloud fits in. By the end of the series of courses, learners will be able to articulate these concepts and demonstrate some hands-on skills. The courses should be completed in the following order: 1. Google Cloud Computing Foundations: Cloud Computing Fundamentals 2. Google Cloud Computing Foundations: Infrastructure in Google Cloud 3. Google Cloud Computing Foundations: Networking and Security in Google Cloud 4. Google Cloud Computing Foundations: Data, ML, and AI in Google Cloud
Earn a skill badge by completing the Set Up an App Dev Environment on Google Cloud course, where you learn how to build and connect storage-centric cloud infrastructure using the basic capabilities of the following technologies: Cloud Storage, Identity and Access Management, Cloud Functions, and Pub/Sub. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services; it is earned by demonstrating your ability to apply your knowledge in an interactive, hands-on environment. Complete this skill badge course and the final assessment challenge lab to earn the skill badge and share your skills with your network.
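As a small, hedged illustration of the storage-centric building blocks listed above, the sketch below uploads an object to Cloud Storage with the google-cloud-storage client library; the bucket and object names are hypothetical.

```python
# Minimal sketch: uploading a local file to a Cloud Storage bucket.
from google.cloud import storage

client = storage.Client()                        # uses the active project by default
bucket = client.bucket("my-app-assets")          # hypothetical bucket name
blob = bucket.blob("uploads/report.csv")         # object path inside the bucket
blob.upload_from_filename("report.csv")          # local file to upload
print("Uploaded to gs://my-app-assets/uploads/report.csv")
```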
Organizations of all sizes are embracing the power and flexibility of the cloud to transform how they operate. However, managing and scaling cloud resources effectively can be a complex task. Scaling with Google Cloud Operations explores the fundamental concepts of modern operations, reliability, and resilience in the cloud, and how Google Cloud can help support these efforts. Part of the Cloud Digital Leader learning path, this course aims to help individuals grow in their role and build the future of their business.
Many traditional enterprises use legacy systems and applications that often struggle to achieve the scale and speed needed to meet modern customer expectations. Business leaders and IT decision makers constantly have to choose between maintaining legacy systems and investing in innovative new products and services. This course explores the challenges of an outdated IT infrastructure and how businesses can modernize it using cloud technology. It begins by exploring the different compute options available in the cloud and the benefits of each, before turning to application modernization and Application Programming Interfaces (APIs). The course also considers a range of Google Cloud solutions that can help businesses better develop and manage their systems, such as Compute Engine, App Engine, and Apigee. This is the third course in the Cloud Digital Leader series. At the end of this course, enroll in the Understanding Google Cloud Security and Operations course.
In this course you will learn how to harness serious Google Cloud power and infrastructure. The hands-on labs present real use cases and task you with implementing the scaling practices used by Google's own Solutions Architecture team. From developing enterprise-grade load balancing and autoscaling to building continuous delivery pipelines, Google Cloud Solutions I: Scaling your Infrastructure will teach you best practices for taking your Google Cloud projects to the next level.
Earn a skill badge by completing the intermediate-level Develop Serverless Applications on Cloud Run course, where you demonstrate your skills in the following: integrating Cloud Run with Cloud Storage to manage data, architecting resilient asynchronous systems with Cloud Run and Pub/Sub, building REST API gateways powered by Cloud Run, and building and deploying services on Cloud Run. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services; it is earned by demonstrating your ability to apply your knowledge in an interactive, hands-on environment. Complete this skill badge course and the final assessment challenge lab to earn the skill badge and share your achievement with your network.
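To give a flavor of the kind of service the course has you build and deploy, here is a minimal sketch of an HTTP service that can be containerized for Cloud Run; Flask and the endpoint shown are illustrative assumptions, not course requirements.

```python
# Minimal sketch: a small HTTP service suitable for containerizing and deploying to Cloud Run.
import os
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/orders/<order_id>")
def get_order(order_id):
    # A real REST API gateway would look this up in a backing service.
    return jsonify({"order_id": order_id, "status": "pending"})

if __name__ == "__main__":
    # Cloud Run injects the port to listen on via the PORT environment variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```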
Earn a skill badge by completing the intermediate-level Engineer Data for Predictive Modeling with BigQuery ML course, where you demonstrate your skills in the following: building data transformation pipelines to BigQuery using Dataprep by Trifacta; building extract, transform, and load (ETL) workflows using Cloud Storage, Dataflow, and BigQuery; building machine learning models with BigQuery ML; and copying data across multiple locations using Cloud Composer. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services; it is earned by demonstrating your ability to apply your knowledge in an interactive, hands-on environment. Complete this skill badge course and the final assessment challenge lab to earn the digital badge and share your skills with your network.
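As a brief illustration of the BigQuery ML step described above, the sketch below submits a CREATE MODEL statement through the Python client library; the model type, dataset, and table names are hypothetical.

```python
# Minimal sketch: training a BigQuery ML model with a CREATE MODEL statement.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
    CREATE OR REPLACE MODEL `my-project.sales.purchase_model`
    OPTIONS(model_type='logistic_reg', input_label_cols=['purchased']) AS
    SELECT purchased, visits, time_on_site, country
    FROM `my-project.sales.visitor_stats`
"""
client.query(sql).result()  # training runs as a BigQuery job; result() waits for completion
```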
Earn a skill badge by completing the intermediate-level Build Infrastructure with Terraform on Google Cloud course, where you demonstrate your skills in the following: following Infrastructure as Code (IaC) principles when working with Terraform; provisioning and managing Google Cloud resources with Terraform configurations; managing state effectively (local and remote); and modularizing Terraform code for reuse and organization. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services; it is earned by demonstrating your ability to apply your knowledge in an interactive, hands-on environment. Complete this skill badge course and the final assessment challenge lab to earn the skill badge and share your skills with your network.
Earn a skill badge by completing the intermediate-level Build a Data Warehouse with BigQuery course, where you demonstrate the following skills: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services; it is earned by demonstrating your ability to apply your knowledge in an interactive, hands-on environment. Complete this skill badge course and the final assessment challenge lab to earn the digital badge and share your skills with your network.
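To give a flavor of the date-partitioning and array/struct features listed above, here is a minimal sketch using the google-cloud-bigquery client library; all table and field names are hypothetical.

```python
# Minimal sketch: creating a date-partitioned table and querying ARRAY/STRUCT fields in BigQuery.
from google.cloud import bigquery

client = bigquery.Client()

# Create a date-partitioned table from a query result.
client.query("""
    CREATE OR REPLACE TABLE `my-project.warehouse.orders_partitioned`
    PARTITION BY DATE(order_timestamp) AS
    SELECT * FROM `my-project.raw.orders`
""").result()

# Query a STRUCT inside a repeated (ARRAY) field with UNNEST.
rows = client.query("""
    SELECT order_id, item.sku, item.quantity
    FROM `my-project.warehouse.orders_partitioned`,
         UNNEST(line_items) AS item
""").result()
for row in rows:
    print(row.order_id, row.sku, row.quantity)
```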