
Francisco Colomer

Member since 2023

Silver League

30035 points
Badge: Machine Learning in the Enterprise (Earned Sep 29, 2024 EDT)
Badge: How Google Does Machine Learning (Earned May 29, 2024 EDT)
Badge: Engineer Data for Predictive Modeling with BigQuery ML (Earned Nov 15, 2023 EST)
Badge: Build a Data Warehouse with BigQuery (Earned Nov 14, 2023 EST)
Badge: Prepare Data for ML APIs on Google Cloud (Earned Nov 13, 2023 EST)
Badge: Serverless Data Processing with Dataflow: Operations (Earned Nov 9, 2023 EST)
Badge: Serverless Data Processing with Dataflow: Develop Pipelines (Earned Nov 4, 2023 EDT)
Badge: Serverless Data Processing with Dataflow: Foundations (Earned Oct 26, 2023 EDT)
Badge: Smart Analytics, Machine Learning, and AI on Google Cloud (Earned Oct 23, 2023 EDT)
Badge: Building Batch Data Pipelines on Google Cloud (Earned Oct 22, 2023 EDT)
Badge: Building Resilient Streaming Analytics Systems on Google Cloud (Earned Oct 20, 2023 EDT)
Badge: Modernizing Data Lakes and Data Warehouses with Google Cloud (Earned Oct 14, 2023 EDT)
Badge: Google Cloud Big Data and Machine Learning Fundamentals (Earned Oct 12, 2023 EDT)
Badge: Preparing for your Professional Data Engineer Journey (Earned Oct 4, 2023 EDT)

This course takes a real-world approach to the ML workflow through a case study. An ML team faces several ML business requirements and use cases. The team must understand the tools required for data management and governance and consider the best approach for data preprocessing. The team is presented with three options for building ML models for two use cases, and the course explains why they would use AutoML, BigQuery ML, or custom training to achieve their objectives.


This course explores what ML is and what problems it can solve. The course also discusses best practices for implementing machine learning. You’re introduced to Vertex AI, a unified platform to quickly build, train, and deploy AutoML machine learning models. The course discusses the five phases of converting a candidate use case to be driven by machine learning, and why it’s important not to skip them. It ends by examining the biases that ML can amplify and how to recognize them.


With the skill badge for the course Engineer Data for Predictive Modeling with BigQuery ML, you demonstrate advanced skills in the following areas: building data transformation pipelines into BigQuery using Dataprep by Trifacta; building extract, transform, and load (ETL) workflows with Cloud Storage, Dataflow, and BigQuery; building machine learning models with BigQuery ML; and copying data across locations with Cloud Composer. A skill badge is an exclusive digital credential issued by Google Cloud that attests to your knowledge of Google Cloud products and services. It also assesses your ability to apply that knowledge in an interactive, hands-on environment. Complete the course-specific series of labs and the challenge lab assessment to earn a skill badge you can share with your network.
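
As a rough, unofficial illustration of the BigQuery ML skill, the sketch below trains a linear regression model by sending a CREATE MODEL statement through the Python BigQuery client. The dataset, table, and column names (demo.taxi_trips, fare_amount, and so on) are hypothetical placeholders, not course materials.

```python
# Minimal sketch: training a BigQuery ML model from Python.
# Dataset/table/column names (demo.fare_model, demo.taxi_trips, fare_amount)
# are hypothetical; substitute your own project resources.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
CREATE OR REPLACE MODEL `demo.fare_model`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['fare_amount']) AS
SELECT trip_distance, passenger_count, fare_amount
FROM `demo.taxi_trips`
WHERE fare_amount IS NOT NULL
"""

client.query(query).result()  # blocks until the training job finishes
print("Model demo.fare_model trained.")
```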


With the skill badge for the course Build a Data Warehouse with BigQuery, you demonstrate advanced skills in the following areas: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery. A skill badge is an exclusive digital credential issued by Google Cloud that attests to your knowledge of its products and services. It also assesses your ability to apply that knowledge in a hands-on business scenario. Complete the course-specific series of labs and the challenge lab assessment to earn a skill badge you can share with your network.
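
For a flavor of the date-partitioning topic, here is a minimal sketch that creates a date-partitioned BigQuery table with the Python client; the project, dataset, and field names are invented for illustration.

```python
# Minimal sketch: creating a date-partitioned BigQuery table.
# Table and field names are hypothetical examples.
from google.cloud import bigquery

client = bigquery.Client()

table = bigquery.Table(
    "my-project.demo.orders",
    schema=[
        bigquery.SchemaField("order_id", "STRING"),
        bigquery.SchemaField("order_date", "DATE"),
        bigquery.SchemaField("amount", "NUMERIC"),
    ],
)
# Partition by the order_date column so queries can prune old partitions.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="order_date",
)
client.create_table(table)
```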


With the skill badge for the course Prepare Data for ML APIs on Google Cloud, you demonstrate foundational skills in the following areas: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs, including the Cloud Natural Language API, Cloud Speech-to-Text API, and Video Intelligence API. A skill badge is an exclusive digital credential issued by Google Cloud that attests to your knowledge of its products and services. It also assesses your ability to apply that knowledge in an interactive, hands-on business scenario. Complete the course-specific series of labs and the challenge lab assessment to earn a skill badge you can share with your network.
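
To illustrate what calling one of these ML APIs looks like, here is a small sketch using the Cloud Natural Language client library for sentiment analysis; the input text is an arbitrary example.

```python
# Minimal sketch: calling the Cloud Natural Language API for sentiment.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content="The new dashboard is fast and easy to use.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
response = client.analyze_sentiment(request={"document": document})
sentiment = response.document_sentiment
print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")
```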


In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
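
As a loose illustration of how Templates let many users run pipelines without touching the code, the sketch below launches a classic template through the Dataflow REST API from Python. The project, bucket, and job names are placeholders, and the template path and parameters are assumptions based on the public Word_Count sample template.

```python
# Hedged sketch: launching a classic Dataflow template from Python.
# Requires google-api-python-client; all names below are placeholders,
# and the template path/parameters assume the public Word_Count sample.
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")

request = dataflow.projects().templates().launch(
    projectId="my-project",
    gcsPath="gs://dataflow-templates/latest/Word_Count",
    body={
        "jobName": "wordcount-from-template",
        "parameters": {
            "inputFile": "gs://dataflow-samples/shakespeare/kinglear.txt",
            "output": "gs://my-bucket/wordcount/output",
        },
    },
)
response = request.execute()
print(response["job"]["id"])
```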


In this second installment of the Dataflow course series, we dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize pipeline performance. Toward the end of the course, we introduce SQL and DataFrames as ways to represent business logic in Beam, and show how to develop pipelines iteratively using Beam notebooks.
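
To ground the windowing discussion, here is a minimal, self-contained Beam sketch that applies one-minute fixed windows with a watermark-based trigger; the keyed input data is invented for illustration.

```python
# Minimal sketch of Beam windowing with a watermark-based trigger.
# The input collection and element format are invented for illustration.
import apache_beam as beam
from apache_beam.transforms import window, trigger

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create([("user1", 1), ("user2", 1), ("user1", 1)])
        | "Window" >> beam.WindowInto(
            window.FixedWindows(60),  # one-minute fixed windows
            trigger=trigger.AfterWatermark(
                late=trigger.AfterProcessingTime(30)  # re-fire for late data
            ),
            accumulation_mode=trigger.AccumulationMode.DISCARDING,
            allowed_lateness=300,  # accept data up to five minutes late
        )
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```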


This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework, which lets a developer use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
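
As a small configuration sketch (with placeholder project, region, and bucket values), this is roughly how a Beam pipeline is pointed at the managed Dataflow service, with temporary files kept in Cloud Storage rather than on workers, reflecting the compute/storage separation the course describes.

```python
# Hedged sketch: configuring a Beam pipeline to run on the Dataflow service.
# Project, region, and bucket values are placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",             # execute on the managed service
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",  # staging/temp files in Cloud Storage
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | beam.Create(["hello", "dataflow"])
        | beam.Map(str.upper)
        | beam.Map(print)
    )
```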


Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For use cases that require little to no customization, the course covers AutoML. For more tailored machine learning capabilities, it introduces Notebooks and BigQuery machine learning (BigQuery ML). The course also covers how to productionize machine learning solutions by using Vertex AI.
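
For a hedged example of the AutoML option, the sketch below uses the Vertex AI Python SDK to train an AutoML tabular classification model; the project, data source, and column names are hypothetical.

```python
# Hedged sketch: training an AutoML tabular model with the Vertex AI SDK.
# Project, dataset source, and column names are hypothetical.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

dataset = aiplatform.TabularDataset.create(
    display_name="churn-data",
    gcs_source="gs://my-bucket/churn.csv",
)

job = aiplatform.AutoMLTabularTrainingJob(
    display_name="churn-automl",
    optimization_prediction_type="classification",
)

model = job.run(
    dataset=dataset,
    target_column="churned",
    budget_milli_node_hours=1000,  # one node-hour training budget
)
print(model.resource_name)
```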


Data pipelines typically fall under one of the Extract and Load (EL), Extract, Load, and Transform (ELT), or Extract, Transform, and Load (ETL) paradigms. This course describes which paradigm should be used, and when, for batch data. It also covers several Google Cloud technologies for data transformation, including BigQuery, executing Spark on Dataproc, pipeline graphs in Cloud Data Fusion, and serverless data processing with Dataflow. Learners get hands-on experience building data pipeline components on Google Cloud using Qwiklabs.
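
As one concrete instance of executing Spark on Dataproc, here is a hedged sketch that submits a PySpark job to an existing Dataproc cluster from Python; the cluster, bucket, and file names are placeholders.

```python
# Hedged sketch: submitting a PySpark job to an existing Dataproc cluster.
# Cluster, bucket, and file names are placeholders.
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = {
    "placement": {"cluster_name": "my-cluster"},
    "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/transform.py"},
}
operation = client.submit_job_as_operation(
    request={"project_id": "my-project", "region": region, "job": job}
)
result = operation.result()  # wait for the job to finish
print(f"Job {result.reference.job_id} ended in state {result.status.state.name}")
```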


Processing streaming data is becoming increasingly popular, as streaming enables businesses to get real-time metrics on business operations. This course covers how to build streaming data pipelines on Google Cloud. It describes Pub/Sub for handling incoming streaming data, how to apply aggregations and transformations to streaming data using Dataflow, and how to store processed records in BigQuery or Bigtable for analysis. Learners get hands-on experience building streaming data pipeline components on Google Cloud by using Qwiklabs.
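
To sketch the Pub/Sub-to-Dataflow-to-BigQuery path described above, here is a minimal streaming Beam pipeline; the subscription, table, and schema are hypothetical placeholders.

```python
# Hedged sketch: a streaming Beam pipeline from Pub/Sub into BigQuery.
# Subscription, table, and schema are hypothetical placeholders.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/events-sub"
        )
        | "Parse" >> beam.Map(json.loads)  # each message is a JSON object
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:demo.events",
            schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```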


The two key components of any data pipeline are data lakes and warehouses. This course highlights use cases for each type of storage and dives into the available data lake and warehouse solutions on Google Cloud in technical detail. It also describes the role of a data engineer and the benefits of a successful data pipeline to business operations, and examines why data engineering should be done in a cloud environment. This is the first course of the Data Engineering on Google Cloud series. After completing this course, enroll in the Building Batch Data Pipelines on Google Cloud course.
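
As a simple illustration of moving data from a lake to a warehouse, the sketch below loads CSV files from Cloud Storage into a BigQuery table; the URIs and table names are invented.

```python
# Hedged sketch: promoting raw files from a Cloud Storage data lake into a
# BigQuery warehouse table. URIs and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer the schema from the files
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/raw/sales_*.csv",   # files in the data lake
    "my-project.warehouse.sales",       # destination warehouse table
    job_config=job_config,
)
load_job.result()  # wait for the load to complete
print(f"Loaded {client.get_table('my-project.warehouse.sales').num_rows} rows.")
```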


This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.


This course helps learners create a study plan for the Professional Data Engineer (PDE) certification exam. Learners explore the breadth and scope of the domains covered in the exam, assess their exam readiness, and create an individual study plan.
