Lex Xai
Member since 2022
This course is an introduction to Vertex AI Notebooks, which are Jupyter notebook-based environments that provide a unified platform for the entire machine learning workflow, from data preparation to model deployment and monitoring. The course covers the following topics: (1) The different types of Vertex AI Notebooks and their features and (2) How to create and manage Vertex AI Notebooks.
Complete the introductory Prepare Data for ML APIs on Google Cloud skill badge to demonstrate skills in the following: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course, and the final assessment challenge lab, to receive a skill badge that you can share with your network.
This course introduces concepts of AI interpretability and transparency. It discusses the importance of AI transparency for developers and engineers. It explores practical methods and tools to help achieve interpretability and transparency in both data and AI models.
This course introduces concepts of responsible AI and AI principles. It covers techniques to practically identify fairness and bias and mitigate bias in AI/ML practices. It explores practical methods and tools to implement Responsible AI best practices using Google Cloud products and open source tools.
This course helps learners create a study plan for the Professional Machine Learning Engineer (PMLE) certification exam. Learners explore the breadth and scope of the domains covered in the exam. Learners assess their exam readiness and create their individual study plan.
Generative AI applications can create new user experiences that were nearly impossible before the invention of large language models (LLMs). As an application developer, how can you use generative AI to build engaging, powerful apps on Google Cloud? In this course, you'll learn about generative AI applications and how you can use prompt design and retrieval-augmented generation (RAG) to build powerful applications using LLMs. You'll learn about a production-ready architecture for generative AI applications, and you'll build an LLM- and RAG-based chat application.
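As an illustration of the RAG pattern this course builds toward, here is a minimal, self-contained Python sketch. The keyword-based retrieve function and the call_llm stub are hypothetical stand-ins for a real embedding-based retriever and a hosted LLM endpoint.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# retrieve() and call_llm() are hypothetical placeholders; a real app would use
# an embedding model for retrieval and call a hosted LLM for generation.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model's answer in the retrieved context (prompt design)."""
    context_block = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{context_block}\n\nQuestion: {query}"

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM API call here.
    return f"[LLM response to a prompt of {len(prompt)} characters]"

docs = [
    "Cloud Run deploys containers without managing servers.",
    "BigQuery is a serverless data warehouse.",
]
question = "How do I deploy a container?"
context = retrieve(question, docs)
print(call_llm(build_prompt(question, context)))
```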
This course introduces Vertex AI Vector Search and describes how it can be used to build a search application with large language model (LLM) APIs for embeddings. The course consists of conceptual lessons on vector search and text embeddings, practical demos on how to build vector search on Vertex AI, and a hands-on lab.
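To make the vector search idea concrete, the toy sketch below ranks documents by cosine similarity between embeddings. The vectors are made up for illustration; a real application would obtain embeddings from an LLM embedding API and serve lookups with Vertex AI Vector Search.

```python
# Toy vector-search sketch: nearest-neighbor lookup over embeddings.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional "embeddings" for three documents.
doc_embeddings = {
    "returns policy": np.array([0.9, 0.1, 0.0, 0.2]),
    "shipping times": np.array([0.1, 0.8, 0.3, 0.0]),
    "gift cards":     np.array([0.0, 0.2, 0.9, 0.1]),
}
query_embedding = np.array([0.85, 0.15, 0.05, 0.1])  # e.g. "how do I return an item?"

# Rank documents by similarity to the query vector.
ranked = sorted(doc_embeddings.items(),
                key=lambda kv: cosine_similarity(query_embedding, kv[1]),
                reverse=True)
print(ranked[0][0])  # -> "returns policy"
```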
Earn the intermediate skill badge by completing the Build and Deploy Machine Learning Solutions with Vertex AI course, where you will learn how to use Google Cloud's Vertex AI platform, AutoML, and custom training services to train, evaluate, tune, explain, and deploy machine learning models. This skill badge course is for professional Data Scientists and Machine Learning Engineers. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge course, and the final assessment challenge lab, to receive a digital badge that you can share with your network.
In this course, you will be learning from ML Engineers and Trainers who work with the state-of-the-art development of ML pipelines here at Google Cloud. The first few modules cover TensorFlow Extended (or TFX), which is Google's production machine learning platform, based on TensorFlow, for managing ML pipelines and metadata. You will learn about pipeline components and pipeline orchestration with TFX. You will also learn how you can automate your pipeline through continuous integration and continuous deployment, and how to manage ML metadata. Then we will change focus to discuss how we can automate and reuse ML pipelines across multiple ML frameworks such as TensorFlow, PyTorch, scikit-learn, and XGBoost. You will also learn how to use another tool on Google Cloud, Cloud Composer, to orchestrate your continuous training pipelines. And finally, we will go over how to use MLflow for managing the complete machine learning life cycle.
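For a rough idea of what wiring TFX components into a pipeline looks like, here is a minimal sketch assuming TFX 1.x. The bucket paths are placeholders, and a production setup would typically swap the local runner for Vertex AI Pipelines or Cloud Composer.

```python
# Minimal TFX pipeline sketch (assumes TFX 1.x; paths are placeholders).
from tfx import v1 as tfx

DATA_ROOT = "gs://my-bucket/data"          # hypothetical CSV location
PIPELINE_ROOT = "gs://my-bucket/pipeline"  # hypothetical artifact store

# Ingest examples, then compute statistics over them.
example_gen = tfx.components.CsvExampleGen(input_base=DATA_ROOT)
statistics_gen = tfx.components.StatisticsGen(
    examples=example_gen.outputs["examples"])

pipeline = tfx.dsl.Pipeline(
    pipeline_name="demo-pipeline",
    pipeline_root=PIPELINE_ROOT,
    components=[example_gen, statistics_gen],
)

# Run locally; an orchestrator such as Vertex AI Pipelines or Cloud Composer
# would be used for continuous training in production.
tfx.orchestration.LocalDagRunner().run(pipeline)
```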
This course equips machine learning practitioners with the essential tools, techniques, and best practices for evaluating both generative and predictive AI models. Model evaluation is a critical discipline for ensuring that ML systems deliver reliable, accurate, and high-performing results in production. Participants will gain a deep understanding of various evaluation metrics, methodologies, and their appropriate application across different model types and tasks. The course will emphasize the unique challenges posed by generative AI models and provide strategies for tackling them effectively. By leveraging Google Cloud's Vertex AI platform, participants will learn how to implement robust evaluation processes for model selection, optimization, and continuous monitoring.
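For the predictive side of model evaluation, a minimal sketch like the following (using scikit-learn with made-up labels and scores) shows the kind of metrics involved; generative models require different, task-specific evaluation approaches.

```python
# Hedged sketch: common evaluation metrics for a binary classifier.
# The labels and scores below are invented purely for illustration.
from sklearn.metrics import precision_score, recall_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # ground-truth labels
y_score = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3]   # model probabilities
y_pred = [1 if s >= 0.5 else 0 for s in y_score]     # thresholded decisions

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("ROC AUC:  ", roc_auc_score(y_true, y_score))
```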
This course is dedicated to equipping you with the knowledge and tools needed to uncover the unique challenges faced by MLOps teams when deploying and managing Generative AI models, and exploring how Vertex AI empowers AI teams to streamline MLOps processes and achieve success in Generative AI projects.
This is an introductory-level micro-learning course that explores what large language models (LLMs) are, the use cases where they can be utilized, and how you can use prompt tuning to enhance LLM performance. It also covers Google tools to help you develop your own Gen AI apps.
This is an introductory-level micro-learning course aimed at explaining what Generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools to help you develop your own Gen AI apps.
This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Learners will get hands-on practice using Vertex AI Feature Store's streaming ingestion at the SDK layer.
This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Machine Learning Engineering professionals use tools for continuous improvement and evaluation of deployed models. They work with (or can be) Data Scientists, who develop models, to enable velocity and rigor in deploying the best performing models.
In this course, you apply your knowledge of classification models and embeddings to build a ML pipeline that functions as a recommendation engine. This is the fifth and final course of the Advanced Machine Learning on Google Cloud series.
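As a toy illustration of how embeddings drive recommendations, the sketch below scores items for a user with a dot product over random user and item vectors; in an actual pipeline these embeddings would be learned from data.

```python
# Toy embedding-based recommender: score items by dot product with a user vector.
# Sizes and the random embeddings are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
NUM_USERS, NUM_ITEMS, DIM = 100, 50, 8

user_embeddings = rng.normal(size=(NUM_USERS, DIM))
item_embeddings = rng.normal(size=(NUM_ITEMS, DIM))

def recommend(user_id: int, k: int = 3) -> list[int]:
    """Return the k item IDs with the highest affinity scores for this user."""
    scores = item_embeddings @ user_embeddings[user_id]
    return list(np.argsort(scores)[::-1][:k])

print(recommend(user_id=7))
```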
This course introduces the products and solutions to solve NLP problems on Google Cloud. Additionally, it explores the processes, techniques, and tools to develop an NLP project with neural networks by using Vertex AI and TensorFlow.
This course describes different types of computer vision use cases and then highlights different machine learning strategies for solving these use cases. The strategies vary from experimenting with pre-built ML models through pre-built ML APIs and AutoML Vision to building custom image classifiers using linear models, deep neural network (DNN) models or convolutional neural network (CNN) models. The course shows how to improve a model's accuracy with augmentation, feature extraction, and fine-tuning hyperparameters while trying to avoid overfitting the data. The course also looks at practical issues that arise, for example, when one doesn't have enough data and how to incorporate the latest research findings into different models. Learners will get hands-on practice building and optimizing their own image classification models on a variety of public datasets in the labs they will work on.
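To give a flavor of the custom-model path, here is a small Keras CNN with built-in augmentation layers of the kind used to reduce overfitting; the input shape and number of classes are placeholder assumptions.

```python
# Small Keras CNN with data augmentation layers (shapes are assumptions).
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical number of labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    # Augmentation layers are active only during training and help reduce overfitting.
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```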
This course covers how to implement the various flavors of production ML systems: static, dynamic, and continuous training; static and dynamic inference; and batch and online processing. You delve into TensorFlow abstraction levels, the various options for doing distributed training, and how to write distributed training models with custom estimators. This is the second course of the Advanced Machine Learning on Google Cloud series. After completing this course, enroll in the Image Understanding with TensorFlow on Google Cloud course.
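As one concrete distributed-training option, the sketch below wraps a small Keras model in tf.distribute.MirroredStrategy; it is shown with Keras rather than the Estimator API, and the data and shapes are dummies.

```python
# Sketch of synchronous data-parallel training with tf.distribute.MirroredStrategy.
# Model architecture, shapes, and data are placeholder assumptions.
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # replicates the model across local devices
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(10,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# fit() works unchanged under the strategy; batches are split across replicas.
x = np.random.rand(256, 10).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=1, batch_size=32)
```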
This course takes a real-world approach to the ML Workflow through a case study. An ML team faces several ML business requirements and use cases. The team must understand the tools required for data management and governance and consider the best approach for data preprocessing. The team is presented with three options to build ML models for two use cases. The course explains why they would use AutoML, BigQuery ML, or custom training to achieve their objectives.
This course explores the benefits of using Vertex AI Feature Store, how to improve the accuracy of ML models, and how to find which data columns make the most useful features. This course also includes content and labs on feature engineering using BigQuery ML, Keras, and TensorFlow.
This course covers building ML models with TensorFlow and Keras, improving the accuracy of ML models and writing ML models for scaled use.
The course begins with a discussion about data: how to improve data quality and perform exploratory data analysis. We describe Vertex AI AutoML and how to build, train, and deploy an ML model without writing a single line of code. You will understand the benefits of BigQuery ML. We then discuss how to optimize a machine learning (ML) model and how generalization and sampling can help assess the quality of ML models for custom training.
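For a sense of how BigQuery ML fits in, here is a hedged sketch that submits a CREATE MODEL statement through the google-cloud-bigquery Python client; the project, dataset, table, and column names are placeholders.

```python
# Hedged sketch: training a BigQuery ML model from Python.
# Project, dataset, table, and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# BigQuery ML trains a logistic regression model directly with SQL.
query = """
CREATE OR REPLACE MODEL `my_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT tenure_months, monthly_charges, churned
FROM `my_dataset.customers`
"""
client.query(query).result()  # wait for the training job to finish
```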
This course introduces the AI and machine learning (ML) offerings on Google Cloud that build both predictive and generative AI projects. It explores the technologies, products, and tools available throughout the data-to-AI life cycle, encompassing AI foundations, development, and solutions. It aims to help data scientists, AI developers, and ML engineers enhance their skills and knowledge through engaging learning experiences and practical hands-on exercises.
Complete the introductory Implement Load Balancing on Compute Engine skill badge to demonstrate skills in the following: writing gcloud commands and using Cloud Shell, creating and deploying virtual machines in Compute Engine, and configuring network and HTTP load balancers. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge, and the final assessment challenge lab, to receive a skill badge that you can share with your network.
This course equips students to build highly reliable and efficient solutions on Google Cloud using proven design patterns. It is a continuation of the Architecting with Google Compute Engine or Architecting with Google Kubernetes Engine courses and assumes hands-on experience with the technologies covered in either of those courses. Through a combination of presentations, design activities, and hands-on labs, participants learn to define and balance business and technical requirements to design Google Cloud deployments that are highly reliable, highly available, secure, and cost-effective.
In many IT organizations, incentives are not aligned between developers, who strive for agility, and operators, who focus on stability. Site reliability engineering, or SRE, is how Google aligns incentives between development and operations and does mission-critical production support. Adoption of SRE cultural and technical practices can help improve collaboration between the business and IT. This course introduces key practices of Google SRE and the important role IT and business leaders play in the success of SRE organizational adoption.
Google Cloud Fundamentals: Core Infrastructure introduces important concepts and terminology for working with Google Cloud. Through videos and hands-on labs, this course presents and compares many of Google Cloud's computing and storage services, along with important resource and policy management tools.
Earn a skill badge by completing the Set Up an App Dev Environment on Google Cloud course, where you learn how to build and connect storage-centric cloud infrastructure using the basic capabilities of the following technologies: Cloud Storage, Identity and Access Management, Cloud Functions, and Pub/Sub. A skill badge is an exclusive digital badge issued by Google Cloud in recognition of your proficiency with Google Cloud products and services and tests your ability to apply your knowledge in an interactive hands-on environment. Complete this skill badge, and the final assessment challenge lab, to receive a skill badge that you can share with your network.
In this introductory-level Quest, you will get hands-on practice with Google Cloud's fundamental tools and services. Google Cloud Essentials is the recommended first Quest for the Google Cloud learner: you will come in with little or no prior cloud knowledge, and come out with practical experience that you can apply to your first Google Cloud project. From writing Cloud Shell commands and deploying your first virtual machine, to running applications on Kubernetes Engine or with load balancing, Google Cloud Essentials is a prime introduction to the platform's basic features. 1-minute videos walk you through key concepts for each lab.