
Before you begin
- The lab creates a Google Cloud project and resources for a fixed period of time
- Labs have a time limit and no pause feature. If you end the lab, you must start over from the beginning.
- In the upper-left corner of your screen, click Start Lab to begin
- Use Terraform to set up the necessary infrastructure (50 points)
- View Logs in BigQuery (50 points)
Cloud Logging can be used to aggregate logs from all Google Cloud resources, as well as from custom resources on other platforms, to allow for one centralized store for all logs and metrics. Logs are aggregated and then viewable within the provided Cloud Logging UI. They can also be exported to sinks to support more specialized use cases. Currently, Cloud Logging supports exporting to the following sinks:
- Cloud Storage
- BigQuery
- Cloud Pub/Sub
In this lab you will deploy a sample application to Kubernetes Engine that forwards log events to Cloud Logging using Terraform, a declarative Infrastructure as Code tool that enables configuration files to automate the deployment and evolution of infrastructure in the cloud. The configuration will also create a Cloud Storage bucket and a BigQuery dataset for exporting log data to.
This lab was created by GKE Helmsman engineers to give you a better understanding of Cloud Logging. You can view this demo by running the gsutil cp -r gs://spls/gke-binary-auth/* . and cd gke-binary-auth-demo commands in Cloud Shell. We encourage any and all to contribute to our assets!
The Terraform configurations are going to build a Kubernetes Engine cluster that will generate logs and metrics that can be ingested by Stackdriver. The scripts will also build out Logging Export Sinks for Cloud Storage, BigQuery, and Cloud Pub/Sub.
The following graphic shows the architecture and the data flow:
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method. On the left is the Lab Details pane with the following:
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
If necessary, copy the Username below and paste it into the Sign in dialog.
You can also find the Username in the Lab Details pane.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
You can also find the Password in the Lab Details pane.
Click Next.
Click through the subsequent pages:
After a few moments, the Google Cloud console opens in this tab.
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell at the top of the Google Cloud console.
Click through the following windows:
When you are connected, you are already authenticated, and the project is set to your Project_ID.
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
For more information about gcloud, refer to the gcloud CLI overview guide.
Certain Compute Engine resources live in regions and zones. A region is a specific geographical location where you can run your resources. Each region has one or more zones.
Run the following to set a region and zone for your lab (you can use the region/zone that's best for you):
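The original command is not reproduced here; a minimal sketch using gcloud config set, where the region and zone values are examples only (substitute the ones assigned to your lab):

```bash
# Set a default compute region and zone for subsequent gcloud commands.
# us-central1 / us-central1-a are example values; use your lab's.
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
```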
Following the principles of Infrastructure as Code and Immutable Infrastructure, Terraform supports the writing of declarative descriptions of the desired state of infrastructure. When the descriptor is applied, Terraform uses Google Cloud APIs to provision and update resources to match.
Terraform compares the desired state with the current state so incremental changes can be made without deleting everything and starting over. For instance, Terraform can build out Google Cloud projects and compute instances, etc., even set up a Kubernetes Engine cluster and deploy applications to it. When requirements change, the descriptor can be updated and Terraform will adjust the cloud infrastructure accordingly.
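As a general illustration of that compare-and-apply workflow (this lab wraps these steps in a Makefile later on), the core Terraform CLI cycle looks like this:

```bash
# Standard Terraform cycle: initialize, preview the incremental
# diff against current state, then apply it.
terraform init   # download the configured providers and modules
terraform plan   # show what would change, without changing anything
terraform apply  # create/update resources to match the descriptors
```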
This lab will start up a Kubernetes Engine cluster and deploy a simple sample application to it. By default, Kubernetes Engine clusters in Google Cloud are provisioned with a pre-configured Fluentd-based collector that forwards logs to Cloud Logging. Interacting with the sample app will produce logs that are visible in Cloud Logging and in other log event sinks.
Remove the Terraform provider version from the provider.tf script file.
Click Open Editor, then click Explorer > Open folder, and then click Ok.
From the left-hand menu, open the file /gke-logging-sinks-demo/terraform/provider.tf.
Update the file with the terraform block and set the version to ~> 2.19.0. After modification, your provider.tf script file should look like the following:
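A sketch of the updated file; the provider arguments (project, and the exact required_providers shape) are assumptions based on this demo's conventions, so check your copy for the precise attributes:

```hcl
# Sketch only: pin the Google provider to the 2.19.x series
# via a terraform block instead of a version in the provider block.
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 2.19.0"
    }
  }
}

# The project argument is an assumption drawn from this demo's
# variables.tf; your copy may differ.
provider "google" {
  project = var.project
}
```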
There are three Terraform files provided with this lab example.
The first one, main.tf, is the starting point for Terraform. It describes the features that will be used, the resources that will be manipulated, and the outputs that will result.
The second file is provider.tf, which indicates which cloud provider and version will be the target of the Terraform commands; in this case, Google Cloud.
The final file is variables.tf, which contains a list of variables that are used as inputs into Terraform. Any variables referenced in main.tf that do not have defaults configured in variables.tf will result in prompts to the user at runtime, as the sketch after this paragraph illustrates.
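A hypothetical illustration of that prompting behavior (these variable names are examples, not necessarily the demo's):

```hcl
# A variable with a default is used silently...
variable "region" {
  description = "Region to deploy resources into"
  default     = "us-central1"
}

# ...while one without a default triggers an interactive prompt
# (or must be passed with -var or a .tfvars file).
variable "project" {
  description = "The project ID to deploy into"
}
```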
You will make one small change to main.tf. From the left-hand menu, open the file /gke-logging-sinks-demo/terraform/main.tf.
Scroll down to line 110 and find the "Create the Stackdriver Export Sink for Cloud Storage GKE Notifications" section.
Change the filter's resource.type from container to k8s_container. Do the same for the bigquery-sink on line 119. Ensure that these two export sink sections look like the following before moving on:
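A sketch of how those two sections might read after the edit; the resource names and destination references are assumptions drawn from this demo's conventions, so treat the filter line as the part that matters:

```hcl
// Create the Stackdriver Export Sink for Cloud Storage GKE Notifications
resource "google_logging_project_sink" "storage-sink" {
  name        = "gke-storage-sink"
  destination = "storage.googleapis.com/${google_storage_bucket.gke-log-bucket.name}"
  filter      = "resource.type = k8s_container"
}

// Create the Stackdriver Export Sink for BigQuery GKE Notifications
resource "google_logging_project_sink" "bigquery-sink" {
  name        = "gke-bigquery-sink"
  destination = "bigquery.googleapis.com/projects/${var.project}/datasets/${google_bigquery_dataset.gke-bigquery-dataset.dataset_id}"
  filter      = "resource.type = k8s_container"
}
```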
Save and close the file.
Now run the following command to build out the executable environment using the make command:
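The exact invocation is not reproduced here; based on the demo repository's Makefile, it is likely the create target:

```bash
# Provision the cluster, sinks, bucket, and dataset via the
# Makefile wrapper around Terraform (target name assumed).
make create
```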
Click Check my progress to verify the task you performed. If you have successfully deployed the necessary infrastructure with Terraform, you will see an assessment score.
If no errors are displayed during deployment, after a few minutes you should see your Kubernetes Engine cluster in the Cloud Console.
Go to Navigation menu > Kubernetes Engine > Clusters to see the cluster with the sample application deployed.
To validate that the demo deployed correctly, run:
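A sketch, assuming the demo Makefile's validate target:

```bash
# Check that the cluster and the sample application came up
# (target name assumed from the demo's Makefile).
make validate
```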
If the deployment succeeded, the output confirms that the cluster and the sample application are up.
Now that the application is deployed to Kubernetes Engine you can generate log data and use Cloud Logging and other tools to view it.
The sample application that Terraform deployed serves up a simple web page.
Each time you open this application in your browser, it publishes log events to Cloud Logging. Refresh the page a few times to produce several log events.
To get the URL for the application page, look up the sample application service's external IP:Port value in the console (for example, under Navigation menu > Kubernetes Engine > Services & Ingress); you can also look it up from Cloud Shell, as sketched below.
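One way to find that value from the command line:

```bash
# The sample app's service row shows EXTERNAL-IP and PORT(S);
# combined, they form the IP:Port URL to open in the browser.
kubectl get services
```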
Open a new browser tab and paste in the IP:Port URL value. The browser should return the sample application's page.
Cloud Logging provides a UI for viewing log events. Basic search and filtering features are provided, which can be useful when debugging system issues.
Cloud Logging is best suited to exploring more recent log events. Users requiring longer-term storage of log events should consider some of the tools you'll explore in the following sections.
To access the Cloud Logging console, perform the following steps:
On the Logging console, you can build queries using Query builder, or try out various features like log fields, time zone, etc.
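For example, you can scope results to the cluster's container logs; the same filter syntax the Query builder accepts also works from Cloud Shell with gcloud logging read:

```bash
# Fetch the five most recent container log entries using the same
# filter you would type into the Query builder.
gcloud logging read 'resource.type="k8s_container"' --limit 5
```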
The Terraform configuration built out two Log Export Sinks. To view the sinks, perform the following steps:
Log events can be stored in Cloud Storage, an object storage system suitable for archiving data.
Policies can be configured for Cloud Storage buckets that, for instance, allow aging data to expire and be deleted while more recent data can be stored with a variety of storage classes affecting price and availability.
The Terraform configuration created a Cloud Storage Bucket named stackdriver-gke-logging- to which logs will be exported for medium to long-term archival.
In this example, the Storage Class for the bucket is defined as Nearline because the logs should be infrequently accessed in a normal production environment (this will help to manage the costs of medium-term storage). In a production scenario, this bucket may also include a lifecycle policy that moves the content to Coldline storage for cheaper long-term storage of logs.
To access the logs in Cloud Storage, perform the following steps: from the Navigation menu, open Cloud Storage, find the bucket named stackdriver-gke-logging-<random-Id>, and click on the name. If you come back to the bucket towards the end of your lab, you might see folders corresponding to pods running in the cluster (e.g. autoscaler, dnsmasq, etc.).
You can click into any of the folders to browse specific log details like heapster, kubedns, sidecar, etc.
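You can also list the bucket contents from Cloud Shell (substitute your bucket's actual suffix):

```bash
# Browse the exported log objects; <random-Id> is the suffix shown
# in the console's bucket list.
gsutil ls gs://stackdriver-gke-logging-<random-Id>/
```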
Log events can be configured to be published to BigQuery, a data warehouse tool that supports fast, sophisticated, querying over large data sets.
The Terraform configuration will create a BigQuery dataset named gke_logs_dataset. This dataset will be set up to include all Kubernetes Engine related logs for the last hour (by setting a Default Table Expiration for the dataset). Kubernetes Engine container logs will be pushed to the dataset.
To access the logs in BigQuery, perform the following steps:
From the Navigation menu, in the Big Data section, click BigQuery. If the Welcome to BigQuery in the Cloud Console message box opens, click Done.
In the left menu, click on your project name. You should see a dataset named gke_logs_dataset. Expand this dataset to view the tables that exist.
Click on one of the tables to view the table details.
Review the schema of the table to note the column names and their data types. This information can be used in the next step when you query the table to look at the data.
Click Query to perform a custom query against the table.
This adds a query to the Query Editor, but it has a syntax error.
Edit the query to add an asterisk (*) after SELECT to pull in all details from the current table. Note that a SELECT * query is generally very expensive and not advised; for this lab, though, the dataset is limited to only the last hour of logs, so the overall dataset is relatively small.
Click Run to execute the query and return some results from the table.
The results window should display some rows and columns. You can scroll through the various rows of data that are returned. If you want, execute some custom queries that filter for specific data based on the results that were shown in the original query.
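As a sketch of such a custom query run from Cloud Shell instead of the console, assuming a hypothetical table name (use the actual table names you saw in the dataset):

```bash
# Query ten rows from one of the exported log tables; the table
# name below is hypothetical, so check the dataset for real names.
bq query --use_legacy_sql=false \
  'SELECT * FROM `gke_logs_dataset.stdout_20250624` LIMIT 10'
```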
Click Check my progress to verify the task you performed. If the BigQuery sink has written logs to the BigQuery dataset, you will see an assessment score.
Since Terraform tracks the resources it created, it is able to tear them all down.
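Based on the demo repository's Makefile, teardown is likely a single target that wraps terraform destroy:

```bash
# Remove everything Terraform created (target name assumed;
# it likely runs "terraform destroy" under the hood).
make teardown
```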
Troubleshooting: if the credentials that Terraform is using do not provide the necessary permissions to create resources in the selected project, ensure that the account listed in gcloud config list has the necessary permissions to create resources. If it does, regenerate the application default credentials using gcloud auth application-default login.
Once the Terraform configuration is complete, the Cloud Storage bucket will be created, but it is not always populated immediately with log data from the Kubernetes Engine cluster.
Give the process some time because it can take up to 2 to 3 hours before the first entries start appearing. Learn more about Cloud Storage in the View logs in sink destinations documentation.
Once the Terraform configuration is complete, the BigQuery dataset will be created, but it will not always have tables in it by the time you go to review the results. The tables are rarely populated immediately. Give the process some time (a minimum of 5 minutes) before concluding that something is not working properly.
Congratulations on successfully completing the Cloud Logging on Kubernetes Engine lab! You used Terraform to provision a GKE cluster and practiced the full logging lifecycle by generating application logs, viewing them in Cloud Logging, and exporting them to both Cloud Storage and BigQuery for archival and analysis. This hands-on experience has equipped you with the core skills needed to aggregate, view, and export logs from GKE, which are critical for monitoring and debugging applications in a production environment.
Google Cloud training and certification helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.
Manual Last Updated: June 24, 2025
Lab Last Tested: June 24, 2025
Copyright 2025 Google LLC. All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.