
Before you begin
- Labs create a Google Cloud project and resources for a fixed time
- Labs have a time limit and no pause feature. If you end the lab, you'll have to restart from the beginning.
- On the top left of your screen, click Start lab to begin
Configure a GKE cluster with Kubernetes Engine Monitoring and deploy a sample workload
Deploy the GCP-GKE-Monitor-Test application
Create alerts with Kubernetes Engine Monitoring
In this lab, you build a GKE cluster and then deploy pods for use with Kubernetes Engine Monitoring. You will create charts and a custom dashboard, work with custom metrics, and create and respond to alerts.
In this lab, you learn how to perform the following tasks:
- Create a GKE cluster with Kubernetes Engine Monitoring enabled and deploy sample workloads to it
- Use the Kubernetes Engine Monitoring interface to view the health of your cluster and workloads
- Create charts and a custom dashboard, and work with custom metrics
- Create alerts and respond to incidents
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:
- The Open Google Cloud console button
- The temporary Username and Password that you must use for this lab
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
If necessary, copy the Username below and paste it into the Sign in dialog.
You can also find the Username in the Lab Details panel.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
You can also find the Password in the Lab Details panel.
Click Next.
Click through the subsequent pages:
After a few moments, the Google Cloud console opens in this tab.
Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud.
Google Cloud Shell provides command-line access to your Google Cloud resources.
In Cloud console, on the top right toolbar, click the Open Cloud Shell button.
Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID.
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
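For example, you can confirm the active account and project with two standard gcloud commands (optional, not a graded step):

# List the account Cloud Shell is authenticated with
gcloud auth list

# Show the project ID currently set for this session
gcloud config list project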
Google Kubernetes Engine includes managed support for Monitoring.
In this task, you will create a GKE cluster with Kubernetes Engine Monitoring enabled and then deploy sample workloads to the cluster for later use in this exercise. You will then perform typical monitoring tasks using the Kubernetes Engine monitoring and logging interface.
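If you need to create the cluster from Cloud Shell yourself, a minimal sketch looks like the following. The zone and node count are placeholder assumptions; the cluster name standard-cluster-1 matches the cluster referenced in the steps below.

# Create a zonal GKE cluster with system logging and monitoring enabled
gcloud container clusters create standard-cluster-1 \
  --zone us-central1-a \
  --num-nodes 3 \
  --logging=SYSTEM \
  --monitoring=SYSTEM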
In the Google Cloud console, in the Navigation menu (), click Kubernetes Engine > Clusters.
Click the cluster name standard-cluster-1 to view the cluster details.
You can scroll down the page to view more details.
Under the Features heading, you can see the Logging and Cloud Monitoring settings, both of which are set to System.
You will now deploy a sample workload to the default namespace of your GKE cluster. This workload consists of a deployment of three pods running a simple Hello World demo application. Later in this lab exercise, you will be able to monitor the health of this workload in Monitoring.
This deployment manifest creates three Pods running a simple Hello World demo application.
The output of this command shows the hello-v2 application running in the default namespace.
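The original manifest and command are not reproduced here. A minimal sketch that deploys a comparable three-replica demo and verifies it could look like this; the image and the use of kubectl create instead of a manifest file are assumptions.

# Deploy three replicas of a simple Hello World demo application
# (gcr.io/google-samples/hello-app:2.0 is a stand-in image for this sketch)
kubectl create deployment hello-v2 \
  --image=gcr.io/google-samples/hello-app:2.0 \
  --replicas=3

# Confirm that the hello-v2 deployment is running in the default namespace
kubectl get deployments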
Click Check my progress to verify the objective.
You will now deploy the GCP-GKE-Monitor-Test application to the default namespace of your GKE cluster. This workload has a deployment consisting of a single pod that is then exposed to the internet via a LoadBalancer service.
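The build-and-push step itself is not reproduced here. One common approach, sketched here with the assumption that the application source is in your current directory, is to build the image with Cloud Build:

# Build the container image and push it to gcr.io with Cloud Build
gcloud builds submit --tag gcr.io/${GOOGLE_CLOUD_PROJECT}/gcp-gke-monitor-test .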
Alternatively, you can also use Docker directly to build and push an image to gcr.io:
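A sketch of that Docker workflow, again assuming the source is in your current directory:

# Build the image locally, then push it to your project's gcr.io registry
docker build -t gcr.io/${GOOGLE_CLOUD_PROJECT}/gcp-gke-monitor-test .
docker push gcr.io/${GOOGLE_CLOUD_PROJECT}/gcp-gke-monitor-test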
Update the gcp-gke-monitor-test.yaml file with the Docker image you just pushed to gcr.io, and then deploy it. The output of this command shows that the gcp-gke-monitor-test application is running in the default namespace.
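A sketch of that deploy-and-verify step, assuming the standard kubectl workflow:

# Deploy the application defined in the updated manifest
kubectl apply -f gcp-gke-monitor-test.yaml

# Confirm the deployment is running in the default namespace
kubectl get deployments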
The output of this command will show that the gcp-gke-monitor-test-service is running in the default namespace. You may need to run this command multiple times until the service is assigned an external IP address.
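A typical way to check the service, assuming the standard kubectl workflow:

# Repeat until the EXTERNAL-IP column shows an address
kubectl get service gcp-gke-monitor-test-service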
Click Check my progress to verify the objective.
In this task, you will use the GCP-GKE-Monitor-Test application to explore different aspects of Kubernetes Engine Monitoring. The tool is composed of four sections:
In the first section, Generate CPU Load, you have buttons to start and stop a CPU Load Generator. The tool starts a loop of math operations which will consume an entire CPU core. To prevent losing control of the pod due to CPU saturation, the loop yields the processor periodically for 100 nanoseconds. This allows you to quickly stop the CPU Load Generator without killing the pod.
The second section, Custom Metrics, allows you to explore custom metric monitoring within Cloud Monitoring. When you click Start Monitoring, the tool first creates the necessary Custom Metric Descriptor, and then starts a loop which sends the custom metric values to Monitoring every 60 seconds. The custom metrics coded into this tool are designed to simulate an application that can keep track of the number of active users connected, and then report that number to an external service.
To take advantage of these custom metrics, some additional instrumentation may be required within your application's code. In this lab exercise you can simulate users connecting and disconnecting by clicking the Increase and Decrease Users buttons.
Also keep in mind that although the web tool will allow you to change the number of users in real time (just as users may connect and disconnect in real life), the Cloud Monitoring APIs only allow the tool to send its current value once per minute. This means your Cloud Monitoring charts will not reflect changes which occur between the per-minute updates.
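Purely as an illustration (this is not the tool's actual code), a single custom-metric write to the Monitoring API looks roughly like the following. The metric name custom.googleapis.com/webapp/active_users and the use of the simpler global resource type are assumptions for this sketch.

# Write one data point for a custom metric via the Cloud Monitoring API
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://monitoring.googleapis.com/v3/projects/${GOOGLE_CLOUD_PROJECT}/timeSeries" \
  -d '{
    "timeSeries": [{
      "metric": { "type": "custom.googleapis.com/webapp/active_users" },
      "resource": { "type": "global", "labels": { "project_id": "'"${GOOGLE_CLOUD_PROJECT}"'" } },
      "points": [{
        "interval": { "endTime": "'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'" },
        "value": { "doubleValue": 5 }
      }]
    }]
  }'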
The third section, Log Test, allows you to send different text strings to the container's standard output (the console), which is then periodically collected by Cloud Logging and stored as log messages associated with the pod and container. You can optionally enable Debug-level logging to see more entries in the logs. This allows you to see messages in the logs when you increase the number of users in the Custom Metrics section, or when you enable or disable the CPU Load Generator. Note that these logs are sent in plain-text format to simulate legacy applications that do not support JSON-formatted messages. When you view the logs in Logging, you will notice that your pod's JSON-based Kubernetes event logs have much more robust filtering and querying options than what is available for the unstructured logs.
The fourth and final section, Crash the Pod, allows you to crash the pod with the click of a button. The tool executes a section of code with an unhandled error, which crashes the pod and triggers the deployment to restart a new pod in its place. You can use this tool to see how quickly Kubernetes Engine can recover from errors. It is also an opportunity to see the loss of session state in action because each pod maintains its own session instead of storing it in a central location. When the pod restarts, all your toggle buttons and settings return to their default values.
You will now open a web browser, connect to the GCP-GKE-Monitor-Test tool, and start the CPU load generator.
You will now start a process within the GCP-GKE-Monitor-Test tool which creates a Custom Metric Descriptor within Cloud Monitoring. Later, when the tool begins sending the custom metric data, Monitoring will associate the data with this metric descriptor. Note that Monitoring can often automatically create the custom metric descriptors for you when you send the custom metric data, but creating the descriptor manually gives you more control over the text that appears in the Monitoring interface, making it easier for you to find your data in the Metric Explorer.
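For illustration only (again, not the tool's actual code), creating such a descriptor through the Monitoring API looks roughly like this. The metric type name is an assumption; the display name Web App - Active Users matches the metric you select later in Metrics Explorer.

# Create a custom metric descriptor with a friendly display name
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://monitoring.googleapis.com/v3/projects/${GOOGLE_CLOUD_PROJECT}/metricDescriptors" \
  -d '{
    "type": "custom.googleapis.com/webapp/active_users",
    "displayName": "Web App - Active Users",
    "metricKind": "GAUGE",
    "valueType": "DOUBLE",
    "description": "Number of users currently connected to the demo application"
  }'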
You can now click the Increase and Decrease Users buttons to change the Current User Count displayed below the STATUS text.
It may take 2-3 minutes for the first data point to appear in Monitoring. You will check this custom metric in Cloud Monitoring in a later step.
You will now use the GCP-GKE-Monitor-Test tool to create sample text-based logs which you will later view in Cloud Monitoring.
In this task, you will use Kubernetes Engine Monitoring to view the current health of your GKE cluster and the two workloads running on it.
You will now set up a Monitoring workspace that's tied to your Google Cloud project. The following steps create a new account that has a free trial of Monitoring.
On the Google Cloud console title bar, type Monitoring in the Search field, then click Monitoring (Infrastructure and application quality checks) in the search results.
Click Pin next to Observability Monitoring.
Wait for your workspace to be provisioned.
When the Monitoring dashboard opens, your workspace is ready.
You will now open and browse the three different sections of the Kubernetes Engine Monitoring interface. The three sections are: Infrastructure, Workloads, and Services.
In the Monitoring interface, click GKE in the Dashboards section to view the new monitoring interface.
Review the monitoring interface. This is a dashboard which shows the health of your GKE clusters and their workloads. Take note of the following:
The Clusters, Nodes, and Pods sections allow you to check the health of particular elements in the cluster. You can also use this to inspect the pods which are running on a particular node in the cluster.
To see the details of your cluster, click on the cluster element.
The Workloads section is very helpful, especially when looking for workloads which do not have services exposed.
The Kubernetes services section organizes the services configured in your environment by cluster, then by namespace (the administrative barrier or partition within the cluster), and then shows the various services available to users within that namespace. You can see more details on each service by clicking on their name.
The Namespaces section shows the list of namespaces within the cluster.
The monitoring interface can provide even more detail about the deployments and pods.
In the Pods section, click View all, and then click the pod whose name begins with fluentbit-gke. Click the Metrics tab to see more metrics. Note the value of your pod's CPU request utilization. That number represents the amount of CPU the pod is consuming relative to what it originally requested from the cluster; for example, a pod that requested 100m of CPU and is currently using 50m shows a CPU request utilization of 50%.
Click the X in the upper right corner of the Pod Details window.
Now, click the pod beginning with gcp-gke-monitor-test to view more detail about it.
Note that you will see slightly different information if you selected a namespace instead of the pod.
Click on the Metrics tab to see more metrics such as CPU request utilization and CPU Usage Time.
In the Pod Details window, click the Logs tab to view the log activity for the pod.
This shows the log messages the pod has generated as well as a graph indicating the logging activity of the pod over time. Here you can see some of the sample logs you generated in the tool.
In Monitoring, you can create custom dashboards to display important metrics, such as CPU utilization and container restarts, as well as custom metrics such as the number of connected users.
In the navigation bar at the left of the Observability Monitoring page, click on Metrics Explorer to begin building your dashboard.
Click on Select a metric.
This will filter the list to the resource types supported by the new Kubernetes Engine Monitoring tools.
Select Kubernetes Container > Popular Metrics > CPU request utilization.
Click Apply.
This is the same CPU request utilization chart we saw earlier when we examined the fluentbit-gke-xxxx pod, but now the chart will display that metric for all the pods.
Now click the Save Chart button in the upper right corner of the screen.
Give the chart title a name such as Container CPU Request, and then click Dashboard.
The chart name should represent only this chart. You'll be able to give the entire dashboard a name in the next step.
Click New Dashboard.
Name your dashboard Container Dashboard.
Click Save Chart.
Now you can launch your dashboard by clicking Dashboards in the navigation pane, and then selecting the name of your new dashboard.
You now have a dashboard showing a single chart with a standard Monitoring metric. Next, you will create a chart for our custom Monitoring metric and then add it to this dashboard.
Click Metrics explorer.
Click on Select a metric.
Select Kubernetes Pod > Custom metrics > Web App - Active Users.
Click Apply.
Click on Save Chart.
Give the new chart a name, such as Active Users.
Select Container Dashboard from the dashboards dropdown.
Click Save Chart.
Navigate back to your Container Dashboard and click the Gear icon to display the settings menu.
Then click Legends > Table to display the text under each chart.
Click the three vertical bars next to the word Value at the right of each chart.
This displays a popup which contains the various labels which were included in the timeSeries data sent by our application server. You can use this information to filter or even aggregate the data in the chart.
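As a side note, a dashboard like this one can also be defined as code and created with gcloud. The following is only a rough sketch, not the dashboard you just built in the UI; the metric type kubernetes.io/container/cpu/request_utilization corresponds to the CPU request utilization chart used above.

# Save a minimal dashboard definition, then create it with gcloud
cat > container-dashboard.json <<'EOF'
{
  "displayName": "Container Dashboard",
  "gridLayout": {
    "widgets": [
      {
        "title": "Container CPU Request",
        "xyChart": {
          "dataSets": [{
            "timeSeriesQuery": {
              "timeSeriesFilter": {
                "filter": "metric.type=\"kubernetes.io/container/cpu/request_utilization\" resource.type=\"k8s_container\""
              }
            }
          }]
        }
      }
    ]
  }
}
EOF
gcloud monitoring dashboards create --config-from-file=container-dashboard.json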
In this task, you will configure an alert within Kubernetes Engine Monitoring and then use the dashboard to identify and respond to the incident.
You will now create an alert policy to detect high CPU utilization among the containers.
Click the dropdown arrow next to Notification Channels, then click Manage Notification Channels. The Notification channels page opens in a new tab.
Scroll down the page and click ADD NEW for Email.
Enter your personal email in the Email Address field and a Display name.
Click Save.
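For reference, an equivalent email channel can also be created from Cloud Shell; the display name and address below are placeholders.

# Create an email notification channel (replace the placeholder values)
gcloud beta monitoring channels create \
  --display-name="Lab Alerts Email" \
  --type=email \
  --channel-labels=email_address=you@example.com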
Go back to the previous Create alerting policy tab.
Click Notification Channels again, then click the Refresh icon to load the display name you created in the previous step. Click Notification Channels again if needed.
Now, select your Display name and click OK.
Name the alert CPU request utilization.
Click Next.
Review the alert and click Create Policy.
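The same kind of policy can also be expressed as code. The sketch below approximates a CPU request utilization alert rather than reproducing the exact policy created in the UI; the threshold, duration, and notification channel ID are placeholders.

# Save an alert policy definition, then create it with gcloud
cat > cpu-request-alert.json <<'EOF'
{
  "displayName": "CPU request utilization",
  "combiner": "OR",
  "conditions": [{
    "displayName": "Kubernetes Container - CPU request utilization",
    "conditionThreshold": {
      "filter": "metric.type=\"kubernetes.io/container/cpu/request_utilization\" resource.type=\"k8s_container\"",
      "comparison": "COMPARISON_GT",
      "thresholdValue": 0.99,
      "duration": "60s",
      "aggregations": [{
        "alignmentPeriod": "60s",
        "perSeriesAligner": "ALIGN_MEAN"
      }]
    }
  }],
  "notificationChannels": [
    "projects/PROJECT_ID/notificationChannels/CHANNEL_ID"
  ]
}
EOF
gcloud alpha monitoring policies create --policy-from-file=cpu-request-alert.json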
Click Check my progress to verify the objective.
Now, you will return to the monitoring dashboard where an incident is being reported on one of the containers.
When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.