Kubernetes Basics v1.6

Lab · 3 hours · 5 Credits · Introductory

Note: This lab may incorporate AI tools to support your learning.

Overview

In this lab, you learn how to:

  • Provision a Kubernetes cluster using Google Kubernetes Engine.
  • Deploy and manage Docker containers using kubectl.
  • Split an application into microservices using Kubernetes' Deployments and Services.

You use Kubernetes Engine and its Kubernetes API to deploy, manage, and upgrade applications. You use an example application called "app" to complete the labs.

Note: App is hosted on GitHub. It's a 12-Factor application with the following Docker images:

  • Monolith: includes auth and hello services.
  • Auth microservice: generates JWT tokens for authenticated users.
  • Hello microservice: greets authenticated users.
  • nginx: frontend to the auth and hello services.

Setup

Start the lab

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
    There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts.
    If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

Enable APIs

Make sure the following APIs are enabled in Cloud Platform Console:

  • Kubernetes Engine API
  • Container Registry API
  1. Expand the Navigation menu (Navigation menu icon), then click APIs & services.

  2. Scroll down and confirm that your APIs are enabled.

  3. If an API is missing, click ENABLE APIS AND SERVICES at the top, search for the API by name, and enable it for your project.

Activate Google Cloud Shell

Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on the Google Cloud.

Google Cloud Shell provides command-line access to your Google Cloud resources.

  1. In Cloud console, on the top right toolbar, click the Open Cloud Shell button.

    Highlighted Cloud Shell icon

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:

Project ID highlighted in the Cloud Shell Terminal

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  • You can list the active account name with this command:
gcloud auth list

Output:

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

Example output:

Credentialed accounts:
 - google1623327_student@qwiklabs.net
  • You can list the project ID with this command:
gcloud config list project

Output:

[core]
project = <project_ID>

Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: Full documentation of gcloud is available in the gcloud CLI overview guide.

Get the code

  • Get the sample code from the Git repository:
git clone https://github.com/googlecodelabs/orchestrate-with-kubernetes.git

Review

  • Review the app layout:
cd orchestrate-with-kubernetes/kubernetes
ls

You'll see the following structure:

  • deployments/: Deployment manifests
  • nginx/: nginx config files
  • pods/: Pod manifests
  • services/: Service manifests
  • tls/: TLS certificates
  • cleanup.sh: Cleanup script

Now that you have the code, it's time to try Kubernetes.

A quick demo of Kubernetes

Start a Kubernetes cluster

  1. Define your zone as the project default zone so that you do not need to specify the --zone parameter in gcloud commands:
gcloud config set compute/zone us-central1-a
  2. In Cloud Shell, run the following command to start a Kubernetes cluster called bootcamp that runs 5 nodes:
gcloud container clusters create bootcamp --num-nodes 5 --scopes "https://www.googleapis.com/auth/projecthosting,storage-rw"

The scopes argument provides access to project hosting and Google Cloud Storage APIs that you'll use later.

Note: It takes several minutes to create a cluster as Kubernetes Engine provisions virtual machines for you. It spins up one or more master nodes and multiple configured worker nodes. This is one of the advantages of a managed service.

Click Check my progress to verify the objective. Create a Kubernetes cluster

Explore the Kubernetes cluster

  1. After the cluster is created, check your installed version of Kubernetes using the kubectl version command:
kubectl version

Note: The gcloud container clusters create command automatically authenticated kubectl for you.
  2. Use kubectl cluster-info to find out more about the cluster:
kubectl cluster-info
  3. To view your running nodes in the Cloud Platform Console, open the Navigation menu and go to Compute Engine > VM Instances.

Note: Your Kubernetes cluster is now ready for use. Congratulations!

Bash completion (Optional)

Kubernetes comes with auto-completion. You can use the kubectl completion command and the built-in source command to set this up.

  1. Run this command:
source <(kubectl completion bash)
  2. Press Tab to display a list of available commands.

  3. Try the following examples:

kubectl <TAB><TAB>

You can also complete a partial command:

kubectl co<TAB><TAB>

This feature makes using kubectl even easier.

Run and deploy a container

The easiest way to get started with Kubernetes is to use the kubectl create deployment command.

  1. Use kubectl create deployment to launch a single instance of the nginx container:
kubectl create deployment nginx --image=nginx:1.10.0

Note: In Kubernetes, all containers run in pods. This command created a deployment behind the scenes, which runs a single pod with the nginx container in it.

A deployment keeps a given number of pods up and running even when the nodes they run on fail. In this case, you run the default number of pods, which is 1. You'll learn more about deployments later.
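For reference, the Deployment object that this command generates looks roughly like the following sketch. This is illustrative only; the object Kubernetes actually stores contains many more defaulted fields:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1              # the default number of pods
  selector:
    matchLabels:
      app: nginx           # the deployment manages pods carrying this label
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.10.0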
  2. Use the kubectl get pods command to view the pod running the nginx container:
kubectl get pods
  3. Use the kubectl expose command to expose the nginx container outside Kubernetes:
kubectl expose deployment nginx --port 80 --type LoadBalancer

Note: Kubernetes created a service and an external load balancer with a public IP address attached to it (you will learn about services later). The IP address remains the same for the life of the service. Any client that hits the public IP address (for example, an end user or another container) is routed to a pod behind the service. In this case, that is the nginx pod.
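The kubectl expose command is roughly equivalent to applying a Service manifest like this sketch (illustrative; the live object also gets a cluster IP and a node port assigned automatically):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer       # provisions an external load balancer with a public IP
  selector:
    app: nginx             # traffic is routed to pods carrying this label
  ports:
    - port: 80             # port exposed by the service
      targetPort: 80       # port the nginx container listens on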
  4. Use the kubectl get services command to view the new service:
kubectl get services

You'll see an external IP that you can use to test and contact the nginx container remotely.

It may take a few seconds before the ExternalIP field is populated for your service. This is normal—just re-run the kubectl get services command every few seconds until the field is populated.
  5. Use the kubectl scale command to scale up the number of backend applications (pods) running behind your service:
kubectl scale deployment nginx --replicas 3

This is useful when you need to handle more traffic for a web application that is becoming more popular.

  6. Get the pods one more time to confirm that Kubernetes has updated the number of pods:
kubectl get pods
  7. Use the kubectl get services command again to confirm that your external IP address has not changed:
kubectl get services
  8. Use the external IP address with the curl command to test your demo application:
curl http://<External IP>:80

Kubernetes supports an easy-to-use workflow out of the box using the kubectl create deployment, expose, and scale commands.

Clean up

  • Clean up nginx by running the following commands:
kubectl delete deployment nginx
kubectl delete service nginx

Now that you've seen a quick tour of Kubernetes, it's time to dive into each of the components and abstractions.

Note: You covered a lot of information. The rest of this lab goes over these concepts in depth. You can always come back to this demo if you need to see it again.

Pods

Investigate pods in more detail.

Creating pods

Pods can be created using pod configuration files.

  1. Explore the built-in pod documentation using the kubectl explain command:
kubectl explain pods

Note: While you explore the Kubernetes API, kubectl explain will be one of the most common commands you use. Note how you used it above to investigate an API object and how you will use it below to check on various properties of API objects.
  2. Explore the monolith pod's configuration file:
cat pods/monolith.yaml

Note: The pod is made up of one container (called monolith). You pass a few arguments to the container when it starts up and open port 80 for HTTP traffic.
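The manifest looks roughly like the following sketch. The image name and exact arguments come from the sample repository and may differ in your copy:

apiVersion: v1
kind: Pod
metadata:
  name: monolith
  labels:
    app: monolith
spec:
  containers:
    - name: monolith
      image: kelseyhightower/monolith:1.0.0   # sample app image; tag may differ
      args:
        - "-http=0.0.0.0:80"       # serve HTTP traffic on port 80
        - "-health=0.0.0.0:81"     # health endpoints on port 81
        - "-secret=secret"         # secret used to sign auth tokens
      ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81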
  3. Use the kubectl explain command with the .spec option to view more information about API objects. This example inspects containers:
kubectl explain pods.spec.containers

Explore the rest of the API before you continue.

  4. Create the monolith pod using kubectl create:
kubectl create -f pods/monolith.yaml

Click Check my progress to verify the objective. Create the monolith pod

  5. Use the kubectl get pods command to list all pods running in the default namespace:
kubectl get pods

Note: It may take a few seconds before the monolith pod is up and running, because the monolith container image must be pulled from Docker Hub before you can run it.
  6. When the pod is running, use the kubectl describe command to get more information about the monolith pod:
kubectl describe pods monolith

You'll see a lot of information about the monolith pod, including the pod IP address and the event log. This information will be useful when troubleshooting.

Note: It's time for a quick knowledge check. Answer the following questions about the monolith pod.

  • What is the pod IP address?
  • Which node is the pod running on?
  • Which containers are running in the pod?
  • Which labels are attached to the pod?
  • Which arguments are set on the container?

As you can see, Kubernetes makes it easy to create pods by describing them in configuration files and to view information about them when they are running. At this point, you can create all the pods your deployment requires!

Interacting with pods

Pods are allocated a private IP address by default that cannot be reached outside of the cluster. Use the kubectl port-forward command to map a local port to a port inside the monolith pod.

Use two terminals: one to run the kubectl port-forward command, and the other to issue curl commands.

  1. Click the + button in Cloud Shell to open a new terminal.

Cloud Shell menu bar highlighting the open a new terminal button

  2. Run the following command to set up port-forwarding from a local port, 10080, to a pod port, 80 (where your container is listening):
kubectl port-forward monolith 10080:80
  3. To access your pod, return to the first terminal window and run the following curl command:
curl http://127.0.0.1:10080

You get a friendly "Hello" back from the container.

  4. See what happens when you hit a secure endpoint:
curl http://127.0.0.1:10080/secure

You should get an error.

Note: You get an error because you need to include an auth token in your request.
  5. Log in to get an auth token from monolith:
curl -u user http://127.0.0.1:10080/login
  6. At the login prompt, enter password as the password to sign in.

Note: Logging in causes a JWT token to be printed out. You'll use it to test your secure endpoint with curl.
  7. Cloud Shell doesn't handle copying long strings well, so copy the token into an environment variable:
TOKEN=$(curl http://127.0.0.1:10080/login -u user|jq -r '.token')
  8. At the login prompt, enter password as the password again to sign in.

  9. Access the secure endpoint again, and this time include the auth token:
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:10080/secure

Note: You should get a response back from your application letting you know it works again!
  10. Use the kubectl logs command to view logs for the monolith pod:
kubectl logs monolith

Open another terminal and use the -f flag to get a stream of logs in real-time!

  11. To create the third terminal, click the + button in Cloud Shell and run the following command:
kubectl logs -f monolith
  12. Use curl in terminal 1 to interact with monolith:
curl http://127.0.0.1:10080

Note: You can see the logs updating in terminal 3.
  13. Use the kubectl exec command to run an interactive shell inside the monolith pod. This can be useful when you want to troubleshoot from within a container:
kubectl exec monolith --stdin --tty -c monolith -- /bin/sh
  14. Optional: In the shell, you can test external (outward-facing) connectivity using the ping command:
ping -c 3 google.com
  15. Sign out of the shell:
exit

As you can see, interacting with pods is as easy as using the kubectl command. If you need to test a container remotely or get a login shell, Kubernetes provides everything you need to start.

  16. To quit kubectl port-forward and kubectl logs in terminals 2 and 3, press Ctrl+C.

Monitoring and health checks

Kubernetes supports monitoring applications in the form of readiness and liveness probes. Health checks can be performed on each container in a pod. Readiness probes indicate when a pod is "ready" to serve traffic. Liveness probes indicate whether a container is "alive."

If a liveness probe fails multiple times, the container is restarted. Liveness probes that continue to fail cause a pod to enter a crash loop. If a readiness check fails, the container is marked as not ready and is removed from any load balancers.

In this lab, you deploy a new pod named healthy-monolith, which is largely based on the monolith pod with the addition of readiness and liveness probes.

In this section, you learn how to:

  • Create pods with readiness and liveness probes.
  • Troubleshoot failing readiness and liveness probes.

Creating pods with Liveness and Readiness Probes

  1. Explore the healthy-monolith pod configuration file:
cat pods/healthy-monolith.yaml
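The probe configuration in that file looks roughly like this sketch. The image name, paths, ports, and timings are taken from the sample repository and may differ slightly:

apiVersion: v1
kind: Pod
metadata:
  name: healthy-monolith
  labels:
    app: monolith
spec:
  containers:
    - name: monolith
      image: kelseyhightower/monolith:1.0.0   # sample app image; tag may differ
      ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
      readinessProbe:          # the pod only receives traffic while this succeeds
        httpGet:
          path: /readiness
          port: 81
        initialDelaySeconds: 5
        timeoutSeconds: 1
      livenessProbe:           # the container is restarted if this keeps failing
        httpGet:
          path: /healthz
          port: 81
        initialDelaySeconds: 5
        timeoutSeconds: 5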
  2. Create the healthy-monolith pod using kubectl:
kubectl create -f pods/healthy-monolith.yaml

Click Check my progress to verify the objective. Create the healthy-monolith pod

  3. Pods are not marked ready until the readiness probe returns an HTTP 200 response. Use the kubectl describe command to view details for the healthy-monolith pod:
kubectl describe pod healthy-monolith

Readiness probes

See how Kubernetes responds to failed readiness probes. The monolith container supports the ability to force failures of its readiness and liveness probes. This enables you to simulate failures for the healthy-monolith pod.

  1. Use the kubectl port-forward command in terminal 2 to forward a local port to the health port of the healthy-monolith pod:
kubectl port-forward healthy-monolith 10081:81
  2. Force the monolith container readiness probe to fail. Use the curl command in terminal 1 to toggle the readiness probe status. Note that this command does not show any output:
curl http://127.0.0.1:10081/readiness/status
  3. Get the status of the healthy-monolith pod using the kubectl get pods -w command:
kubectl get pods healthy-monolith -w
  4. Press Ctrl+C when there are 0/1 ready containers. Use the kubectl describe command to get more details about the failing readiness probe:
kubectl describe pods healthy-monolith
  5. Notice that the events for the healthy-monolith pod report details about failing readiness probes.

  6. To force the monolith container readiness probe to pass, toggle the readiness probe status again by using the curl command:
curl http://127.0.0.1:10081/readiness/status
  7. Wait about 15 seconds and get the status of the healthy-monolith pod using the kubectl get pods command:
kubectl get pods healthy-monolith
  8. Press Ctrl+C in terminal 2 to stop the kubectl port-forward command.

Liveness probes

Building on what you learned in the previous section, use the kubectl port-forward and curl commands to force the monolith container liveness probe to fail. Observe how Kubernetes responds to failing liveness probes.

  1. Use the kubectl port-forward command in terminal 2 to forward a local port to the health port of the healthy-monolith pod:
kubectl port-forward healthy-monolith 10081:81
  2. To force the monolith container liveness probe to fail, toggle the liveness probe status by using the curl command in another terminal:
curl http://127.0.0.1:10081/healthz/status
  3. Get the status of the healthy-monolith pod using the kubectl get pods -w command:
kubectl get pods healthy-monolith -w
  4. When a liveness probe fails, the container is restarted. Once restarted, the healthy-monolith pod should return to a healthy state. Press Ctrl+C to exit that command when the pod restarts. Note the restart count.

  5. Use the kubectl describe command to get more details about the failing liveness probe. You can see the related events for when the liveness probe failed and the pod was restarted:
kubectl describe pods healthy-monolith
  6. When you are finished, press Ctrl+C in terminal 2 to stop the kubectl port-forward command.
Note: You learned about Kubernetes pods and Kubernetes support for application monitoring using liveness and readiness probes. You also learned how to add readiness and liveness probes to pods and what happens when probes fail. Congratulations!

Services

Next steps:

  • Create a service.
  • Use label selectors to expose a limited set of pods externally.

Creating a Service

Before creating your services, create a secure pod called secure-monolith with an nginx server that can handle HTTPS traffic.

Create two volumes that the secure pod will use to bring in (or consume) data.

The first volume of type secret stores TLS cert files for your nginx server.

  1. Return to terminal 1 and create the first volume using the following command:
kubectl create secret generic tls-certs --from-file tls/

Note: This uploads cert files from the local directory tls/ and stores them in a secret called tls-certs.
  2. Create the second volume, of type ConfigMap, to hold nginx's configuration file:
kubectl create configmap nginx-proxy-conf --from-file nginx/proxy.conf

Note: This uploads the proxy.conf file to the cluster and calls the ConfigMap nginx-proxy-conf.
  3. Explore the proxy.conf file that nginx will use:
cat nginx/proxy.conf

The file turns SSL on and specifies the location of the cert files in the container's file system.

Note: The cert files actually live in the secret volume, so you need to mount that volume into the container's file system.
  4. Explore the secure-monolith pod configuration file:
cat pods/secure-monolith.yaml

Note: Under volumes, the pod attaches the two volumes you created. And under volumeMounts, it mounts the tls-certs volume to the container's file system so nginx can consume the data.
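The relevant parts of the file look roughly like this trimmed sketch. Image names, tags, and mount paths are taken from the sample repository and may differ:

apiVersion: v1
kind: Pod
metadata:
  name: secure-monolith
  labels:
    app: monolith
spec:
  containers:
    - name: nginx
      image: nginx:1.9.14                  # image tag from the sample repo; may differ
      ports:
        - name: https
          containerPort: 443
      volumeMounts:
        - name: tls-certs                  # secret volume holding the TLS cert and key
          mountPath: /etc/tls
        - name: nginx-proxy-conf           # ConfigMap volume holding proxy.conf
          mountPath: /etc/nginx/conf.d
    - name: monolith
      image: kelseyhightower/monolith:1.0.0
  volumes:
    - name: tls-certs
      secret:
        secretName: tls-certs
    - name: nginx-proxy-conf
      configMap:
        name: nginx-proxy-conf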
  5. Run the following command to create the secure-monolith pod with its configuration data:
kubectl create -f pods/secure-monolith.yaml

Now that you have a secure pod, expose the secure-monolith pod externally using a Kubernetes service.

  6. Explore the monolith service configuration file:
cat services/monolith.yaml

Note: The file contains:

  • The selector that finds and exposes pods with the labels app=monolith and secure=enabled.
  • The targetPort and nodePort that forward external traffic from port 31000 to nginx on port 443.
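Put together, the service manifest looks roughly like this sketch:

kind: Service
apiVersion: v1
metadata:
  name: monolith
spec:
  type: NodePort
  selector:
    app: monolith          # only pods with both of these labels become endpoints
    secure: enabled
  ports:
    - protocol: TCP
      port: 443            # port the service exposes inside the cluster
      targetPort: 443      # port nginx listens on in the pod
      nodePort: 31000      # port exposed on every cluster node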
  7. Use the kubectl create command to create the monolith service from the monolith service configuration file:
kubectl create -f services/monolith.yaml

Note: The type: NodePort in the service's YAML file means that it uses a port on each cluster node to expose the service. This means that it's possible to have port collisions if another app tries to bind to port 31000 on one of your servers.

Normally, Kubernetes handles this port assignment for you. In this lab, you chose one so that it's easier to configure health checks later.

  8. Use the gcloud compute firewall-rules command to allow traffic to the monolith service on the exposed nodeport:
gcloud compute firewall-rules create allow-monolith-nodeport --allow=tcp:31000

Now that everything is set up, you should be able to test the secure-monolith service from outside the cluster without using port forwarding.

  9. Get an IP address for one of your nodes:
gcloud compute instances list
  10. Try to open the URL in your browser:
https://<EXTERNAL_IP>:31000

Note: The request times out or the connection is refused. What's going wrong?

Note: It's time for a quick knowledge check. Use the following commands to answer the questions below.

kubectl get services monolith

kubectl describe services monolith

Questions:

  • Why can't you get a response from the monolith service?
  • How many endpoints does the monolith service have?
  • What labels must a pod have to be picked up by the monolith service?

Adding labels to pods

Currently the monolith service does not have any endpoints. One way to troubleshoot an issue like this is to use the kubectl get pods command with a label query.

  1. Confirm that there are several pods running with the monolith label:
kubectl get pods -l "app=monolith"
  2. But what about pods with both app=monolith and secure=enabled?
kubectl get pods -l "app=monolith,secure=enabled"

Note: Notice that this label query does not print any results. You need to add the secure=enabled label to the secure-monolith pod.
  3. Use the kubectl label command to add the missing secure=enabled label to the secure-monolith pod:
kubectl label pods secure-monolith 'secure=enabled'

Click Check my progress to verify the objective. Create a secret, service, firewall rule and pod with label

  4. Check to see that your labels are updated:
kubectl get pods secure-monolith --show-labels
  5. View the list of endpoints on the monolith service:
kubectl get endpoints monolith

Note: And you have one!
  6. Test this by hitting one of your nodes again:
gcloud compute instances list | grep gke-
  7. Open the following URL in your browser. You will need to click through the SSL warning because secure-monolith uses a self-signed certificate:
https://<EXTERNAL_IP>:31000

Congratulations!

You learned about Kubernetes services and how pod endpoints are selected using labels. You also learned about Kubernetes volumes and how to configure applications using ConfigMaps and secrets.

End your lab

When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
