
Before you begin
- Labs create a Google Cloud project and resources for a fixed time
- Labs have a time limit and no pause feature. If you end the lab, you'll have to restart from the beginning.
- On the top left of your screen, click Start lab to begin
Lab checkpoints and points:
- Create a Kubernetes cluster and launch an Nginx container (25 points)
- Create Monolith pods and service (25 points)
- Allow traffic to the monolith service on the exposed nodeport (5 points)
- Add labels to Pods (20 points)
- Create Deployments (Auth, Hello and Frontend) (25 points)
Kubernetes is an open source project (available on kubernetes.io) which can run on many different environments: from laptops to high-availability multi-node clusters, from public clouds to on-premises deployments, and from virtual machines to bare metal.
For this lab, using a managed environment such as Kubernetes Engine allows you to focus on experiencing Kubernetes rather than setting up the underlying infrastructure. Kubernetes Engine is a managed environment for deploying containerized applications. It brings the latest innovations in developer productivity, resource efficiency, automated operations, and open source flexibility to accelerate your time to market.
In this lab you will learn how to:
- Provision a complete Kubernetes cluster using Kubernetes Engine.
- Deploy and manage Docker containers using kubectl.
- Break an application into microservices using Kubernetes Deployments and Services.
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method. On the left is the Lab Details pane with the following:
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
If necessary, copy the Username below and paste it into the Sign in dialog.
You can also find the Username in the Lab Details pane.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
You can also find the Password in the Lab Details pane.
Click Next.
Click through the subsequent pages:
After a few moments, the Google Cloud console opens in this tab.
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell at the top of the Google Cloud console.
Click through the following windows:
When you are connected, you are already authenticated, and the project is set to your Project_ID.

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
For more information about gcloud in Google Cloud, refer to the gcloud CLI overview guide.
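As a rough sketch of the setup this lab expects (the compute zone and the sample repository URL are assumptions; the cluster name io comes from the re-authentication note below):

```
# Set a default zone, then create the lab's Kubernetes Engine cluster.
gcloud config set compute/zone us-central1-a
gcloud container clusters create io

# Clone the lab's sample code and move into its kubernetes directory.
git clone https://github.com/googlecodelabs/orchestrate-with-kubernetes.git
cd orchestrate-with-kubernetes/kubernetes
```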
If needed, run the gcloud container clusters get-credentials io command to re-authenticate.
The sample includes the Kubernetes configuration files for the pods, services, and deployments you'll use throughout this lab.
Now that you have the code -- it's time to give Kubernetes a try!
The easiest way to get started with Kubernetes is to use the kubectl create
command.
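A minimal sketch of that first step, assuming the nginx image tag used here:

```
# Launch an nginx instance as a Kubernetes deployment (image tag assumed).
kubectl create deployment nginx --image=nginx:1.10.0
```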
Kubernetes has created a deployment -- more about deployments later, but for now all you need to know is that deployments keep the pods up and running even when the nodes they run on fail.
In Kubernetes, all containers run in a pod.
Run the kubectl get pods command to view the running nginx container. Once the container is running, expose it outside of Kubernetes using the kubectl expose command.

So what just happened? Behind the scenes Kubernetes created an external Load Balancer with a public IP address attached to it. Any client who hits that public IP address will be routed to the pods behind the service. In this case that would be the nginx pod.

Now list your services using the kubectl get services command. It may take a few seconds before the ExternalIP field is populated for your service. This is normal -- just re-run the kubectl get services command every few seconds until the field populates.
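The commands for this sequence might look like the following sketch; the service port, type, and the use of a plain HTTP request are assumptions:

```
# Watch for the nginx pod to reach Running status.
kubectl get pods

# Expose the deployment outside the cluster through a load balancer.
kubectl expose deployment nginx --port 80 --type LoadBalancer

# Re-run until the ExternalIP field is populated.
kubectl get services

# Hit the nginx container remotely (substitute the service's external IP).
curl http://<External IP>:80
```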
And there you go! Kubernetes supports an easy-to-use workflow out of the box using the kubectl create and expose commands.

Click Check my progress below to check your lab progress. If you successfully created a Kubernetes cluster and deployed an Nginx container, you'll see an assessment score.
Now that you've seen a quick tour of Kubernetes, it's time to dive into each of the components and abstractions.
At the core of Kubernetes is the Pod.
Pods represent and hold a collection of one or more containers. Generally, if you have multiple containers with a hard dependency on each other, you package the containers inside a single pod.
In this example there is a pod that contains the monolith and nginx containers.
Pods also have Volumes. Volumes are data disks that live as long as the pods live, and can be used by the containers in that pod. Pods provide a shared namespace for their contents which means that the two containers inside of our example pod can communicate with each other, and they also share the attached volumes.
Pods also share a network namespace. This means that there is one IP Address per pod.
Next, a deeper dive into pods.
Pods can be created using pod configuration files. Take a moment to explore the monolith pod configuration file. There are a few things to notice here, such as the container the pod runs and the port it exposes.
Create the monolith pod using kubectl, then use the kubectl get pods command to list all pods running in the default namespace. Once the monolith pod is up, use the kubectl describe command to get more information about it. You'll see a lot of information about the monolith pod, including the Pod IP address and the event log. This information will come in handy when troubleshooting.
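A sketch of the pod workflow described above, assuming the sample keeps the manifest under pods/monolith.yaml:

```
# Create the monolith pod from its configuration file.
kubectl create -f pods/monolith.yaml

# List all pods running in the default namespace.
kubectl get pods

# Inspect the monolith pod's details, IP address, and event log.
kubectl describe pods monolith
```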
Kubernetes makes it easy to create pods by describing them in configuration files and easy to view information about them when they are running. At this point you have the ability to create all the pods your deployment requires!
By default, pods are allocated a private IP address and cannot be reached outside of the cluster. Use the kubectl port-forward
command to map a local port to a port inside the monolith pod.
Open a second Cloud Shell terminal. Now you have two terminals, one to run the kubectl port-forward
command, and the other to issue curl
commands.
In the 2nd terminal, run this command to set up port-forwarding:
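A minimal sketch of the two commands, assuming the monolith listens on port 80 and using local port 10080:

```
# 2nd terminal: forward local port 10080 to port 80 inside the monolith pod.
kubectl port-forward monolith 10080:80

# 1st terminal: talk to the pod through the forwarded port.
curl http://127.0.0.1:10080
```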
Then, in the 1st terminal, start talking to your pod using curl. Yes! You got a very friendly "hello" back from your container.
Next, use the curl command to see what happens when you hit a secure endpoint. Uh oh. Try logging in; enter the super-secret password password to login. Logging in caused a JWT token to print out. Since copying and pasting the long token is cumbersome, create an environment variable to hold it.
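The endpoint paths, user name, and port in this sketch are assumptions; jq is preinstalled in Cloud Shell:

```
# Hitting a secure endpoint without credentials is rejected.
curl http://127.0.0.1:10080/secure

# Log in to get a JWT token (you'll be prompted for the password).
curl -u user http://127.0.0.1:10080/login

# Store the token in an environment variable for easy reuse.
TOKEN=$(curl http://127.0.0.1:10080/login -u user | jq -r '.token')
```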
Enter the super-secret password password
again when prompted for the host password.
Then use the token to hit the secure endpoint with curl.
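Assuming the same endpoint and port as above:

```
# Call the secure endpoint with the token in an Authorization header.
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:10080/secure
```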
At this point you should get a response back from the application, letting you know everything is right in the world again.
Use the kubectl logs command to view the logs for the monolith Pod. Open a 3rd terminal and use the -f flag to get a stream of the logs happening in real-time. Now, if you use curl in the 1st terminal to interact with the monolith, you can see the logs updating in the 3rd terminal.

Use the kubectl exec command to run an interactive shell inside the monolith Pod. This can come in handy when you want to troubleshoot from within a container. For example, once you have a shell, you can test external connectivity using the ping command.
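A sketch of the logging and exec commands described above (exact flags are assumptions):

```
# View the monolith pod's logs, then stream them from a 3rd terminal.
kubectl logs monolith
kubectl logs -f monolith

# Open an interactive shell inside the monolith pod.
kubectl exec -it monolith -- /bin/sh

# Inside the container: test external connectivity, then log out.
ping -c 3 google.com
exit
```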
As you can see, interacting with pods is as easy as using the kubectl command. If you need to hit a container remotely, or get a login shell, Kubernetes provides everything you need to get up and going.
Pods aren't meant to be persistent. They can be stopped or started for many reasons - like failed liveness or readiness checks - and this leads to a problem:
What happens if you want to communicate with a set of Pods? When they get restarted they might have a different IP address.
That's where Services come in. Services provide stable endpoints for Pods.
Services use labels to determine what Pods they operate on. If Pods have the correct labels, they are automatically picked up and exposed by our services.
The level of access a service provides to a set of pods depends on the Service's type. Currently there are three types:
- ClusterIP (internal) -- the default type; this Service is only visible inside of the cluster.
- NodePort gives each node in the cluster an externally accessible IP.
- LoadBalancer adds a load balancer from the cloud provider which forwards traffic from the service to Nodes within it.

Now you'll learn how to:
- Create a service.
- Use label selectors to expose a limited set of Pods externally.
Before you can create services, first create a secure pod that can handle https traffic. If you aren't already there, change into the ~/orchestrate-with-kubernetes/kubernetes directory and create the secure-monolith pod from its configuration file.
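The commands for this step likely resemble the following (the file name is an assumption, and any ConfigMaps or Secrets the pod manifest references must exist first):

```
# Work from the sample's kubernetes directory.
cd ~/orchestrate-with-kubernetes/kubernetes

# Create the secure-monolith pod (monolith plus an nginx HTTPS proxy).
kubectl create -f pods/secure-monolith.yaml
```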
Now that you have a secure pod, it's time to expose the secure-monolith Pod externally. To do that, create a Kubernetes service. There are a few things to notice in the monolith service configuration file:

* There's a selector which is used to automatically find and expose any pods with the labels `app: monolith` and `secure: enabled`.
* You have to expose the nodeport here because this is how you'll forward external traffic from port 31000 to nginx (on port 443).
Use the kubectl create command to create the monolith service from the monolith service configuration file.
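Assuming the sample keeps the service definition under services/monolith.yaml:

```
# Create the monolith service from its configuration file.
kubectl create -f services/monolith.yaml
```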
Click Check my progress below to check your lab progress. If you successfully created Monolith pods and service, you'll see an assessment score.
You're using a port to expose the service. This means that it's possible to have port collisions if another app tries to bind to port 31000 on one of your servers.
Normally, Kubernetes would handle this port assignment. In this lab you chose a port so that it's easier to configure health checks later on.
Use the gcloud compute firewall-rules command to allow traffic to the monolith service on the exposed nodeport.
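A sketch of the firewall rule; the rule name is an assumption, while TCP port 31000 comes from the service's nodeport:

```
# Allow external TCP traffic to the exposed nodeport.
gcloud compute firewall-rules create allow-monolith-nodeport --allow=tcp:31000
```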
Click Check my progress below to check your lab progress. If you successfully created a firewall rule to allow TCP traffic on port 31000, you'll see an assessment score.
Now that everything is set up, you should be able to hit the secure-monolith service from outside the cluster without using port forwarding. Get an external IP address for one of the cluster nodes, then try hitting the secure-monolith service with curl.
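A sketch of the external test; the use of HTTPS with a self-signed certificate is an assumption:

```
# Find an external IP address for one of the cluster nodes.
gcloud compute instances list

# Try the secure-monolith service on the exposed nodeport.
curl -k https://<EXTERNAL_IP>:31000
```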
Uh oh! That timed out. What's going wrong?
Note: It's time for a quick knowledge check.
Use the following commands to answer the questions below:
- kubectl get services monolith
- kubectl describe services monolith
Questions:
- Why are you unable to get a response from the monolith service?
- How many endpoints does the monolith service have?
- What labels must a Pod have to be picked up by the monolith service?
Hint: it has to do with labels. You'll fix the issue in the next section.
Currently the monolith service does not have endpoints. One way to troubleshoot an issue like this is to use the kubectl get pods
command with a label query.
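Using the labels the monolith service selects on:

```
# Pods carrying the app=monolith label.
kubectl get pods -l "app=monolith"

# Pods carrying both labels the service requires -- nothing is returned yet.
kubectl get pods -l "app=monolith,secure=enabled"
```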
Notice this label query does not print any results. It seems like you need to add the "secure=enabled" label to the secure-monolith pod.
Use the kubectl label command to add the missing secure=enabled label to the secure-monolith Pod. Afterwards, check that your labels have been updated and view the list of endpoints on the monolith service. And you have one!
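The labeling and verification steps likely look like this, ending with a retest of the service from outside the cluster:

```
# Add the missing label to the secure-monolith pod.
kubectl label pods secure-monolith 'secure=enabled'

# Confirm the labels were updated.
kubectl get pods secure-monolith --show-labels

# The monolith service now lists an endpoint.
kubectl describe services monolith | grep Endpoints

# Hit the service from outside the cluster again.
curl -k https://<EXTERNAL_IP>:31000
```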
Bam! Houston, we have contact.
Click Check my progress below to check your lab progress. If you successfully added labels to monolith pods, you'll see an assessment score.
The goal of this lab is to get you ready for scaling and managing containers in production. That's where Deployments come in. Deployments are a declarative way to ensure that the number of Pods running is equal to the desired number of Pods, specified by the user.
The main benefit of Deployments is in abstracting away the low-level details of managing Pods. Behind the scenes, Deployments use ReplicaSets to manage starting and stopping the Pods. If Pods need to be updated or scaled, the Deployment will handle that. Deployments also handle restarting Pods if they happen to go down for some reason.
Look at a quick example:
Pods are tied to the lifetime of the Node they are created on. In the example above, Node3 went down (taking a Pod with it). Instead of manually creating a new Pod and finding a Node for it, your Deployment created a new Pod and started it on Node2.
That's pretty cool!
It's time to combine everything you learned about Pods and Services to break up the monolith application into smaller Services using Deployments.
You're going to break the monolith app into three separate pieces:

- auth -- handles authentication and generates JWT tokens.
- hello -- serves the "hello" greeting.
- frontend -- routes external traffic to the auth and hello services.

You are ready to create deployments, one for each service. Afterwards, you'll define internal services for the auth and hello deployments and an external service for the frontend deployment. Once finished, you'll be able to interact with the microservices just like with the monolith, only now each piece can be scaled and deployed independently!
Start with the auth deployment. Examine the auth deployment configuration file: the deployment creates 1 replica, and you're using version 2.0.0 of the auth container.
When you run the kubectl create command to create the auth deployment, it makes one pod that conforms to the data in the deployment manifest. This means you can scale the number of Pods by changing the number specified in the replicas field.
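Assuming the sample keeps the manifest under deployments/auth.yaml:

```
# Create the auth deployment from its manifest.
kubectl create -f deployments/auth.yaml
```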
Next, use the kubectl create command to create the auth service, then do the same to create and expose the hello and frontend deployments and services. Interact with the frontend by grabbing its external IP address and curling it. It may take a minute for the external IP to be generated; re-run the command if the EXTERNAL-IP column status is pending. And you get a hello response back!
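A sketch of the remaining steps; the file names are assumptions, and any ConfigMaps or Secrets the frontend manifest references must exist first:

```
# Create the auth service, then the hello deployment and service.
kubectl create -f services/auth.yaml
kubectl create -f deployments/hello.yaml
kubectl create -f services/hello.yaml

# Create and expose the frontend.
kubectl create -f deployments/frontend.yaml
kubectl create -f services/frontend.yaml

# Grab the frontend's external IP (re-run until EXTERNAL-IP is populated),
# then curl it; -k assumes a self-signed certificate.
kubectl get services frontend
curl -k https://<EXTERNAL-IP>
```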
Click Check my progress below to check your lab progress. If you successfully created Auth, Hello and Frontend deployments, you'll see an assessment score.
Congratulations! You've developed a multi-service application using Kubernetes. The skills you've learned here will allow you to deploy complex applications on Kubernetes using a collection of deployments and services.
...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.
Manual Last Updated April 29, 2024
Lab Last Tested April 29, 2024
Copyright 2025 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.