
Before you begin
- Labs create a Google Cloud project and resources for a fixed time.
- Labs have a time limit and no pause feature. If you end the lab, you'll have to restart from the beginning.
- On the top left of your screen, click Start lab to begin.
Create your cluster (20 points)
Create your pod (30 points)
Create a Kubernetes Service (30 points)
Scale up your service (20 points)
The goal of this hands-on lab is for you to turn code that you have developed into a replicated application running on Kubernetes, hosted on Kubernetes Engine. For this lab, the code will be a simple Hello World Node.js app.
Here's a diagram of the various parts in play in this lab, to help you understand how the pieces fit together with one another. Use this as a reference as you progress through the lab; it should all make sense by the time you get to the end (but feel free to ignore this for now).
Kubernetes is an open source project (available at kubernetes.io) which can run on many different environments: from laptops to high-availability multi-node clusters, from public clouds to on-premises deployments, and from virtual machines to bare metal.
For the purpose of this lab, using a managed environment such as Kubernetes Engine (a Google-hosted version of Kubernetes running on Compute Engine) will allow you to focus more on experiencing Kubernetes rather than setting up the underlying infrastructure.
Familiarity with standard Linux text editors such as vim, emacs, or nano will be helpful. Students are expected to type the commands themselves, to help encourage learning of the core concepts. Many labs include a code block that contains the required commands; you can easily copy and paste the commands from the code block into the appropriate places during the lab.
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method. On the left is the Lab Details pane with the following:
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
If necessary, copy the Username below and paste it into the Sign in dialog.
You can also find the Username in the Lab Details pane.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
You can also find the Password in the Lab Details pane.
Click Next.
Click through the subsequent pages:
After a few moments, the Google Cloud console opens in this tab.
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell at the top of the Google Cloud console.
Click through the following windows:
When you are connected, you are already authenticated, and the project is set to your Project_ID. gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
(Optional) You can list the active account name and the current project ID with gcloud. For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.
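The commands for these optional checks are the standard gcloud verification commands; as a sketch (output will vary by lab instance):

```shell
# List the active (authenticated) account:
gcloud auth list

# List the project ID your session is configured to use:
gcloud config list project
```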
1. Using Cloud Shell, write a simple Node.js server that you'll deploy to Kubernetes Engine. (vi is used here, but nano and emacs are also available in Cloud Shell. You can also use the Web editor feature of Cloud Shell, as described in the How Cloud Shell works guide.)
2. Save the server.js file by pressing Esc, then typing :wq and pressing Enter.
3. Since Cloud Shell has the node executable installed, start the server with node server.js (the command produces no output).
4. Use Cloud Shell's Web preview feature to view your application on port 8080. A new browser tab will open to display your results:
Next you will package this application in a Docker container.
1. Create a Dockerfile that describes the image you want to build. Docker container images can extend from other existing images, so for this image, we'll extend from an existing Node image. This "recipe" for the Docker image will:
- Start from the node image found on Docker Hub.
- Expose port 8080.
- Copy your server.js file to the image.
- Start the node server.
2. Save the Dockerfile by pressing Esc, then typing :wq and pressing Enter.
3. Build the image, replacing PROJECT_ID with your Project ID, found in the Console and the Lab Details section of the lab. It'll take some time to download and extract everything, but you can see the progress bars as the image builds.
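A Dockerfile matching that recipe might look like this (a sketch; the Node base image tag is an assumption — use whatever your lab specifies):

```dockerfile
# Start from the node image found on Docker Hub (tag is an assumption).
FROM node:6.9.2
# The app listens on port 8080.
EXPOSE 8080
# Copy the server into the image and run it.
COPY server.js .
CMD ["node", "server.js"]
```

You would then build it from the directory containing the Dockerfile with a command of the form docker build -t gcr.io/PROJECT_ID/hello-node:v1 .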
Once complete, test the image locally by running a Docker container as a daemon on port 8080 from your newly-created container image.
Run the container, replacing PROJECT_ID with your Project ID, found in the Console and the Lab Details section of the lab. Your output should look something like this:
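The run step likely uses a command of this form (the v1 tag is carried over from the build step):

```shell
# -d runs the container as a daemon; -p maps host port 8080 to container port 8080.
docker run -d -p 8080:8080 gcr.io/PROJECT_ID/hello-node:v1
```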
To see your results, use curl from your Cloud Shell prompt. This is the output you should see:
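Checking the running container is a one-liner; it should return the Hello World! response:

```shell
curl http://localhost:8080
```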
Complete documentation for the docker run command can be found in the Docker run reference.
Next, stop the running container.
1. Find your container's ID by listing the running containers. Your output should look like this:
2. Stop the container, replacing [CONTAINER ID] with the value provided from the previous step. Your console output should resemble the following (your container ID):
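The stop step uses the standard Docker commands (sketch; the container ID is a placeholder):

```shell
# List running containers to find the ID:
docker ps

# Stop the container, replacing [CONTAINER ID] with the ID from docker ps:
docker stop [CONTAINER ID]
```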
Now that the image is working as intended, push it to the Google Container Registry, a private repository for your Docker images, accessible from your Google Cloud projects.
If prompted with Do you want to continue (Y/n)?, enter Y.
Push the image to the registry, replacing PROJECT_ID with your Project ID, found in the Console or the Lab Details section of the lab. The initial push may take a few minutes to complete. You'll see the progress bars as it builds.
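The push likely takes this form (a sketch; the tag matches the one used at build time):

```shell
docker push gcr.io/PROJECT_ID/hello-node:v1
```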
Now you have a project-wide Docker image available which Kubernetes can access and orchestrate.
Note: The registry stores your image in a location determined by the registry domain (in this case, gcr.io). In your own environment you can be more specific about which zone and bucket to use; refer to the registry documentation to learn more.
Now you're ready to create your Kubernetes Engine cluster. A cluster consists of a Kubernetes master API server hosted by Google and a set of worker nodes. The worker nodes are Compute Engine virtual machines.
Create a cluster using gcloud, replacing PROJECT_ID with your Project ID, found in the console and in the Lab Details section of the lab. You can safely ignore warnings that come up when the cluster builds.
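The cluster-creation command likely takes this form (the cluster name, node count, machine type, and ZONE placeholder are assumptions — use the values your lab specifies):

```shell
# Make sure gcloud targets your lab project:
gcloud config set project PROJECT_ID

# Create a small two-node cluster:
gcloud container clusters create hello-world \
    --num-nodes 2 \
    --machine-type e2-medium \
    --zone ZONE
```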
The console output should look like this:
Alternatively, you can create this cluster through the Console by opening the Navigation menu and selecting Kubernetes Engine > Kubernetes clusters > Create.
If you select Navigation menu > Kubernetes Engine, you'll see that you have a fully-functioning Kubernetes cluster powered by Kubernetes Engine.
It's time to deploy your own containerized application to the Kubernetes cluster! From now on you'll use the kubectl
command line (already set up in your Cloud Shell environment).
Click Check my progress below to check your lab progress.
A Kubernetes pod is a group of containers tied together for administration and networking purposes. It can contain single or multiple containers. Here you'll use one container built with your Node.js image stored in your private container registry. It will serve content on port 8080.
Create a pod with the kubectl run command, replacing PROJECT_ID with your Project ID, found in the console and in the Lab Details section of the lab. Output:
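With the older kubectl releases this lab assumes, kubectl run created a Deployment directly; the command likely looked like this (the name and port flag are assumptions based on the surrounding text):

```shell
kubectl run hello-node \
    --image=gcr.io/PROJECT_ID/hello-node:v1 \
    --port=8080
```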
As you can see, you've created a deployment object. Deployments are the recommended way to create and scale pods. Here, a new deployment manages a single pod replica running the hello-node:v1
image.
View the deployment with kubectl get deployments, and the pod it manages with kubectl get pods; each command prints a short status table as its output.
Now is a good time to go through some interesting kubectl commands. None of these will change the state of the cluster. To view the full reference documentation, refer to Command line tool (kubectl). And for troubleshooting:
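Typical read-only commands at this point include the following (a sketch; none of these change cluster state, and the pod name is a placeholder):

```shell
kubectl get deployments   # list deployments
kubectl get pods          # list pods
kubectl cluster-info      # cluster endpoint details
kubectl config view       # current kubectl configuration

# For troubleshooting:
kubectl get events        # recent cluster events
kubectl logs <pod-name>   # logs from a pod
```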
You now need to make your pod accessible to the outside world.
Click Check my progress below to check your lab progress.
By default, the pod is only accessible by its internal IP within the cluster. In order to make the hello-node
container accessible from outside the Kubernetes virtual network, you have to expose the pod as a Kubernetes service.
Expose the pod to the outside world with the kubectl expose command combined with the --type="LoadBalancer" flag. This flag is required for the creation of an externally accessible IP. Output:
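The expose step likely looks like this (assuming the deployment is named hello-node, as created earlier):

```shell
kubectl expose deployment hello-node --type="LoadBalancer"
```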
The flag used in this command specifies that it is using the load-balancer provided by the underlying infrastructure (in this case the Compute Engine load balancer). Note that you expose the deployment, and not the pod, directly. This will cause the resulting service to load balance traffic across all pods managed by the deployment (in this case only 1 pod, but you will add more replicas later).
The Kubernetes master creates the load balancer and related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud.
Now use kubectl get services to list all the cluster services. This is the output you should see:
There are 2 IP addresses listed for your hello-node service, both serving port 8080. The CLUSTER-IP
is the internal IP that is only visible inside your cloud virtual network; the EXTERNAL-IP
is the external load-balanced IP.
EXTERNAL-IP
may take several minutes to become available and visible. If the EXTERNAL-IP
is missing, wait a few minutes and run the command again.
Now open a web browser and point it to your service's external IP: http://<EXTERNAL_IP>:8080
At this point you've gained several features from moving to containers and Kubernetes: you do not need to specify which host to run your workload on, and you also benefit from service monitoring and restart. Now see what else can be gained from your new Kubernetes infrastructure.
Click Check my progress below to check your lab progress.
One of the powerful features offered by Kubernetes is how easy it is to scale your application. Suppose you suddenly need more capacity. You can tell the replication controller to manage a new number of replicas for your pod.
Scale the deployment up to four replicas, then list the running pods; each command prints its output.
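The scaling step likely uses these commands (a sketch, assuming the deployment is named hello-node):

```shell
# Declare the desired replica count:
kubectl scale deployment hello-node --replicas=4

# Watch the new pods come up:
kubectl get pods
```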
Re-run the above command until you see all 4 replicas created.
This is the output you should see:
A declarative approach is being used here. Rather than starting or stopping new instances, you declare how many instances should be running at all times. Kubernetes reconciliation loops make sure that reality matches what you requested and take action if needed.
Here's a diagram summarizing the state of your Kubernetes cluster:
Click Check my progress below to check your lab progress.
At some point the application that you've deployed to production will require bug fixes or additional features. Kubernetes helps you deploy a new version to production without impacting your users.
1. Modify your server.js so the application returns a new message (for example, change the response from Hello World! to Hello Kubernetes World!).
2. Save the server.js file by pressing Esc, then typing :wq and pressing Enter.
Now you can build and publish a new container image to the registry with an incremented tag (v2 in this case).
Build and push the updated image, replacing PROJECT_ID with your lab project ID.
Kubernetes will smoothly update your replication controller to the new version of the application. In order to change the image label for your running container, you will edit the existing hello-node deployment and change the image from gcr.io/PROJECT_ID/hello-node:v1 to gcr.io/PROJECT_ID/hello-node:v2.
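The v2 build-and-push likely mirrors the v1 commands with the new tag (a sketch):

```shell
docker build -t gcr.io/PROJECT_ID/hello-node:v2 .
docker push gcr.io/PROJECT_ID/hello-node:v2
```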
Edit the deployment with the kubectl edit command. It opens a text editor displaying the full deployment yaml configuration. It isn't necessary to understand the full yaml config right now; just understand that by updating the spec.template.spec.containers.image field in the config, you are telling the deployment to update the pods with the new image.
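The fragment of the deployment yaml you are looking for looks roughly like this (abbreviated; only the image line changes):

```yaml
spec:
  template:
    spec:
      containers:
      - image: gcr.io/PROJECT_ID/hello-node:v2
```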
Find spec > containers > image and change the version number from v1 to v2. This is the output you should see:
New pods will be created with the new image and the old pods will be deleted.
This is the output you should see (you may need to rerun the above command to see the following):
While this is happening, the users of your services shouldn't see any interruption. After a little while they'll start accessing the new version of your application. You can find more details on rolling updates in the Performing a Rolling Update documentation.
Hopefully, with these deployment, scaling, and updating features, you'll agree that once you've set up your Kubernetes Engine cluster, Kubernetes helps you focus on the application rather than the infrastructure.
Test your knowledge about Google Cloud Platform by taking our quiz. (Please select multiple correct options.)
This concludes this hands-on lab with Kubernetes. You've only scratched the surface of this technology. Explore with your own pods, replication controllers, and services - and also check out liveness probes (health checks) and consider using the Kubernetes API directly.
Try Managing Deployments using Kubernetes Engine, or check out these suggestions:
Google Cloud training and certification
...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.
Manual Last Updated March 14, 2024
Lab Last Tested March 14, 2024
Copyright 2025 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.