
Before you begin
- Labs create a Google Cloud project and resources for a fixed time
- Labs have a time limit and no pause feature. If you end the lab, you'll have to restart from the beginning.
- On the top left of your screen, click Start lab to begin
Create cluster and deploy an app / 40
Migrate to an Optimized Nodepool / 20
Apply a Frontend Update / 20
Autoscale from Estimated Traffic / 20
In a challenge lab you’re given a scenario and a set of tasks. Instead of following step-by-step instructions, you will use the skills learned from the labs in the course to figure out how to complete the tasks on your own! An automated scoring system (shown on this page) will provide feedback on whether you have completed your tasks correctly.
When you take a challenge lab, you will not be taught new Google Cloud concepts. You are expected to extend your learned skills, like changing default values and reading and researching error messages to fix your own mistakes.
To score 100% you must successfully complete all tasks within the time period!
This lab is only recommended for students who are enrolled in the Optimize Costs for Google Kubernetes Engine course. Are you ready for the challenge?
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
You are the lead Google Kubernetes Engine admin on a team that manages the online shop for OnlineBoutique.
You are ready to deploy your team's site to Google Kubernetes Engine, but you are still looking for ways to keep costs down and performance up.
You will be responsible for deploying the OnlineBoutique app to GKE and making some configuration changes that have been recommended for cost optimization.
Here are some guidelines you've been requested to follow when deploying:
- Machine type: e2-standard-2 (2 vCPU, 8 GB memory)
- Enroll the cluster in a release channel.

Before you can deploy the application, you'll need to create a cluster in the zone assigned to your lab. Start small and make a zonal cluster with only two (2) nodes.
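As a sketch, such a cluster could be created with a single gcloud command. The cluster name and zone below are placeholders (use the values from your lab instructions), and the specific release channel is an assumption:

```shell
# Placeholder name and zone -- substitute the values from your lab instructions.
gcloud container clusters create onlineboutique-cluster \
    --zone=us-central1-a \
    --machine-type=e2-standard-2 \
    --num-nodes=2 \
    --release-channel=stable
```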
Before you deploy the shop, make sure to set up some namespaces to separate resources on your cluster in accordance with the two environments: dev and prod.
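The two namespaces can be created directly with kubectl:

```shell
kubectl create namespace dev
kubectl create namespace prod
```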
After that, deploy the application to the dev namespace.
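The deploy command itself isn't reproduced here; assuming the app sources follow the public microservices-demo layout, the deployment would look something like:

```shell
# Assumes the OnlineBoutique (microservices-demo) repository is cloned locally.
kubectl apply -f ./release/kubernetes-manifests.yaml --namespace dev
```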
Click Check my progress to verify the objective.
You come to the conclusion that you should make changes to the cluster's node pool:
- Create a new node pool with the name specified in your lab instructions.
- Set the number of nodes to 2.

Once the new node pool is set up, migrate your application's deployments to the new node pool by cordoning off and draining default-pool. Delete default-pool once the deployments have safely migrated.
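One possible command sequence for the migration; the pool name, machine type, cluster name, and zone below are all placeholders, so substitute your lab's values:

```shell
# Create the new pool (placeholder name, machine type, cluster, and zone).
gcloud container node-pools create optimized-pool \
    --cluster=onlineboutique-cluster \
    --zone=us-central1-a \
    --machine-type=custom-2-3584 \
    --num-nodes=2

# Cordon, then drain, every node in default-pool so pods reschedule
# onto the new pool.
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o name); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data --force
done

# Delete the old pool once the deployments have migrated.
gcloud container node-pools delete default-pool \
    --cluster=onlineboutique-cluster --zone=us-central1-a
```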
Click Check my progress to verify the objective.
You just got it all deployed, and now the dev team wants you to push a last-minute update before the upcoming release! That's OK. You know this can be done without causing downtime.
Set a pod disruption budget for your frontend deployment:
- Name it onlineboutique-frontend-pdb.
- Set its min-available to 1.
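kubectl can create the budget in one line; the app=frontend selector here is an assumption based on the labels in the public OnlineBoutique manifests:

```shell
kubectl create poddisruptionbudget onlineboutique-frontend-pdb \
    --namespace=dev \
    --selector=app=frontend \
    --min-available=1
```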
Now, you can apply your team's update. They've changed the file used for the home page's banner and provided you an updated Docker image:
- Edit your frontend deployment and change its image to the updated one.
- While editing your deployment, change the imagePullPolicy to Always.
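With `kubectl edit deployment/frontend --namespace=dev`, the relevant fields under spec.template.spec.containers would be changed like this (the container name follows the public manifests, and the image tag is a placeholder for the one your team provided):

```yaml
# Excerpt of the frontend Deployment's container spec.
containers:
  - name: server
    image: gcr.io/example-project/frontend:v2   # placeholder -- use the provided image
    imagePullPolicy: Always
```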
Click Check my progress to verify the objective.
A marketing campaign is coming up that will cause a traffic surge on the OnlineBoutique shop. Normally, you would spin up extra resources in advance to handle the estimated traffic spike. However, if the traffic spike is larger than anticipated, you may get woken up in the middle of the night to spin up more resources to handle the load.
You also want to avoid running extra resources for any longer than necessary. To both lower costs and save yourself a potential headache, you can configure the Kubernetes deployments to scale automatically when the load begins to spike.
Apply horizontal pod autoscaling to your frontend deployment in order to handle the traffic surge:
- Scale based on a target CPU utilization of 50%.
- Set the pod scaling between a minimum of 1 replica and the maximum specified in your lab instructions.
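A sketch with kubectl autoscale; the maximum shown is a placeholder for the value in your lab instructions:

```shell
kubectl autoscale deployment frontend \
    --namespace=dev \
    --cpu-percent=50 \
    --min=1 \
    --max=10   # placeholder maximum -- use the value from your lab
```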
Of course, you want to make sure that users won't experience downtime while the deployment is scaling.
To make sure the scaling action occurs without downtime, set the deployment to scale with a target CPU utilization of 50%. This should allow plenty of headroom to handle the load as the autoscaling occurs. Set the deployment to scale between a minimum of 1 replica and the specified maximum.
But what if the spike exceeds the compute resources you currently have provisioned? You may need to add additional compute nodes.
Next, ensure that your cluster is able to automatically spin up additional compute nodes if necessary. However, handling scaling up isn’t the only case you can handle with autoscaling.
Thinking ahead, you configure both a minimum number of nodes, and a maximum number of nodes. This way, the cluster can add nodes when traffic is high, and reduce the number of nodes when traffic is low.
Update your cluster autoscaler to scale between 1 node minimum and 6 nodes maximum.
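A possible form of the update; the cluster name, zone, and node pool are placeholders:

```shell
gcloud container clusters update onlineboutique-cluster \
    --zone=us-central1-a \
    --node-pool=optimized-pool \
    --enable-autoscaling \
    --min-nodes=1 \
    --max-nodes=6
```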
Click Check my progress to verify the objective.
Fortunately, OnlineBoutique was designed with built-in load generation. Currently, your dev instance is simulating traffic on the store with ~10 concurrent users.
To better simulate the surge, recreate the loadgenerator pod with a higher number of concurrent users. Replace YOUR_FRONTEND_EXTERNAL_IP with the IP of the frontend-external service.

You should see your recommendationservice crashing or, at least, heavily struggling from the increased demand.
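Assuming the loadgenerator image ships locust (as in the public microservices-demo), one way to rerun it with a heavier load is:

```shell
# Placeholder user count; replace YOUR_FRONTEND_EXTERNAL_IP with the
# external IP of the frontend-external service.
kubectl exec -it deployment/loadgenerator --namespace=dev -- \
    bash -c 'export USERS=8000; locust --host="http://YOUR_FRONTEND_EXTERNAL_IP" \
        --headless -u "8000" 2>&1'
```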
While applying horizontal pod autoscaling to your frontend service keeps your application available during the load test, if you monitor your other workloads, you'll notice that some of them are being pushed heavily for certain resources.
If you still have time left in the lab, inspect some of your other workloads and try to optimize them by applying autoscaling towards the proper resource metric.
You can also see if it would be possible to further optimize your resource utilization with Node Auto Provisioning.
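Node Auto Provisioning can be enabled on the cluster with resource limits; the limits below are purely illustrative, and the cluster name and zone are placeholders:

```shell
gcloud container clusters update onlineboutique-cluster \
    --zone=us-central1-a \
    --enable-autoprovisioning \
    --min-cpu=1 --max-cpu=45 \
    --min-memory=2 --max-memory=160
```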
Congratulations! In this lab, you successfully deployed the OnlineBoutique app to Google Kubernetes Engine and made configuration changes recommended for cost optimization. You also applied horizontal pod autoscaling to your frontend and recommendationservice deployments to handle the traffic surge, and optimized other services by applying autoscaling toward the proper resource metric.
This self-paced lab is part of the Optimize Costs for Google Kubernetes Engine skill badge course. Completing this skill badge course earns you the badge above, to recognize your achievement. Share your badge on your resume and social platforms, and announce your accomplishment using #GoogleCloudBadge.
...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.
Manual Last Updated April 29, 2024
Lab Last Tested April 29, 2024
Copyright 2025 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.