In this lab, you explore the basics of using deployment manifests. A manifest is a file that contains the configuration required for a deployment and can be reused across different Pods. Manifests are easy to change.
Objectives
In this lab, you learn how to perform the following tasks:
Create deployment manifests, deploy them to the cluster, and verify Pod rescheduling as nodes are disabled.
Trigger manual scaling up and down of Pods in deployments.
Trigger deployment rollout (rolling update to new version) and rollbacks.
Perform a Canary deployment.
Lab setup
Access the lab
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method.
On the left is the Lab Details panel with the following:
The Open Google Cloud console button
Time remaining
The temporary credentials that you must use for this lab
Other information, if needed, to step through this lab
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
If necessary, copy the Username below and paste it into the Sign in dialog.
{{{user_0.username | "Username"}}}
You can also find the Username in the Lab Details panel.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
{{{user_0.password | "Password"}}}
You can also find the Password in the Lab Details panel.
Click Next.
Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
Click through the subsequent pages:
Accept the terms and conditions.
Do not add recovery options or two-factor authentication (because this is a temporary account).
Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Note: To view a menu with a list of Google Cloud products and services, click the Navigation menu at the top-left, or type the service or product name in the Search field.
After you complete the initial sign-in steps, the project dashboard appears.
Activate Google Cloud Shell
Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5 GB home directory and runs on Google Cloud.
Google Cloud Shell provides command-line access to your Google Cloud resources.
In the Cloud console, on the top-right toolbar, click the Open Cloud Shell button.
Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
You can list the active account name with this command:
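The following assumes the standard gcloud CLI that is preinstalled in Cloud Shell:
gcloud auth list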
Task 1. Create deployment manifests and deploy to the cluster
You will create a deployment using a sample deployment manifest called nginx-deployment.yaml. This deployment is configured to run three Pod replicas with a single nginx container in each Pod, listening on TCP port 80.
Create and open a file called nginx-deployment.yaml with nano using the following command:
nano nginx-deployment.yaml
Once nano has opened, paste the following into the nginx-deployment.yaml file:
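A minimal manifest consistent with the description above (three replicas, one nginx container per Pod listening on TCP port 80, Pods labeled app: nginx) is sketched below; the nginx:1.7.9 image tag is an assumption based on the rollback output later in this lab:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80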
Press Ctrl+O, and then press Enter to save your edited file.
Press Ctrl+X to exit the nano text editor.
To deploy your manifest, execute the following command:
kubectl apply -f ./nginx-deployment.yaml
To view a list of deployments, execute the following command:
kubectl get deployments
The output should look like this example.
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 0/3 3 0 3s
Wait a few minutes, and repeat the command until the READY column shows that all of the desired replicas are available (3/3).
The final output should look like the example.
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 42s
Click Check my progress to verify the objective.
Create and deploy manifest nginx deployment
Task 2. Manually scale up and down the number of Pods in deployments
Sometimes, you want to shut down a Pod instance. Other times, you want ten Pods running. In Kubernetes, you can scale a deployment to the desired number of Pod replicas. To shut them all down, you scale to zero.
In this task, you scale Pods up and down in the Google Cloud console and Cloud Shell.
Scale Pods up and down in the console
Switch to the Google Cloud console tab.
In the Navigation menu, click Kubernetes Engine > Workloads.
Click nginx-deployment (your deployment) to open the Deployment details page.
At the top, click ACTIONS > Scale > Edit Replicas.
Type 1 and click SCALE.
This action scales down your deployment. You should see the Pod status being updated under Managed Pods. You might have to click Refresh.
Scale Pods up and down in the shell
Switch back to the Cloud Shell browser tab.
In the Cloud Shell, to view the deployments and the number of Pods in each, execute the following command:
kubectl get deployments
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/1 1 1 3m
To scale the Pod back up to three replicas, execute the following command:
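For example, assuming the deployment is still named nginx-deployment, you can use kubectl scale:
kubectl scale deployment nginx-deployment --replicas=3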
To view the deployments and confirm the new number of Pods, execute the following command:
kubectl get deployments
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 4m
Task 3. Trigger a deployment rollout and a deployment rollback
A deployment's rollout is triggered if and only if the deployment's Pod template (that is, .spec.template) is changed, for example, if the labels or container images of the template are updated. Other updates, such as scaling the deployment, do not trigger a rollout.
In this task, you trigger deployment rollout, and then you trigger deployment rollback.
Trigger a deployment rollout
To update the version of nginx in the deployment, execute the following command:
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1
This updates the container image in your Deployment to nginx v1.9.1.
To annotate the rollout with details on the change, execute the following command:
kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="version change to 1.9.1" --overwrite=true
To view the rollout status, execute the following command:
kubectl rollout status deployment.v1.apps/nginx-deployment
The output should look like the example.
Output:
Waiting for rollout to finish: 1 out of 3 new replicas updated...
Waiting for rollout to finish: 1 out of 3 new replicas updated...
Waiting for rollout to finish: 1 out of 3 new replicas updated...
Waiting for rollout to finish: 2 out of 3 new replicas updated...
Waiting for rollout to finish: 2 out of 3 new replicas updated...
Waiting for rollout to finish: 2 out of 3 new replicas updated...
Waiting for rollout to finish: 1 old replicas pending termination...
Waiting for rollout to finish: 1 old replicas pending termination...
deployment "nginx-deployment" successfully rolled out
To verify the change, get the list of deployments:
kubectl get deployments
The output should look like the example.
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 6m
Click Check my progress to verify the objective.
Update version of nginx in the deployment
View the rollout history of the deployment:
kubectl rollout history deployment nginx-deployment
The output should look like the example. Your output might not be an exact match.
Output:
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1
2 version change to 1.9.1
Trigger a deployment rollback
To roll back an object's rollout, you can use the kubectl rollout undo command.
To roll back to the previous version of the nginx deployment, execute the following command:
kubectl rollout undo deployments nginx-deployment
View the updated rollout history of the deployment:
kubectl rollout history deployment nginx-deployment
The output should look like the example. Your output might not be an exact match.
Output:
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
2 version change to 1.9.1
3
Note: The most recent revision is blank in the CHANGE-CAUSE column because we did not use the kubectl annotate command for the rollback.
View the details of the latest deployment revision:
kubectl rollout history deployment/nginx-deployment --revision=3
The output should look like the example. Your output might not be an exact match, but it will show that the current revision has rolled back to nginx:1.7.9.
Task 4. Define the service type in the manifest
In this task, you create and verify a service that controls inbound traffic to an application. Services can be configured as ClusterIP, NodePort, or LoadBalancer types. In this lab, you configure a LoadBalancer.
Define service types in the manifest
A manifest file called service-nginx.yaml that deploys a LoadBalancer service type has been provided for you. This service is configured to distribute inbound traffic on TCP port 60000 to port 80 on any containers that have the label app: nginx.
Create and open a file called service-nginx.yaml with nano using the following command:
nano service-nginx.yaml
Once nano has opened, paste the following into the service-nginx.yaml file:
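A minimal manifest consistent with the description above (a LoadBalancer service named nginx that forwards TCP port 60000 to port 80 on Pods labeled app: nginx) is sketched below:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80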
Press Ctrl+O, and then press Enter to save your edited file.
Press Ctrl+X to exit the nano text editor.
In the Cloud Shell, to deploy your manifest, execute the following command:
kubectl apply -f ./service-nginx.yaml
This manifest defines a service and applies it to Pods that correspond to the selector. In this case, the manifest is applied to the nginx container that you deployed in task 1. This service also applies to any other Pods with the app: nginx label, including any that are created after the service.
Verify the LoadBalancer creation
To view the details of the nginx service, execute the following command:
kubectl get service nginx
The output should look like the example.
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.X.X.X X.X.X.X 60000:31798/TCP 1m
When the external IP appears, open http://[EXTERNAL_IP]:60000/ in a new browser tab to see the nginx welcome page served through network load balancing.
Note: It may take a few seconds before the EXTERNAL-IP field is populated for your service. This is normal. Just re-run the kubectl get service nginx command every few seconds until the field is populated.
Click Check my progress to verify the objective.
Deploy manifest file that deploys LoadBalancer service type
Task 5. Perform a canary deployment
A canary deployment is a separate deployment used to test a new version of your application. A single service targets both the canary and the normal deployments, and it can direct a subset of users to the canary version to mitigate the risk of new releases.
In this task, you create a canary deployment that deploys a single Pod running a newer version of nginx than your main deployment.
Create and open a file called nginx-canary.yaml with nano using the following command:
nano nginx-canary.yaml
Once nano has opened, paste the following into the nginx-canary.yaml file:
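A minimal canary manifest consistent with this task (a single replica labeled app: nginx so the existing service also routes traffic to it) is sketched below; the nginx:1.9.1 image tag and the extra track: canary label are assumptions for illustration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      track: canary
  template:
    metadata:
      labels:
        app: nginx
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80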
Press Ctrl+O, and then press Enter to save your edited file.
Press Ctrl+X to exit the nano text editor.
The manifest for the nginx Service you deployed in the previous task uses a label selector to target the Pods with the app: nginx label. Both the normal deployment and this new canary deployment have the app: nginx label. Inbound connections will be distributed by the service to both the normal and canary deployment Pods. The canary deployment has fewer replicas (Pods) than the normal deployment, and thus it is available to fewer users than the normal deployment.
Create the canary deployment based on the configuration file:
kubectl apply -f ./nginx-canary.yaml
When the deployment is complete, verify that both the nginx and the nginx-canary deployments are present:
kubectl get deployments
Switch back to the browser tab that is connected to the external LoadBalancer service IP address and refresh the page. You should continue to see the standard Welcome to nginx page.
Switch back to the Cloud Shell and scale down the primary deployment to 0 replicas:
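For example, assuming the main deployment is still named nginx-deployment:
kubectl scale deployment nginx-deployment --replicas=0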
Verify that the only running replica is now the Canary deployment:
kubectl get deployments
Switch back to the browser tab that is connected to the external LoadBalancer service IP address and refresh the page. You should continue to see the standard Welcome to nginx page, showing that the Service is automatically balancing traffic to the canary deployment.
Click Check my progress to verify the objective.
Create a Canary Deployment
Session affinity
The service configuration used in the lab does not ensure that all requests from a single client will always connect to the same Pod. Each request is treated separately and can connect to either the normal nginx deployment or to the nginx-canary deployment.
This potential to switch between different versions may cause problems if there are significant changes in functionality in the canary release. To prevent this, you can set the sessionAffinity field to ClientIP in the specification of the service if you need a client's first request to determine which Pod will be used for all subsequent connections, as shown in the sketch below.
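For example, a sketch of the earlier service manifest with session affinity enabled (same assumed name, selector, and ports as in Task 4) looks like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80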
When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
1 star = Very dissatisfied
2 stars = Dissatisfied
3 stars = Neutral
4 stars = Satisfied
5 stars = Very satisfied
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.