
Google Cloud Skills Boost


Architecting with Google Kubernetes Engine: Workloads


Creating Google Kubernetes Engine Deployments

Lab · 1 hour · 5 credits · Introductory
Note: This lab may include AI tools to support the learning process.

Overview

In this lab, you explore the basics of using deployment manifests. A manifest is a file that contains the configuration required for a deployment, and it can be reused across different Pods. Manifests are easy to change.

Objectives

In this lab, you learn how to perform the following tasks:

  • Create deployment manifests, deploy them to the cluster, and verify Pod rescheduling as nodes are disabled.
  • Trigger manual scaling up and down of Pods in deployments.
  • Trigger deployment rollout (rolling update to new version) and rollbacks.
  • Perform a Canary deployment.

Lab setup

Access the lab

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Cloud console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).

    The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username below and paste it into the Sign in dialog.

    {{{user_0.username | "Username"}}}

    You can also find the Username in the Lab Details panel.

  4. Click Next.

  5. Copy the Password below and paste it into the Welcome dialog.

    {{{user_0.password | "Password"}}}

    You can also find the Password in the Lab Details panel.

  6. Click Next.

    Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  7. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Google Cloud console opens in this tab.

Note: To view a menu with a list of Google Cloud products and services, click the Navigation menu at the top-left, or type the service or product name in the Search field.

After you complete the initial sign-in steps, the project dashboard appears.

Activate Google Cloud Shell

Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud.

Google Cloud Shell provides command-line access to your Google Cloud resources.

  1. In Cloud console, on the top right toolbar, click the Open Cloud Shell button.

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID.

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  • You can list the active account name with this command:
gcloud auth list

Output:

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

Example output:

Credentialed accounts:
 - google1623327_student@qwiklabs.net
  • You can list the project ID with this command:
gcloud config list project

Output:

[core]
project = <project_ID>

Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: Full documentation of gcloud is available in the gcloud CLI overview guide.
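
If you only need the project ID itself, without the [core] section header, you can optionally run the following command (not part of the lab steps):

gcloud config get-value project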

Task 1. Create deployment manifests and deploy to the cluster

In this task, you create a deployment manifest for a Pod inside the cluster.

Connect to the lab GKE cluster

  1. In Cloud Shell, type the following commands to set the environment variables for the region and cluster name:
export my_region={{{ project_0.default_region | REGION }}}
export my_cluster=autopilot-cluster-1
  2. Configure kubectl tab completion in Cloud Shell:
source <(kubectl completion bash)
  3. In Cloud Shell, configure access to your cluster for the kubectl command-line tool, using the following command:
gcloud container clusters get-credentials $my_cluster --region $my_region
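
Optionally, you can confirm that kubectl is now pointing at the lab cluster before you continue. This quick check is not part of the graded lab steps; it simply prints the active context and the cluster's nodes:

kubectl config current-context
kubectl get nodes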

Create a deployment manifest

You will create a deployment using a sample deployment manifest called nginx-deployment.yaml. This deployment is configured to run three Pod replicas with a single nginx container in each Pod listening on TCP port 80.

  1. Create and open a file called nginx-deployment.yaml with nano using the following command:
nano nginx-deployment.yaml
  2. Once nano has opened, paste the following into the nginx-deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

  3. Press Ctrl+O, and then press Enter to save your edited file.

  4. Press Ctrl+X to exit the nano text editor.

  5. To deploy your manifest, execute the following command:

kubectl apply -f ./nginx-deployment.yaml
  6. To view a list of deployments, execute the following command:
kubectl get deployments

The output should look like this example.

Output:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   0/3     3            0           3s
  7. Wait a few minutes, and repeat the command until the number of AVAILABLE replicas reported by the command matches the number of desired replicas (READY shows 3/3).

The final output should look like the example.

Output:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           42s
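
If you want to see the individual Pods that the deployment created (an optional check, not required by the lab), you can list them by the app: nginx label that the manifest applied:

kubectl get pods -l app=nginx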

Click Check my progress to verify the objective. Create and deploy manifest nginx deployment

Task 2. Manually scale up and down the number of Pods in deployments

Sometimes, you want to shut down a Pod instance. Other times, you want ten Pods running. In Kubernetes, you can scale a deployment to the desired number of Pod replicas. To shut them all down, you scale to zero.

In this task, you scale Pods up and down in the Google Cloud console and Cloud Shell.

Scale Pods up and down in the console

  1. Switch to the Google Cloud console tab.
  2. In the Navigation menu, click Kubernetes Engine > Workloads.
  3. Click nginx-deployment (your deployment) to open the Deployment details page.
  4. At the top, click ACTIONS > Scale > Edit Replicas.
  5. Type 1 and click SCALE.

This action scales down your cluster. You should see the Pod status being updated under Managed Pods. You might have to click Refresh.

Scale Pods up and down in the shell

  1. Switch back to the Cloud Shell browser tab.
  2. In the Cloud Shell, to view a list of Pods in the deployments, execute the following command:
kubectl get deployments

Output:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           3m
  3. To scale the Pod back up to three replicas, execute the following command:
kubectl scale --replicas=3 deployment nginx-deployment
  4. To view a list of Pods in the deployments, execute the following command:
kubectl get deployments

Output:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           4m
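
As an optional aside, you can also scale declaratively: change the replicas: value in nginx-deployment.yaml and re-apply the file. This keeps the manifest as the single source of truth for the deployment's desired state:

kubectl apply -f ./nginx-deployment.yaml   # after editing replicas: in the file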

Task 3. Trigger a deployment rollout and a deployment rollback

A deployment's rollout is triggered if and only if the deployment's Pod template (that is, .spec.template) is changed, for example, if the labels or container images of the template are updated. Other updates, such as scaling the deployment, do not trigger a rollout.
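
If you want to inspect the current Pod template before changing it (an optional check, not part of the lab steps), you can print the image the deployment is currently running:

kubectl get deployment nginx-deployment -o jsonpath='{.spec.template.spec.containers[0].image}'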

In this task, you trigger deployment rollout, and then you trigger deployment rollback.

Trigger a deployment rollout

  1. To update the version of nginx in the deployment, execute the following command:
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1

This updates the container image in your Deployment to nginx v1.9.1.

  2. To annotate the rollout with details on the change, execute the following command:
kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="version change to 1.9.1" --overwrite=true
  3. To view the rollout status, execute the following command:
kubectl rollout status deployment.v1.apps/nginx-deployment

The output should look like the example.

Output:

Waiting for rollout to finish: 1 out of 3 new replicas updated...
Waiting for rollout to finish: 1 out of 3 new replicas updated...
Waiting for rollout to finish: 1 out of 3 new replicas updated...
Waiting for rollout to finish: 2 out of 3 new replicas updated...
Waiting for rollout to finish: 2 out of 3 new replicas updated...
Waiting for rollout to finish: 2 out of 3 new replicas updated...
Waiting for rollout to finish: 1 old replicas pending termination...
Waiting for rollout to finish: 1 old replicas pending termination...
deployment "nginx-deployment" successfully rolled out
  4. To verify the change, get the list of deployments:
kubectl get deployments

The output should look like the example.

Output:

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           6m

Click Check my progress to verify the objective. Update version of nginx in the deployment

  5. View the rollout history of the deployment:
kubectl rollout history deployment nginx-deployment

The output should look like the example. Your output might not be an exact match.

Output:

deployments "nginx-deployment" REVISION CHANGE-CAUSE 1 2 version change to 1.9.1

Trigger a deployment rollback

To roll back an object's rollout, you can use the kubectl rollout undo command.

  1. To roll back to the previous version of the nginx deployment, execute the following command:
kubectl rollout undo deployments nginx-deployment
  2. View the updated rollout history of the deployment:
kubectl rollout history deployment nginx-deployment

The output should look like the example. Your output might not be an exact match.

Output:

deployments "nginx-deployment" REVISION CHANGE-CAUSE 2 version change to 1.9.1 3 Note: The most recent update is blank for the CHANGE-CAUSE as we did not use the kubectl annotate command .
  3. View the details of the latest deployment revision:
kubectl rollout history deployment/nginx-deployment --revision=3

The output should look like the example. Your output might not be an exact match but it will show that the current revision has rolled back to nginx:1.7.9.

Output:

deployments "nginx-deployment" with revision #3 Pod Template: Labels: app=nginx pod-template-hash=3123191453 Containers: nginx: Image: nginx:1.7.9 Port: 80/TCP Host Port: 0/TCP Environment: Mounts: Volumes:

Task 4. Define the service type in the manifest

In this task, you create and verify a service that controls inbound traffic to an application. Services can be configured as ClusterIP, NodePort or LoadBalancer types. In this lab, you configure a LoadBalancer.

Define service types in the manifest

You will create a manifest file called service-nginx.yaml that deploys a LoadBalancer service type. This service is configured to distribute inbound traffic on TCP port 60000 to port 80 on any containers that have the label app: nginx.

  1. Create and open a file called service-nginx.yaml with nano using the following command:
nano service-nginx.yaml
  2. Once nano has opened, paste the following into the service-nginx.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80

  3. Press Ctrl+O, and then press Enter to save your edited file.

  4. Press Ctrl+X to exit the nano text editor.

  5. In the Cloud Shell, to deploy your manifest, execute the following command:

kubectl apply -f ./service-nginx.yaml

This manifest defines a service and applies it to Pods that correspond to the selector. In this case, the manifest is applied to the nginx container that you deployed in task 1. This service also applies to any other Pods with the app: nginx label, including any that are created after the service.
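
If you want to inspect what the service selected (an optional check), kubectl describe shows the selector, the ports, and the endpoints (Pod IPs) currently behind the service:

kubectl describe service nginx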

Verify the LoadBalancer creation

  1. To view the details of the nginx service, execute the following command:
kubectl get service nginx

The output should look like the example.

Output:

NAME    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
nginx   LoadBalancer   10.X.X.X     X.X.X.X       60000:31798/TCP   1m
  2. When the external IP appears, open http://[EXTERNAL_IP]:60000/ in a new browser tab to see the server being served through network load balancing.
Note: It may take a few seconds before the ExternalIP field is populated for your service. This is normal. Just re-run the kubectl get services nginx command every few seconds until the field is populated.
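
As an alternative to the browser, you can fetch the page directly from Cloud Shell with curl once the external IP is populated (substitute your own external IP; this is an optional check):

curl http://[EXTERNAL_IP]:60000/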

Click Check my progress to verify the objective. Deploy manifest file that deploys LoadBalancer service type

Task 5. Perform a canary deployment

A canary deployment is a separate deployment used to test a new version of your application. A single service targets both the canary and the normal deployments, and it can direct a subset of users to the canary version to mitigate the risk of new releases.

In this task, you create a canary deployment that deploys a single Pod running a newer version of nginx than your main deployment.

  1. Create and open a file called nginx-canary.yaml with nano using the following command:
nano nginx-canary.yaml
  2. Once nano has opened, paste the following into the nginx-canary.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        track: canary
        Version: 1.9.1
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80

  3. Press Ctrl+O, and then press Enter to save your edited file.

  4. Press Ctrl+X to exit the nano text editor.

The manifest for the nginx Service you deployed in the previous task uses a label selector to target the Pods with the app: nginx label. Both the normal deployment and this new canary deployment have the app: nginx label. Inbound connections will be distributed by the service to both the normal and canary deployment Pods. The canary deployment has fewer replicas (Pods) than the normal deployment, and thus it is available to fewer users than the normal deployment.
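
If you would like to see which Pods the service currently targets before adding the canary (an optional check, not required by the lab), you can list the Pods carrying the app: nginx label and the endpoints registered for the service:

kubectl get pods -l app=nginx --show-labels
kubectl get endpoints nginx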

  5. Create the canary deployment based on the configuration file:
kubectl apply -f ./nginx-canary.yaml
  6. When the deployment is complete, verify that both the nginx and the nginx-canary deployments are present:
kubectl get deployments
  7. Switch back to the browser tab that is connected to the external LoadBalancer service IP and refresh the page. You should continue to see the standard Welcome to nginx page.
  8. Switch back to the Cloud Shell and scale down the primary deployment to 0 replicas:
kubectl scale --replicas=0 deployment nginx-deployment
  9. Verify that the only running replica is now the canary deployment:
kubectl get deployments
  10. Switch back to the browser tab that is connected to the external LoadBalancer service IP and refresh the page. You should continue to see the standard Welcome to nginx page, showing that the Service is automatically balancing traffic to the canary deployment.

Click Check my progress to verify the objective. Create a Canary Deployment

Session affinity

The service configuration used in the lab does not ensure that all requests from a single client will always connect to the same Pod. Each request is treated separately and can connect to either the normal nginx deployment or to the nginx-canary deployment.

This potential to switch between different versions may cause problems if there are significant changes in functionality in the canary release. To prevent this, you can set the sessionAffinity field to ClientIP in the specification of the service if you need a client's first request to determine which Pod is used for all subsequent connections.

For example:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80
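
To try this out, you would add the sessionAffinity line to service-nginx.yaml and re-apply the manifest; this is not required for the lab:

kubectl apply -f ./service-nginx.yaml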

End your lab

When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
