
Managing Traffic Flow with CSM


Overview

Cloud Service Mesh is an architecture that enables managed, observable, and secure communication among your services, making it easier to create robust enterprise applications made up of many microservices on your chosen infrastructure. It manages the common requirements of running a service, such as monitoring, networking, and security, with consistent, powerful tools, making it easier for service developers and operators to focus on creating and managing great applications for their users.

Cloud Service Mesh’s traffic management model relies on the following two components:

  • Control plane: manages and configures the Envoy proxies to route traffic and enforce policies.
  • Data plane: encompasses all network communication between microservices performed at runtime by the Envoy proxies.

These components enable mesh traffic management features including:

  • Service discovery
  • Load balancing
  • Traffic routing and control

Objectives

In this lab, you learn how to perform the following tasks:

  • Configure and use Istio Gateways
  • Apply default destination rules for all available versions
  • Apply virtual services to route by default to only one version
  • Route to a specific version of a service based on user identity
  • Shift traffic gradually from one version of a microservice to another
  • Use the Cloud Service Mesh dashboard to view routing to multiple versions
  • Set up networking best practices such as retries, circuit breakers, and timeouts

Setup and requirements

In this task, you use Qwiklabs and perform initialization steps for your lab.

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
    There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts.
    If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

After you complete the initial sign-in steps, the project dashboard appears.

  1. Click Select a project, highlight your GCP Project ID, and click OPEN to select your project.

Activate Google Cloud Shell

Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud.

Google Cloud Shell provides command-line access to your Google Cloud resources.

  1. In Cloud console, on the top right toolbar, click the Open Cloud Shell button.

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID.

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  • You can list the active account name with this command:
gcloud auth list

Output:

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

Example output:

Credentialed accounts:
 - google1623327_student@qwiklabs.net
  • You can list the project ID with this command:
gcloud config list project

Output:

[core]
project = <project_ID>

Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: Full documentation of gcloud is available in the gcloud CLI overview guide.

Task 1. Review Traffic Management use cases

Different traffic management capabilities are enabled by using different configuration options.

Example: traffic splitting

Route traffic to multiple versions of a service.
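The example configuration itself isn't reproduced on this page; a minimal VirtualService sketch, assuming reviews subsets v1 and v2 are defined in a DestinationRule, splits traffic by weight like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    # Weighted split: 75% of requests go to subset v1, 25% to subset v2.
    - destination:
        host: reviews
        subset: v1
      weight: 75
    - destination:
        host: reviews
        subset: v2
      weight: 25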

Example: timeouts

Set a timeout, the amount of time Istio waits for a response to a request. By default, the request timeout is disabled, but it can be overridden per route.
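For instance, a route rule sketched like the following (the 10 second value and the ratings subset are illustrative, following the standard Istio samples) fails requests that take longer than the timeout:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    # Fail the call if ratings does not respond within 10 seconds.
    timeout: 10s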

Example: retries

A retry is an attempt to complete an operation multiple times if it fails. Adjust the maximum number of retry attempts, or the number of attempts possible within the default or overridden timeout period.
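A sketch of a retry policy, with illustrative values:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    retries:
      attempts: 3        # try the request up to 3 times...
      perTryTimeout: 2s  # ...allowing 2 seconds per attempt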

Example: fault injection: inserting delays

Fault injection is a testing method that introduces errors into a system to ensure that it can withstand and recover from error conditions.

This example introduces a 5 second delay in 10% of the requests to the "v1" version of the ratings microservice.
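The configuration itself is not shown on this page; a sketch matching that description, using Istio's fault-injection fields, would be:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percentage:
          value: 10     # affect 10% of requests
        fixedDelay: 5s  # inject a fixed 5 second delay
    route:
    - destination:
        host: ratings
        subset: v1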

Example: fault injection: inserting aborts
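The original example configuration is not reproduced on this page; a sketch matching the description below, using Istio's abort-fault fields, would be:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      abort:
        percentage:
          value: 10      # abort 10% of requests...
        httpStatus: 400  # ...with an HTTP 400 response
    route:
    - destination:
        host: ratings
        subset: v1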

The above example returns an HTTP 400 error code for 10% of the requests to the ratings service "v1".

Example: conditional routing: based on source labels

A rule can indicate that it only applies to calls from workloads (pods) implementing the version v2 of the reviews service.
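A sketch of such a rule (the ratings destination is illustrative):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    # Apply this route only to calls coming from reviews v2 pods.
    - sourceLabels:
        app: reviews
        version: v2
    route:
    - destination:
        host: ratings
        subset: v1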

Example: conditional routing: based on request headers
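The original rule is not reproduced on this page; a sketch matching the description below would be (Istio header matching supports exact, prefix, and regex; an exact match is shown here):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: atharvak
    route:
    - destination:
        host: reviews
        subset: v2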

The above rule only applies to an incoming request if it includes a custom "end-user" header that contains the string “atharvak”.

Task 2. Complete lab setup

This lab environment has already been partially configured.

  • A GKE cluster named gke was created.
  • Cloud Service Mesh was installed.
  • The Bookinfo multi-service sample application was deployed.

Configure cluster access for kubectl

  1. Set the Zone environment variable:

    CLUSTER_ZONE={{{ project_0.default_zone| "Zone added at lab start" }}}
  2. In Cloud Shell, set the cluster name environment variable:

    export CLUSTER_NAME=gke
  3. Configure kubectl command line access by running:

    export GCLOUD_PROJECT=$(gcloud config get-value project)
    gcloud container clusters get-credentials $CLUSTER_NAME \
      --zone $CLUSTER_ZONE --project $GCLOUD_PROJECT

    If prompted, click the Authorize button.

Verify cluster and Cloud Service Mesh installation

  1. Check that your cluster is up and running:

    gcloud container clusters list

    Output:

    NAME: gke
    LOCATION: {{{ project_0.default_zone| "Zone" }}}
    MASTER_VERSION: 1.24.8-gke.2000
    MASTER_IP: 35.222.150.207
    MACHINE_TYPE: e2-standard-4
    NODE_VERSION: 1.24.8-gke.2000
    NUM_NODES: 2
    STATUS: RUNNING
  2. Ensure the Kubernetes pods for the mesh ingress gateway are deployed:

    kubectl get pods -n asm-ingress

Output:

NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-69fc5475fd-4wglw   1/1     Running   0          22m
istio-ingressgateway-69fc5475fd-stb7x   1/1     Running   0          22m
istio-ingressgateway-69fc5475fd-vkxp4   1/1     Running   0          22m

Pod status should be Running or Completed.

  3. Ensure the corresponding Kubernetes services for the mesh ingress gateway are deployed:

    kubectl get service -n asm-ingress

Output:

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   34.118.232.124   34.75.207.190   15021:32645/TCP,80:31091/TCP,443:32092/TCP   30m

Verify the Bookinfo deployment

  1. Confirm that the application has been deployed correctly:

    kubectl get pods

    Output:

    NAME                             READY   STATUS
    details-v1-1520924117-48z17      2/2     Running
    productpage-v1-560495357-jk1lz   2/2     Running
    ratings-v1-734492171-rnr5l       2/2     Running
    reviews-v1-874083890-f0qf0       2/2     Running
    reviews-v2-1343845940-b34q5      2/2     Running
    reviews-v3-1813607990-8ch52      2/2     Running

    Note: See how each pod has two containers? That's the application container and the Istio sidecar proxy.
  2. Review running application services:

    kubectl get services

    Output:

    NAME          TYPE        CLUSTER-IP    EXTERNAL-IP ...
    details       ClusterIP   10.7.248.49
    kubernetes    ClusterIP   10.7.240.1
    productpage   ClusterIP   10.7.248.22
    ratings       ClusterIP   10.7.247.26
    reviews       ClusterIP   10.7.246.22

Task 3. Install Gateways to enable ingress

In a Kubernetes environment, the Kubernetes Ingress resource is used to specify services that should be exposed outside the cluster. In Cloud Service Mesh, a better approach, which also works in Kubernetes and other environments, is to use a Gateway resource. A Gateway allows mesh features such as monitoring, mTLS, and advanced routing capabilities to be applied to traffic entering the cluster.

Gateways overcome Kubernetes Ingress shortcomings by separating the L4-L6 spec from L7. The Gateway configures the L4-L6 functions, such as the ports to expose or the protocol to use. Service owners then bind a VirtualService to the Gateway to configure L7 traffic routing options, such as routing based on paths, headers, or weights.

There are two options for deploying gateways: shared or dedicated. With a shared gateway, a single centralized gateway is used by many applications, possibly across many namespaces. In the example below, the Gateway in the ingress namespace delegates ownership of routes to application namespaces, but retains control over TLS configuration. This works well when using shared TLS certificates or shared infrastructure. This lab uses this option.

Dedicated gateways give full control and ownership to a single namespace, since an application namespace has its own dedicated gateway. This works well for applications that require isolation for security or performance.

Install an ingress gateway in your cluster

  1. Create a namespace for the gateway:

    kubectl create namespace ingress
  2. Label the gateway namespace with a revision label for auto-injection:

    kubectl label namespace ingress istio.io/rev=asm-managed --overwrite

    The revision label is used by the sidecar injector webhook to associate injected proxies with a particular control plane revision.

    You can ignore the message "istio-injection not found" in the output. That means that the namespace didn't previously have the istio-injection label, which you should expect in new installations of Service Mesh or new deployments.

    Because auto-injection fails if a namespace has both the istio-injection and the revision label, all kubectl label commands in the Service Mesh documentation include removing the istio-injection label.

  3. Download and apply the gateway configuration files. These include the pods and services that will first receive the incoming requests from outside the cluster:

cat <<'EOF' > ingress.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: istio-ingressgateway
  namespace: ingress
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: ingress
  labels:
    app: istio-ingressgateway
    istio: ingressgateway
spec:
  ports:
  # status-port exposes a /healthz/ready endpoint that can be used with GKE Ingress health checks
  - name: status-port
    port: 15021
    protocol: TCP
    targetPort: 15021
  # Any ports exposed in Gateway resources should be exposed here.
  - name: http2
    port: 80
  - name: https
    port: 443
  selector:
    istio: ingressgateway
    app: istio-ingressgateway
  type: LoadBalancer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: istio-ingressgateway
  namespace: ingress
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: istio-ingressgateway
  namespace: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: istio-ingressgateway
subjects:
- kind: ServiceAccount
  name: istio-ingressgateway
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: istio-ingressgateway
  namespace: ingress
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      istio: ingressgateway
      app: istio-ingressgateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: istio-ingressgateway
      istio: ingressgateway
  template:
    metadata:
      annotations:
        # This is required to inject the gateway with the
        # required configuration.
        inject.istio.io/templates: gateway
      labels:
        app: istio-ingressgateway
        istio: ingressgateway
    spec:
      containers:
      - name: istio-proxy
        image: auto # The image will automatically update each time the pod starts.
      serviceAccountName: istio-ingressgateway
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: istio-ingressgateway
  namespace: ingress
spec:
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-ingressgateway
EOF
kubectl apply -n ingress -f ingress.yaml
  4. After you create the deployment, verify that the new services are working:

    kubectl get pod,service -n ingress

    Notice that the service is of type LoadBalancer. This ingress gateway uses an external TCP load balancer in GCP.

  5. Deploy the Gateway to specify the port and protocol to be used. In this case, the gateway enables HTTP traffic over port 80:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
  namespace: ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
EOF

    The Gateway resource must be located in the same namespace as the gateway deployment.

  6. Deploy the VirtualService to route traffic from the gateway pods and service that you just created into the Bookinfo application:

cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - ingress/bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
EOF

    The VirtualService resource must be located in the same namespace as the application. Notice that it establishes the productpage service as the default destination.

  7. Verify that the Gateway and VirtualService have been created, and notice that the VirtualService points to the Gateway:

    kubectl get gateway,virtualservice
  8. Save the gateway's external IP in your Cloud Shell environment:

    export GATEWAY_URL=$(kubectl get svc -n ingress istio-ingressgateway \
      -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo The gateway address is $GATEWAY_URL
Note: If the gateway address is empty, wait 1-2 minutes and try the last command again. Do this until you have an address in your $GATEWAY_URL variable.

Generate some background traffic

Generate some background traffic against the application so that when you explore the Service Mesh dashboard, there's some interesting data to view.

  1. In Cloud Shell, install siege, a load generation tool:

    sudo apt install siege
  2. Use siege to create traffic against your services:

    siege http://${GATEWAY_URL}/productpage

Access the BookInfo application

  1. In Cloud Shell, open another tab by clicking on the + icon in the Cloud Shell menu bar.

  2. Set the Zone environment variable:

    CLUSTER_ZONE={{{ project_0.default_zone| "Zone added at lab start" }}}
  3. Initialize the new Cloud Shell tab:

    export CLUSTER_NAME=gke
    export GCLOUD_PROJECT=$(gcloud config get-value project)
    gcloud container clusters get-credentials $CLUSTER_NAME \
      --zone $CLUSTER_ZONE --project $GCLOUD_PROJECT
    export GATEWAY_URL=$(kubectl get svc istio-ingressgateway \
      -o=jsonpath='{.status.loadBalancer.ingress[0].ip}' -n ingress)
  4. Confirm that the Bookinfo application responds by sending it a curl request from a pod within the cluster, for example from the ratings pod:

    kubectl exec -it \
      $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') \
      -c ratings -- curl productpage:9080/productpage \
      | grep -o "<title>.*</title>"

    Output:

    <title>Simple Bookstore App</title>
  5. Check that the Bookinfo app responds to a curl request sent to it from outside the cluster, using the external IP saved earlier:

    curl -I http://${GATEWAY_URL}/productpage

    Output:

    HTTP/1.1 200 OK
    content-type: text/html; charset=utf-8
    content-length: 5293
    server: istio-envoy
    date: Wed, 01 Feb 2023 13:28:58 GMT
    x-envoy-upstream-service-time: 27
  6. Open the Bookinfo application in your browser. Run this command in the Cloud Shell to get the full URL:

    echo http://${GATEWAY_URL}/productpage

Congratulations! You exposed an HTTP endpoint for the Bookinfo productpage service to external traffic. The Gateway configuration resources allow external traffic to enter the service mesh and make the traffic management and policy features available for edge services.

Click Check my progress to verify the objective. Install Gateways to enable ingress.

Task 4. Use the Service Mesh dashboard to view routing to multiple versions

There are a couple of items to note when it comes to viewing data in the Service Mesh dashboard.

The first is that, for most pages, it takes 1-2 minutes for the data to be available for display. That means that if you look at a page, it might not have the data you expect for 1-2 minutes. If you don't see the data you want, wait for a minute or so and then refresh the page.

The Topology page also has a big initial delay before data is shown. It can take up to 5+ minutes for the initial set of data to be available. If you see a message that there is no data, wait a bit and then refresh the page and return to the Topology view.

In the previous paragraphs, you were instructed both to wait and to refresh the page. Not only is the data a bit delayed in arriving, but many pages won't show the available data without a page refresh. So if you expect data to be available and you don't see it, refresh the page in your browser.

View routing information in the Table View

  1. From the Navigation menu, select Kubernetes Engine > Features > Service Mesh.

Note: If the Topology view is not displayed, refresh the browser window.
  2. Click on the productpage service, then select Connected Services on the left.

  3. Select the Outbound tab and note the two services called by the productpage pods.

  4. Click on the reviews service.

  5. Note the service statistics, then select the Infrastructure link on the left-hand menu.

You can see that there are multiple pods, running different versions of the reviews logic, that receive traffic sent to the reviews service.

  6. Click on Traffic in the left-hand menu to see another view of traffic distribution.

    You can see that there is relatively even distribution of traffic across the three backend pods running the different versions of the application logic.

View routing information in the Topology View

  1. Click on the Service Mesh logo in the upper left corner to return to the main dashboard page.
Note: If you see an error message indicating that there is no data available to graph, or you see a chart that doesn't have all the traffic you expect, wait 1-2 minutes and try again.
  2. Rearrange the mesh graph so that you can easily view:

    • The productpage service going to the productpage deployment
    • The productpage deployment going to the reviews service
    • The reviews service going to the three versions of reviews

  3. Click on the reviews service node and see the relative qps for each backend version.

Task 5. Apply default destination rules for all available versions

In this task, you define all the available versions, called subsets, in destination rules.
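As a preview of what step 1 below points to, the reviews rule in that file defines one named subset per deployment version (a sketch based on the upstream Bookinfo sample; the full file also defines rules for productpage, ratings, and details):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  # Each subset selects pods by their version label.
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3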

  1. Review the configuration found on GitHub. This configuration defines 4 DestinationRule resources, 1 for each service.

  2. Apply the configuration with the following command in Cloud Shell:

    wget https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/networking/destination-rule-all.yaml
    sed -i 's#istio\.io/v1#istio\.io/v1alpha3#g' destination-rule-all.yaml
    kubectl apply -f destination-rule-all.yaml

    Output:

    destinationrule.networking.istio.io/productpage created
    destinationrule.networking.istio.io/reviews created
    destinationrule.networking.istio.io/ratings created
    destinationrule.networking.istio.io/details created
  3. Check that 4 DestinationRule resources were defined.

    kubectl get destinationrules

    Output:

    NAME          HOST          AGE
    details       details       1m
    productpage   productpage   1m
    ratings       ratings       1m
    reviews       reviews       1m
  4. Review the details of the destination rules:

    kubectl get destinationrules -o yaml

    Notice that subsets are defined within the spec of a DestinationRule.

  5. Wait for 1-2 minutes, then return to the Service Mesh dashboard.

  6. Look in both the table and topology views and confirm that the traffic continues to be evenly distributed across the three backend versions. You can click SHOW TIMELINE to adjust the period of time that is being charted, making it easier to zero in on the data you are interested in.

Click Check my progress to verify the objective. Apply destination rules.

Task 6. Apply virtual services to route by default to only one version

In this task, you apply virtual services for each service that routes all traffic to v1 of the service workload.
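For reference, the reviews entry in the file you are about to apply looks like this (a sketch based on the upstream Bookinfo sample; the other three services follow the same pattern):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    # All traffic goes to subset v1, regardless of caller.
    - destination:
        host: reviews
        subset: v1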

  1. Review the configuration found on GitHub. This configuration defines 4 VirtualService resources, 1 for each service.

  2. Apply the configuration with the following command in Cloud Shell:

    wget https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/networking/virtual-service-all-v1.yaml
    sed -i 's#istio\.io/v1#istio\.io/v1alpha3#g' virtual-service-all-v1.yaml
    kubectl apply -f virtual-service-all-v1.yaml

    Output:

    virtualservice.networking.istio.io/productpage created
    virtualservice.networking.istio.io/reviews created
    virtualservice.networking.istio.io/ratings created
    virtualservice.networking.istio.io/details created

    Because configuration propagation is eventually consistent, wait a few seconds for the virtual services to take effect.

  3. Check that 4 routes, VirtualService resources, were defined:

    kubectl get virtualservices

    Output:

    NAME          GATEWAYS                       HOSTS             AGE
    bookinfo      ["ingress/bookinfo-gateway"]   ["*"]             19m
    details                                      ["details"]       6s
    productpage                                  ["productpage"]   7s
    ratings                                      ["ratings"]       6s
    reviews                                      ["reviews"]       7s
  4. In Cloud Shell, get the external IP address of the ingress gateway:

    echo $GATEWAY_URL
  5. Test the new routing configuration using the Bookinfo UI.

    • Open the Bookinfo site in your browser. The URL is http://[GATEWAY_URL]/productpage, where GATEWAY_URL is the External IP address of the ingress.

    • Refresh the page a few times to issue multiple requests.

    Notice that the Book Reviews part of the page displays with no rating stars, no matter how many times you refresh. This is because you configured the mesh to route all traffic for the reviews service to the version reviews:v1 and this version of the service does not access the star ratings service.

  6. Wait for 1-2 minutes, then return to the Service Mesh dashboard by navigating to Navigation menu > Kubernetes Engine > Service Mesh > reviews > Infrastructure.

  7. Select SHOW TIMELINE and focus the chart on the last 5 minutes of traffic. You should see that the traffic goes from being evenly distributed to being routed to the version 1 workload 100% of the time.

    You can also see the new traffic distribution by looking at the Traffic tab or the topology view - though these both take a couple extra minutes before the data is shown.

Click Check my progress to verify the objective. Apply virtual services.

Task 7. Route to a specific version of a service based on user identity

In this task, you change the route configuration so that all traffic from a specific user is routed to a specific service version. In this case, all traffic from user jason will be routed to the service reviews:v2, the version that includes the star ratings feature.

Note: Istio does not have any special, built-in understanding of user identity. This example is enabled by the fact that the productpage service adds a custom end-user header to all outbound HTTP requests to the reviews service.
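For reference, the rule you are about to apply matches on that header and falls back to v1 for everyone else (a sketch based on the upstream Bookinfo sample):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    # Requests carrying end-user: jason go to v2...
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  # ...all other requests go to v1.
  - route:
    - destination:
        host: reviews
        subset: v1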
  1. Review the configuration found on GitHub. This configuration defines 1 VirtualService resource.

  2. Apply the configuration with the following command in Cloud Shell:

    wget https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
    sed -i 's#istio\.io/v1#istio\.io/v1alpha3#g' virtual-service-reviews-test-v2.yaml
    kubectl apply -f virtual-service-reviews-test-v2.yaml

    Output:

    virtualservice.networking.istio.io/reviews configured
  3. Confirm the rule is created:

    kubectl get virtualservice reviews

    Output:

    NAME      GATEWAYS   HOSTS         AGE
    reviews              ["reviews"]   35m
  4. Test the new routing configuration using the Bookinfo UI:

    • Browse again to /productpage of the Bookinfo application.

    • This time, click Sign in, and use User Name of jason with any password.

    • Notice the UI shows stars from the rating service.

    You can sign out, and try signing in as other users. You will no longer see stars with reviews.

To better visualize the effect of the new traffic routing, you can create a new background load of authenticated requests to the service.

  1. Start a new siege session, generating only 20% of the traffic of the first, but with all requests being authenticated as jason:

    curl -c cookies.txt -F "username=jason" -L -X POST http://$GATEWAY_URL/login
    cookie_info=$(grep -Eo "session.*" ./cookies.txt)
    cookie_name=$(echo $cookie_info | cut -d' ' -f1)
    cookie_value=$(echo $cookie_info | cut -d' ' -f2)
    siege -c 5 http://$GATEWAY_URL/productpage \
      --header "Cookie: $cookie_name=$cookie_value"
  2. Wait 1-2 minutes, refresh the page showing the Infrastructure telemetry, and adjust the timeline to show the current time. In the Service Mesh dashboard, you should see that roughly 85% of requests over the last few minutes went to version 1, because they are unauthenticated, and about 15% went to v2, because they are made as jason.

  3. In Cloud Shell, cancel the siege session by typing Ctrl+c.

  4. Clean up from this task by removing the application virtual services:

    kubectl delete -f virtual-service-all-v1.yaml

    Output:

    virtualservice.networking.istio.io "productpage" deleted
    virtualservice.networking.istio.io "reviews" deleted
    virtualservice.networking.istio.io "ratings" deleted
    virtualservice.networking.istio.io "details" deleted
  5. Wait 1-2 minutes, refresh the Service Mesh dashboard, adjust the timeline to show the current time, and confirm that traffic is once again evenly balanced across versions.

Click Check my progress to verify the objective. User-Specific Routing Configuration.

Task 8. Shift traffic gradually from one version of a microservice to another

In this task, you gradually migrate traffic from one version of a microservice to another. For example, you might use this approach to migrate traffic from an older version to a new version.

You will send 50% of traffic to reviews:v1 and 50% to reviews:v3. Then, you will complete the migration by sending 100% of traffic to reviews:v3.

In Service Mesh, you accomplish this goal by configuring a sequence of rules that route a percentage of traffic to one service or another.
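For reference, the 50/50 rule you apply in step 4 below uses weighted destinations (a sketch based on the upstream Bookinfo sample):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    # Half the traffic to v1, half to v3.
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50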

  1. In Cloud Shell, route all traffic to the v1 version of each service:

    kubectl apply -f virtual-service-all-v1.yaml

    Output:

    virtualservice.networking.istio.io/productpage created
    virtualservice.networking.istio.io/reviews created
    virtualservice.networking.istio.io/ratings created
    virtualservice.networking.istio.io/details created
  2. Browse again to /productpage of the Bookinfo application and confirm that you do not see stars with reviews. All traffic is being routed to the v1 backend.

  3. Wait 1 minute, then refresh the Service Mesh dashboard, adjust the timeline to show the current time, and confirm that all traffic has been routed to the v1 backend.

  4. Transfer 50% of the traffic from reviews:v1 to reviews:v3.

    wget https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
    sed -i 's#istio\.io/v1#istio\.io/v1alpha3#g' virtual-service-reviews-50-v3.yaml
    kubectl apply -f virtual-service-reviews-50-v3.yaml

    Output:

    virtualservice.networking.istio.io/reviews configured
  5. Browse again to /productpage of the Bookinfo application.

  6. Refresh your view to issue multiple requests.

    Notice a roughly even distribution of reviews with no stars (from v1) and reviews with red stars (from v3, which calls the ratings service).

  7. Wait 1 minute, then refresh the page, adjust the timeline to show the current time, and confirm in the Service Mesh dashboard that traffic to the reviews service is split 50/50 between v1 and v3.

  8. Transfer the remaining 50% of traffic to reviews:v3. Assuming you decide that the reviews:v3 service is stable, route 100% of the traffic to reviews:v3 by applying this virtual service:

    wget https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/networking/virtual-service-reviews-v3.yaml
    sed -i 's#istio\.io/v1#istio\.io/v1alpha3#g' virtual-service-reviews-v3.yaml
    kubectl apply -f virtual-service-reviews-v3.yaml

    Output:

    virtualservice.networking.istio.io/reviews configured
  9. Test the new routing configuration using the Bookinfo UI.

  10. Browse again to /productpage of the Bookinfo application.

  11. Refresh the /productpage; you will always see book reviews with red star ratings for each review.

  12. Wait 1 minute, refresh the page, then confirm in the Service Mesh dashboard that all traffic to the reviews service is sent to v3.

  13. Clean up from this exercise by removing the application virtual services:

    kubectl delete -f virtual-service-all-v1.yaml

    Output:

    virtualservice.networking.istio.io "productpage" deleted
    virtualservice.networking.istio.io "reviews" deleted
    virtualservice.networking.istio.io "ratings" deleted
    virtualservice.networking.istio.io "details" deleted

In this task, you migrated traffic from an old version to a new version of the reviews service using the Service Mesh weighted routing feature. This is very different from doing version migration using the deployment features of container orchestration platforms, which use instance scaling to manage the traffic.

Click Check my progress to verify the objective. Migrate traffic from v1 to v3.

Task 9. Add timeouts to avoid waiting indefinitely for service replies

A timeout for HTTP requests can be specified using the timeout field of the route rule. By default, the request timeout is disabled, but in this task you override the reviews service timeout to half a second. To see its effect, however, you also introduce an artificial 2 second delay in calls to the ratings service. You start by introducing the delay.

  1. In Cloud Shell, route all traffic to the v1 version of each service:

    kubectl apply -f virtual-service-all-v1.yaml

    Output:

    virtualservice.networking.istio.io/productpage created
    virtualservice.networking.istio.io/reviews created
    virtualservice.networking.istio.io/ratings created
    virtualservice.networking.istio.io/details created
  2. Route requests to v2 of the reviews service, i.e., a version that calls the ratings service:

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
EOF
  3. Add a 2 second delay to calls to the ratings service:

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
EOF
  4. Open the Bookinfo URL http://$GATEWAY_URL/productpage in your browser. You should see the Bookinfo application working normally (with ratings stars displayed), but there is a 2 second delay whenever you refresh the page. (If the Bookinfo application does not work normally, change the delay to 1 second and try again.)

  5. Navigate to reviews > Metrics to see that the latency is spiking to 2 seconds. (If you changed the delay to 1 second, the latency should spike to 1 second.)

  6. Now add a half second request timeout for calls to the reviews service:

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s
EOF
  7. Refresh the Bookinfo web page.

You should now see that it returns in about 1 second, instead of 2, and the reviews are unavailable.

Note: The reason the response takes 1 second, even though the timeout is configured at half a second, is that there is a hard-coded retry in the productpage service, so it calls the timing-out reviews service twice before returning. If you want to change the retry setting, configure the VirtualService by executing the command shown below.

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - productpage
  http:
  - route:
    - destination:
        host: productpage
        subset: v1
    retries:
      attempts: 1
      perTryTimeout: 2s
EOF
  8. Clean up from this exercise by removing the application virtual services:

    kubectl delete -f virtual-service-all-v1.yaml

    Output:

    virtualservice.networking.istio.io "productpage" deleted
    virtualservice.networking.istio.io "reviews" deleted
    virtualservice.networking.istio.io "ratings" deleted
    virtualservice.networking.istio.io "details" deleted

In this task, you used Istio to set the request timeout for calls to the reviews microservice to half a second. By default the request timeout is disabled. Since the reviews service subsequently calls the ratings service when handling requests, you used Istio to inject a 2 second delay in calls to ratings, causing the reviews service to take longer than half a second to complete, so you could see the timeout in action.

You observed that instead of displaying reviews, the Bookinfo product page (which calls the reviews service to populate the page) displayed the message: "Sorry, product reviews are currently unavailable for this book". This was the result of it receiving the timeout error from the reviews service.

Click Check my progress to verify the objective. Add timeouts for rating service.

Task 10. Add circuit breakers to enhance your microservices' resiliency

This task shows you how to configure circuit breaking for connections, requests, and outlier detection.

Circuit breaking is an important pattern for creating resilient microservice applications. Circuit breaking allows you to write applications that limit the impact of failures, latency spikes, and other undesirable effects of network peculiarities.

In this task, you will configure circuit breaking rules and then test the configuration by intentionally “tripping” the circuit breaker.

  1. In Cloud Shell, route all traffic to the v1 version of each service:

    kubectl apply -f virtual-service-all-v1.yaml

    Output:

    virtualservice.networking.istio.io/productpage created
    virtualservice.networking.istio.io/reviews created
    virtualservice.networking.istio.io/ratings created
    virtualservice.networking.istio.io/details created
  2. Create a destination rule to apply circuit breaking settings when calling the productpage service:

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  subsets:
  - name: v1
    labels:
      version: v1
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
EOF
  3. In Cloud Shell, go to the first tab and press Ctrl+C to stop the siege.

  4. Create a client to send traffic to the productpage service.

The client is a simple load-testing client called fortio. Fortio lets you control the number of connections, concurrency, and delays for outgoing HTTP calls. You will use this client to “trip” the circuit breaker policies you set in the DestinationRule:

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.9/samples/httpbin/sample-client/fortio-deploy.yaml
  5. Log in to the client pod and use the fortio tool to call the productpage service. Pass in curl to indicate that you just want to make one call:

    export FORTIO_POD=$(kubectl get pods -lapp=fortio -o 'jsonpath={.items[0].metadata.name}')
    kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio curl -quiet http://${GATEWAY_URL}/productpage
  6. Call the service with two concurrent connections (-c 2) and send 20 requests (-n 20):

    kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 -loglevel Warning http://${GATEWAY_URL}/productpage

    Notice that almost all requests made it through! That's interesting, because the rule sets maxConnections: 1 and http1MaxPendingRequests: 1, which indicate that if you exceed more than one connection and one request concurrently, you should see some failures as the istio-proxy opens the circuit for further requests and connections.

    However, we see that the istio-proxy does allow for some leeway:

    Code 200 : 17 (85.0 %)
    Code 503 : 3 (15.0 %)
  7. Bring the number of concurrent connections up to 3:

    kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 3 -qps 0 -n 30 -loglevel Warning http://${GATEWAY_URL}/productpage

    Now you start to see the expected circuit breaking behavior. Only 36.7% of the requests succeeded and the rest were trapped by circuit breaking:

    Code 200 : 11 (36.7 %)
    Code 503 : 19 (63.3 %)

Click Check my progress to verify the objective. Add circuit breakers.

Review

In this lab, you learned about many different ways to manage and route traffic for different purposes. You also experimented with adjusting and viewing traffic shifting for yourself, including some layer 7 (application layer) routing that looks at request headers.

End your lab

When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
