
Preparation
- The lab creates a Google Cloud project and resources for you to use for a fixed period of time.
- The lab is timed and cannot be paused. If you end the lab partway through, you must start over.
- In the top-left corner of the screen, click Start Lab to begin.
- Install Gateways to enable ingress (20 points)
- Apply destination rules (20 points)
- Apply virtual services (20 points)
- User-specific routing configuration (10 points)
- Migrate traffic from v1 to v3 (10 points)
- Add timeouts for rating service (10 points)
- Add circuit breakers (10 points)
A service mesh is an architecture that enables managed, observable, and secure communication among your services, making it easier for you to create robust enterprise applications made up of many microservices on your chosen infrastructure. It manages the common requirements of running a service, such as monitoring, networking, and security, with consistent, powerful tools, making it easier for service developers and operators to focus on creating and managing great applications for their users.
Cloud Service Mesh’s traffic management model relies on the following two components:
These components enable mesh traffic management features including:
In this lab, you learn how to perform the following tasks:
In this task, you use Qwiklabs and perform initialization steps for your lab.
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Sign in to Qwiklabs using an incognito window.
Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
There is no pause feature. You can restart if needed, but you have to start at the beginning.
When ready, click Start lab.
Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.
Click Open Google Console.
Click Use another account and copy/paste credentials for this lab into the prompts.
If you use other credentials, you'll receive errors or incur charges.
Accept the terms and skip the recovery resource page.
After you complete the initial sign-in steps, the project dashboard appears.
Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5 GB home directory and runs on Google Cloud.
Google Cloud Shell provides command-line access to your Google Cloud resources.
In Cloud console, on the top right toolbar, click the Open Cloud Shell button.
Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
Different traffic management capabilities are enabled by using different configuration options.
Route traffic to multiple versions of a service.
Set a timeout: the amount of time Istio waits for a response to a request. By default, the request timeout for HTTP requests is disabled, but it can be overridden per route.
A retry is an attempt to complete an operation multiple times if it fails. Adjust the maximum number of retry attempts, or the number of attempts possible within the default or overridden timeout period.
Fault injection is a testing method that introduces errors into a system to ensure that it can withstand and recover from error conditions.
For example, a fault-injection rule can introduce a 5 second delay in 10% of the requests to the "v1" version of the ratings microservice. A rule can instead abort requests, for example returning an HTTP 400 error code for 10% of the requests to ratings "v1".
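Rules like these are expressed as fault fields on a VirtualService route. The following sketch uses standard Istio fault-injection syntax; the resource and subset names are assumptions, not the lab's actual file:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:            # inject a 5 second delay into 10% of requests
        percentage:
          value: 10
        fixedDelay: 5s
      abort:            # return HTTP 400 for 10% of requests
        percentage:
          value: 10
        httpStatus: 400
    route:
    - destination:
        host: ratings
        subset: v1
```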
A rule can indicate that it only applies to calls from workloads (pods) implementing version v2 of the reviews service. A rule can also match on request content, for example applying only to incoming requests that include a custom "end-user" header containing the string "atharvak".
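Match conditions like these go in the match section of a VirtualService route. A sketch using standard Istio match syntax (hosts and subsets assumed for illustration):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - sourceLabels:     # only calls from reviews v2 workloads
        app: reviews
        version: v2
      headers:
        end-user:       # only requests carrying this header value
          exact: atharvak
    route:
    - destination:
        host: ratings
        subset: v1
```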
This lab environment has already been partially configured.
A GKE cluster named gke was created.
In Cloud Shell, set the Zone and cluster name environment variables:
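The exact values come from your lab's connection details; the following is a sketch with placeholder values:

```shell
# Placeholder values; substitute the zone shown in your lab's details.
export CLUSTER_ZONE="us-central1-b"
export CLUSTER_NAME="gke"
echo "Cluster: $CLUSTER_NAME, Zone: $CLUSTER_ZONE"
```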
Configure kubectl command-line access by running:
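This is typically done with the standard gcloud credentials command, assuming the environment variables above are set:

```shell
# Fetch cluster credentials so kubectl can reach the cluster
gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_ZONE
```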
If prompted, click the Authorize button.
Check that your cluster is up and running:
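One way to check, assuming kubectl access is already configured:

```shell
# Nodes should report a STATUS of Ready
kubectl get nodes
```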
Output:
Ensure the Kubernetes pods for the Cloud Service Mesh control plane are deployed:
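Assuming the control plane runs in the istio-system namespace (the in-cluster default for Cloud Service Mesh), a sketch:

```shell
kubectl get pods -n istio-system
```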
Output:
Pod status should be Running or Completed.
Ensure corresponding Kubernetes services for the Cloud Service Mesh control plane are deployed:
Output:
Confirm that the application has been deployed correctly:
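Assuming Bookinfo was deployed to the default namespace, you can list its pods; each should show 2/2 containers ready (the application container plus the injected sidecar proxy):

```shell
kubectl get pods
```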
Output:
Review running application services:
Output:
In a Kubernetes environment, the Kubernetes Ingress Resource is used to specify services that should be exposed outside the cluster. In Cloud Service Mesh, a better approach, which also works in Kubernetes and other environments, is to use a Gateway resource. A Gateway allows mesh features such as monitoring, mTLS, and advanced routing rules to be applied to traffic entering the cluster.
Gateways overcome Kubernetes Ingress shortcomings by separating the L4-L6 spec from L7. The Gateway configures the L4-L6 functions, such as the ports to expose, or the protocol to use. Then service owners bind VirtualService to configure L7 traffic routing options, such as routing based on paths, headers, weights, etc.
There are two options for deploying gateways: shared or dedicated. Shared gateways use a single centralized gateway that is used by many applications, possibly across many namespaces. In the example below, the Gateway in the ingress namespace delegates ownership of routes to application namespaces, but retains control over TLS configuration. This works well when using shared TLS certificates or shared infrastructure. In this lab you will use this option.
Dedicated gateways give full control and ownership to a single namespace, since an application namespace has its own dedicated gateway. This works well for applications that require isolation for security or performance.
Create a namespace for the gateway:
Label the gateway namespace with a revision label for auto-injection:
The revision label is used by the sidecar injector webhook to associate injected proxies with a particular control plane revision.
You can ignore the message "istio-injection not found" in the output. That means that the namespace didn't previously have the istio-injection label, which you should expect in new installations of Service Mesh or new deployments.
Because auto-injection fails if a namespace has both the istio-injection and the revision label, all kubectl label commands in the Service Mesh documentation include removing the istio-injection label.
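Following the standard Service Mesh pattern, these two steps might look like the sketch below; the namespace name and revision value are placeholders for the ones your lab provides:

```shell
# Placeholders: replace "gateway" and "asm-managed" with the namespace
# and control-plane revision used in your lab.
kubectl create namespace gateway
kubectl label namespace gateway istio.io/rev=asm-managed istio-injection- --overwrite
```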
Download and apply the gateway configuration files. These include the pods and services that will first receive the incoming requests from outside the cluster:
After you create the deployment, verify that the new services are working:
Notice the resource is a LoadBalancer. This ingress gateway uses an external TCP load balancer in GCP.
Deploy the Gateway to specify the port and protocol to be used. In this case, the gateway enables HTTP traffic over port 80:
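A Gateway of this shape, modeled on the standard Bookinfo sample (the resource name and selector labels are assumptions):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # matches the labels on the gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```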
The Gateway resource must be located in the same namespace as the gateway deployment.
Deploy the VirtualService to route traffic from the gateway pods and service that you just created into the BookInfo application:
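The routing resource likely resembles the standard Bookinfo sample, sketched below; the gateway name and port are assumptions:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    route:
    - destination:
        host: productpage   # default destination for matched traffic
        port:
          number: 9080
```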
The VirtualService resource must be located in the same namespace as the application. Notice that it establishes the productpage service as the default destination.
Verify that the Gateway and VirtualService have been created and notice that the VirtualService is pointing to the Gateway:
Save this external IP in your Cloud Shell environment:
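A common way to capture the IP, assuming the ingress gateway service is named istio-ingressgateway in your current namespace context:

```shell
# Store the external IP of the ingress gateway for later use
export GATEWAY_URL=$(kubectl get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $GATEWAY_URL
```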
Generate some background traffic against the application so that when you explore the Service Mesh dashboard, there's some interesting data to view.
In Cloud Shell, install siege, a load generation tool:
Use siege to create traffic against your services:
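A sketch of both steps; siege is available from the Debian package repository in Cloud Shell, and GATEWAY_URL is assumed to hold the external IP saved earlier:

```shell
# Install siege, then generate continuous traffic against the app
sudo apt-get install -y siege
siege http://$GATEWAY_URL/productpage
```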
In Cloud Shell, open another tab by clicking on the + icon in the Cloud Shell menu bar.
Set the Zone environment variable:
Initialize the new Cloud Shell tab:
Confirm that the Bookinfo application responds by sending a curl request to it from some pod within the cluster, for example from ratings:
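A sketch of such a check, assuming Bookinfo's standard labels and port:

```shell
# Call productpage from inside the ratings pod and extract the page title
kubectl exec "$(kubectl get pod -l app=ratings \
  -o jsonpath='{.items[0].metadata.name}')" -c ratings -- \
  curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
```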
Output:
Check that the Bookinfo app responds to a curl request sent to it from outside the cluster, using the external IP saved earlier:
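For example, assuming GATEWAY_URL holds the external IP:

```shell
curl -s "http://${GATEWAY_URL}/productpage" | grep -o "<title>.*</title>"
```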
Output:
Open the Bookinfo application in your browser. Run this command in the Cloud Shell to get the full URL:
Congratulations! You exposed an HTTP endpoint for the Bookinfo productpage service to external traffic. The Gateway configuration resources allow external traffic to enter the service mesh and make the traffic management and policy features available for edge services.
Click Check my progress to verify the objective.
There are a couple of items to note when it comes to viewing data in the Service Mesh dashboard.
The first is that, for most pages, it takes 1-2 minutes for the data to be available for display. That means that if you look at a page, it might not have the data you expect for 1-2 minutes. If you don't see the data you want, wait for a minute or so and then refresh the page.
The Topology page also has a big initial delay before data is shown. It can take up to 5+ minutes for the initial set of data to be available. If you see a message that there is no data, wait a bit and then refresh the page and return to the Topology view.
In the previous paragraphs, you were instructed to wait AND to refresh the page. Not only is the data a bit delayed in arriving, but many pages won't show the available data without a page refresh. So if you expect data to be available and you don't see it, refresh the page in your browser.
From the Navigation menu, select Kubernetes Engine > Features > Service Mesh.
Click on the reviews service.
Note the service statistics, then select the Infrastructure link on the left-hand menu.
You can see that there are multiple pods, running different versions of the reviews logic, that receive traffic sent to the reviews service.
Click on Traffic in the left-hand menu to see another view of traffic distribution.
You can see that there is relatively even distribution of traffic across the three backend pods running the different versions of the application logic.
Rearrange the mesh graph so that you can easily view:
Click on the reviews service node and see relative qps for each backend version.
In this task, you define all the available versions, called subsets, in destination rules.
Review the configuration found in GitHub. This configuration defines 4 DestinationRule resources, 1 for each service.
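For the reviews service, the rule likely resembles the standard Bookinfo sample, sketched below with assumed subset labels:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:            # named subsets map to pod version labels
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```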
Apply the configuration with the following command in Cloud Shell:
Output:
Check that 4 DestinationRule resources were defined:
Output:
Review the details of the destination rules:
Notice that subsets are defined within the spec of a DestinationRule.
Wait for 1-2 minutes, then return to the Service Mesh dashboard.
Look in both the table and topology views and confirm that the traffic continues to be evenly distributed across the three backend versions. You can click SHOW TIMELINE to adjust the period of time that is being charted, making it easier to zero in on the data you are interested in.
Click Check my progress to verify the objective.
In this task, you apply virtual services for each service that routes all traffic to v1 of the service workload.
Review the configuration found in GitHub. This configuration defines 4 VirtualService resources, 1 for each service.
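Each of the four rules likely follows the same shape; a sketch for reviews, routing everything to the v1 subset (names assumed from the standard Bookinfo sample):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1    # all traffic pinned to version 1
```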
Apply the configuration with the following command in Cloud Shell:
Output:
Because configuration propagation is eventually consistent, wait a few seconds for the virtual services to take effect.
Check that 4 routes, VirtualService resources, were defined:
Output:
In Cloud Shell, get the external IP address of the ingress gateway:
Test the new routing configuration using the Bookinfo UI.
Open the Bookinfo site in your browser. The URL is http://[GATEWAY_URL]/productpage, where GATEWAY_URL is the External IP address of the ingress.
Refresh the page a few times to issue multiple requests.
Notice that the Book Reviews part of the page displays with no rating stars, no matter how many times you refresh. This is because you configured the mesh to route all traffic for the reviews service to the version reviews:v1 and this version of the service does not access the star ratings service.
Wait for 1-2 minutes, then return to the Service Mesh dashboard by navigating to Navigation menu > Kubernetes Engine > Service Mesh > reviews > Infrastructure.
Select SHOW TIMELINE and focus the chart on the last 5 minutes of traffic. You should see that the traffic goes from being evenly distributed to being routed to the version 1 workload 100% of the time.
You can also see the new traffic distribution by looking at the Traffic tab or the topology view - though these both take a couple extra minutes before the data is shown.
Click Check my progress to verify the objective.
In this task, you change the route configuration so that all traffic from a specific user is routed to a specific service version. In this case, all traffic from user jason will be routed to the service reviews:v2, the version that includes the star ratings feature.
Review the configuration found in GitHub. This configuration defines 1 VirtualService resource.
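The rule likely matches on the end-user header, as in the standard Bookinfo sample sketched here:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason   # authenticated user "jason" goes to v2
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:       # everyone else stays on v1
        host: reviews
        subset: v1
```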
Apply the configuration with the following command in Cloud Shell:
Output:
Confirm the rule is created:
Output:
Test the new routing configuration using the Bookinfo UI:
Browse again to /productpage of the Bookinfo application. This time, click Sign in, and use the User Name jason with any password.
Notice the UI shows stars from the rating service.
You can sign out, and try signing in as other users. You will no longer see stars with reviews.
To better visualize the effect of the new traffic routing, you can create a new background load of authenticated requests to the service.
Start a new siege session, generating only 20% of the traffic of the first, but with all requests authenticated as jason:
Wait for 1-2 minutes, refresh the page showing the Infrastructure telemetry, adjust the timeline to show the current time, and then check in the Service Mesh dashboard. You should see that roughly 85% of requests over the last few minutes have gone to version 1 because they are unauthenticated, and about 15% have gone to version 2 because they are made as jason.
In Cloud Shell, cancel the siege session by typing Ctrl+c.
Clean up from this task by removing the application virtual services:
Output:
You can wait for 1-2 minutes, refresh the Service Mesh dashboard, adjust the timeline to show the current time, and confirm that traffic is once again evenly balanced across versions.
Click Check my progress to verify the objective.
In this task, you gradually migrate traffic from one version of a microservice to another. For example, you might use this approach to migrate traffic from an older version to a new version.
You will send 50% of traffic to reviews:v1 and 50% to reviews:v3. Then, you will complete the migration by sending 100% of traffic to reviews:v3.
In Service Mesh, you accomplish this goal by configuring a sequence of rules that route a percentage of traffic to one service or another.
In Cloud Shell, route all traffic to the v1 version of each service:
Output:
Browse again to /productpage of the Bookinfo application and confirm that you do not see stars with reviews. All traffic is being routed to the v1 backend.
Wait 1 minute, then refresh the Service Mesh dashboard, adjust the timeline to show the current time, and confirm that all traffic has been routed to the v1 backend.
Transfer 50% of the traffic from reviews:v1 to reviews:v3.
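A weighted route for this split, following the standard Bookinfo sample (names assumed):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50      # half the traffic stays on v1
    - destination:
        host: reviews
        subset: v3
      weight: 50      # half shifts to v3
```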
Output:
Browse again to /productpage of the Bookinfo application.
Refresh your view to issue multiple requests.
Notice a roughly even distribution of reviews with no stars, from v1, and reviews with red stars, from v3, that accesses the ratings service.
Wait 1 minute, then refresh the page, adjust the timeline to show the current time, and confirm in the Service Mesh dashboard that traffic to the reviews service is split 50/50 between v1 and v3.
Transfer the remaining 50% of traffic to reviews:v3.
Assuming you decide that the reviews:v3 service is stable, route 100% of the traffic to reviews:v3 by applying this virtual service:
Output:
Test the new routing configuration using the Bookinfo UI.
Browse again to /productpage of the Bookinfo application. Refresh the /productpage; you will always see book reviews with red colored star ratings for each review.
Wait 1 minute, refresh the page, then confirm in the Service Mesh dashboard that all traffic to the reviews service is sent to v3.
Clean up from this exercise by removing the application virtual services.
Output:
In this task, you migrated traffic from an old version to a new version of the reviews service using the Service Mesh weighted routing feature. This is very different from doing version migration using the deployment features of container orchestration platforms, which use instance scaling to manage the traffic.
Click Check my progress to verify the objective.
A timeout for HTTP requests can be specified using the timeout field of the route rule. By default, the request timeout is disabled, but in this task you override the reviews service timeout to half a second. To see its effect, however, you also introduce an artificial 2 second delay in calls to the ratings service. You start by introducing the delay.
In Cloud Shell, route all traffic to the v1 version of each service:
Output:
Route requests to v2 of the reviews service, i.e., a version that calls the ratings service:
Add a 2 second delay to calls to the ratings service:
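A delay of this kind is usually injected for 100% of requests so the effect is visible on every page load; a sketch using standard Istio fault-injection syntax (names assumed):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percentage:
          value: 100   # delay every request
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
```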
Open the Bookinfo URL http://$GATEWAY_URL/productpage in your browser. You should see the Bookinfo application working normally (with ratings stars displayed), but there is a 2 second delay whenever you refresh the page. (If the Bookinfo application does not work normally, change the delay to 1 second and try again.)
Navigate to reviews / metrics to see that the latency is spiking to 2 seconds. (If you changed the delay to 1 second, the latency should spike to 1 second.)
Now add a half second request timeout for calls to the reviews service:
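The timeout goes on the reviews route; a sketch modeled on the standard Istio request-timeouts task:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s    # fail calls that take longer than half a second
```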
Refresh the Bookinfo web page.
You should now see that it returns in about 1 second, instead of 2, and the reviews are unavailable.
Clean up from this exercise by removing the application virtual services.
Output:
In this task, you used Istio to set the request timeout for calls to the reviews microservice to half a second. By default the request timeout is disabled. Since the reviews service subsequently calls the ratings service when handling requests, you used Istio to inject a 2 second delay in calls to ratings, causing the reviews service to take longer than half a second to complete, and consequently you could see the timeout in action.
You observed that instead of displaying reviews, the Bookinfo product page (which calls the reviews service to populate the page) displayed the message: "Sorry, product reviews are currently unavailable for this book". This was the result of it receiving the timeout error from the reviews service.
Click Check my progress to verify the objective.
This task shows you how to configure circuit breaking for connections, requests, and outlier detection.
Circuit breaking is an important pattern for creating resilient microservice applications. Circuit breaking allows you to write applications that limit the impact of failures, latency spikes, and other undesirable effects of network peculiarities.
In this task, you will configure circuit breaking rules and then test the configuration by intentionally “tripping” the circuit breaker.
In Cloud Shell, route all traffic to the v1 version of each service:
Output:
Create a destination rule to apply circuit breaking settings when calling the productpage service:
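The connection-pool limits mentioned later in this task (maxConnections: 1, http1MaxPendingRequests: 1) suggest a rule like the sketch below; the outlier-detection values are assumptions modeled on the standard Istio circuit-breaking sample:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1            # at most one TCP connection
      http:
        http1MaxPendingRequests: 1   # at most one queued request
        maxRequestsPerConnection: 1
    outlierDetection:                # eject hosts that keep failing
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
```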
In Cloud Shell, go to the first tab and press Ctrl+c to stop the siege.
Create a client to send traffic to the productpage service.
The client is a simple load-testing client called fortio. Fortio lets you control the number of connections, concurrency, and delays for outgoing HTTP calls. You will use this client to “trip” the circuit breaker policies you set in the DestinationRule:
Log in to the client pod and use the fortio tool to call the productpage. Pass in curl to indicate that you just want to make one call:
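A sketch modeled on the standard Istio circuit-breaking sample; the fortio pod label and the productpage port are assumptions:

```shell
# Find the fortio client pod, then make a single call via fortio curl
export FORTIO_POD=$(kubectl get pods -l app=fortio \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$FORTIO_POD" -c fortio -- \
  /usr/bin/fortio curl -quiet http://productpage:9080/productpage
```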
Call the service with two concurrent connections (-c 2) and send 20 requests (-n 20):
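For example, assuming the FORTIO_POD variable from the previous step:

```shell
kubectl exec "$FORTIO_POD" -c fortio -- \
  /usr/bin/fortio load -c 2 -qps 0 -n 20 -loglevel Warning \
  http://productpage:9080/productpage
```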
It's interesting to see that almost all requests made it through, even though maxConnections: 1 and http1MaxPendingRequests: 1. These rules indicate that if you exceed more than one connection and request concurrently, you should see some failures once the istio-proxy opens the circuit for further requests and connections. However, the istio-proxy does allow for some leeway:
Bring the number of concurrent connections up to 3:
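A sketch of the heavier load, again assuming the FORTIO_POD variable; the request count of 30 mirrors the standard Istio sample and is an assumption:

```shell
kubectl exec "$FORTIO_POD" -c fortio -- \
  /usr/bin/fortio load -c 3 -qps 0 -n 30 -loglevel Warning \
  http://productpage:9080/productpage
```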
Now you start to see the expected circuit breaking behavior. Only 36.7% of the requests succeeded and the rest were trapped by circuit breaking:
Click Check my progress to verify the objective.
In this lab, you learned about many different ways to manage and route traffic for different purposes. You also experimented with adjusting and viewing traffic shifting for yourself, including some layer 7 (application layer) routing that looks at request headers.
When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.