Checkpoints
- Create GCS bucket (10 points)
- Copy startup script and code to Cloud Storage bucket (10 points)
- Deploy instances and configure network (20 points)
- Create managed instance groups (20 points)
- Create HTTP(S) load balancers (10 points)
- Update the frontend instances (10 points)
- Scaling GCE (10 points)
- Update the website (10 points)
Hosting a Web App on Google Cloud Using Compute Engine - Azure
- GSP1123
- Overview
- Objectives
- Setup and requirements
- Task 1. Enable Compute Engine API
- Task 2. Create Cloud Storage bucket
- Task 3. Clone source repository
- Task 4. Create Compute Engine instances
- Task 5. Create managed instance groups
- Task 6. Create load balancers
- Task 7. Scaling Compute Engine
- Task 8. Update the website
- Congratulations!
GSP1123
Overview
Managing application creation, deployment, and ongoing updates is core to cloud operations. Key questions center around resource optimization to meet SLAs, code change deployment, seamless VM updates, and network/load balancing configuration. Azure leverages DevOps pipelines, virtual machine scale sets for autoscaling, and Azure IAM for access control. Service principals are crucial for application execution and communication, while virtual networks, load balancers, health probes, and availability sets complete the infrastructure setup. Updating VMs may involve image updates, redeployments, or managed instance updates with scheduled maintenance.
Google Cloud offers comparable flexibility. Startup scripts can fetch code updates, facilitating rolling updates for zero downtime. While solutions like GKE and App Engine exist, Compute Engine grants granular control over VMs and load balancers. This lab will demonstrate deploying and scaling the "Fancy Store" e-commerce application on Compute Engine, highlighting this powerful approach to web application management.
Objectives
In this lab you'll learn how to:
- Create Compute Engine instances
- Create instance templates from source instances
- Create managed instance groups
- Create and test managed instance group health checks
- Create HTTP(S) Load Balancers
- Create load balancer health checks
- Use a Content Delivery Network (CDN) for caching
At the end of the lab, you will have instances inside managed instance groups to provide autohealing, load balancing, autoscaling, and rolling updates for your website.
Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.
This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
- Access to a standard internet browser (Chrome browser recommended).
- Time to complete the lab---remember, once you start, you cannot pause a lab.
How to start your lab and sign in to the Google Cloud console
- Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:
  - The Open Google Cloud console button
  - Time remaining
  - The temporary credentials that you must use for this lab
  - Other information, if needed, to step through this lab
- Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
  The lab spins up resources, and then opens another tab that shows the Sign in page.
  Tip: Arrange the tabs in separate windows, side-by-side.
  Note: If you see the Choose an account dialog, click Use Another Account.
- If necessary, copy the Username below and paste it into the Sign in dialog.
  {{{user_0.username | "Username"}}}
  You can also find the Username in the Lab Details panel.
- Click Next.
- Copy the Password below and paste it into the Welcome dialog.
  {{{user_0.password | "Password"}}}
  You can also find the Password in the Lab Details panel.
- Click Next.
  Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
  Note: Using your own Google Cloud account for this lab may incur extra charges.
- Click through the subsequent pages:
  - Accept the terms and conditions.
  - Do not add recovery options or two-factor authentication (because this is a temporary account).
  - Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on the Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
- Click Activate Cloud Shell at the top of the Google Cloud console.
When you are connected, you are already authenticated, and the project is set to your Project_ID. `gcloud` is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
- (Optional) You can list the active account name with this command:
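For example, the standard call:
```
gcloud auth list
```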
- Click Authorize.
Output:
- (Optional) You can list the project ID with this command:
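The standard call for this is:
```
gcloud config list project
```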
Output:
Note: For full documentation of `gcloud`, in Google Cloud, refer to the gcloud CLI overview guide.
Set the default zone
- Set the default zone and project configuration:
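A sketch of this step; "ZONE" is a placeholder for the zone assigned to your lab, and the exported `ZONE` variable is assumed by later commands in this guide:
```
gcloud config set compute/zone "ZONE"
export ZONE=$(gcloud config get-value compute/zone)
```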
Learn more from the Regions & Zones documentation.
Note: When you run `gcloud` on your own machine, the config settings are persisted across sessions. But in Cloud Shell, you need to set this for every new session or reconnection.
Task 1. Enable Compute Engine API
Next, enable the Compute Engine API.
- Execute the following to enable the Compute Engine API:
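The API can be enabled with the standard services command:
```
gcloud services enable compute.googleapis.com
```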
Task 2. Create Cloud Storage bucket
You will use a Cloud Storage bucket to house your built code as well as your startup scripts.
- From within Cloud Shell, execute the following to create a new Cloud Storage bucket:
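A representative command; the `fancy-store-` bucket name prefix is an assumption consistent with the `fancy-*` resource names used later in this lab:
```
gsutil mb gs://fancy-store-$DEVSHELL_PROJECT_ID
```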
Note: Use of the `$DEVSHELL_PROJECT_ID` environment variable within Cloud Shell helps ensure the names of objects are unique. Since all Project IDs within Google Cloud must be unique, appending the Project ID should make other names unique as well.
Click Check my progress to verify the objective.
Task 3. Clone source repository
You will be using the existing Fancy Store ecommerce website based on the `monolith-to-microservices` repository as the basis for your website.
You will clone the source code so you can focus on the aspects of deploying to Compute Engine. Later on in this lab, you will perform a small update to the code to demonstrate the simplicity of updating on Compute Engine.
- Clone the source code and then navigate to the `monolith-to-microservices` directory:
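A sketch, assuming the public googlecodelabs repository of the same name:
```
git clone https://github.com/googlecodelabs/monolith-to-microservices.git
cd ~/monolith-to-microservices
```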
- Run the initial build of the code to allow the application to run locally:
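Assuming the repository's bundled setup script performs this build:
```
./setup.sh
```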
It will take a few minutes for this script to finish.
- Once completed, ensure Cloud Shell is running a compatible Node.js version with the following command:
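One way to do this with the preinstalled nvm:
```
nvm install --lts
```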
- Next, run the following to test the application: switch to the `microservices` directory and start the web server:
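For example:
```
cd microservices
npm start
```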
You should see the following output:
- Preview your application by clicking the web preview icon then selecting Preview on port 8080.
This opens a new window where you can see the frontend of Fancy Store.
- Close this window after viewing the website and then press CTRL+C in the terminal window to stop the web server process.
Task 4. Create Compute Engine instances
Now it's time to start deploying some Compute Engine instances!
In the following steps you will:
- Create a startup script to configure instances.
- Clone source code and upload to Cloud Storage.
- Deploy a Compute Engine instance to host the backend microservices.
- Reconfigure the frontend code to utilize the backend microservices instance.
- Deploy a Compute Engine instance to host the frontend microservice.
- Configure the network to allow communication.
Create the startup script
A startup script will be used to instruct the instance what to do each time it is started. This way the instances are automatically configured.
- Click Open Editor in the Cloud Shell ribbon to open the Code Editor.
- Navigate to the `monolith-to-microservices` folder.
- Click on File > New File and create a file called `startup-script.sh`.
-
Add the following code to the file. You will edit some of the code after it's added:
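The full script is not reproduced here; the following is a condensed sketch that performs the tasks described after this section. The bucket path (with its `[DEVSHELL_PROJECT_ID]` placeholder), the `/fancy-store` directory, and the Supervisor program name `nodeapp` are assumptions consistent with this lab's naming:
```
#!/bin/bash
# Sketch of the startup script (abridged).

# Install the Logging agent so app logs sent to syslog are collected.
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install

# Install dependencies from apt.
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor psmisc

# Install Node.js.
mkdir -p /opt/nodejs
curl https://nodejs.org/dist/v16.14.0/node-v16.14.0-linux-x64.tar.gz \
  | tar xzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm

# Fetch the application code from the Cloud Storage bucket.
mkdir /fancy-store
gsutil -m cp -r gs://fancy-store-[DEVSHELL_PROJECT_ID]/monolith-to-microservices/microservices/* /fancy-store/

# Install app dependencies and create a user to run the app as a daemon.
cd /fancy-store/
npm install
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /fancy-store

# Configure Supervisor to run and restart the app, sending output to syslog.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/fancy-store
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF

supervisorctl reread
supervisorctl update
```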
- Find the text `[DEVSHELL_PROJECT_ID]` in the file and replace it with the output from the following command:
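```
echo $DEVSHELL_PROJECT_ID
```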
Example output:
The line of code within `startup-script.sh` should now be similar to the following:
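For example, with a hypothetical project ID of `my-project-id`, the gsutil line from the sketch above would read:
```
gsutil -m cp -r gs://fancy-store-my-project-id/monolith-to-microservices/microservices/* /fancy-store/
```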
- Save the file, then close it.
Note: In the Cloud Shell Code Editor, ensure "End of Line Sequence" is set to "LF" and not "CRLF". Check by looking at the bottom right of the Code Editor. If this is set to CRLF, click CRLF and then select LF in the drop-down.
The startup script performs the following tasks:
- Installs the Logging agent. The agent automatically collects logs from syslog.
- Installs Node.js and Supervisor. Supervisor runs the app as a daemon.
- Clones the app's source code from Cloud Storage Bucket and installs dependencies.
- Configures Supervisor to run the app. Supervisor makes sure the app is restarted if it exits unexpectedly or is stopped by an admin or process. It also sends the app's stdout and stderr to syslog for the Logging agent to collect.
- Return to the Cloud Shell Terminal and run the following to copy the `startup-script.sh` file into your bucket:
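A sketch, assuming the same bucket name as above:
```
gsutil cp ~/monolith-to-microservices/startup-script.sh gs://fancy-store-$DEVSHELL_PROJECT_ID
```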
It will now be accessible at: `https://storage.googleapis.com/[BUCKET_NAME]/startup-script.sh`.
Note: [BUCKET_NAME] represents the name of the Cloud Storage bucket. By default, this is only viewable by authorized users and service accounts, and so is inaccessible through a web browser. Compute Engine instances will automatically be able to access it through their service account.
Copy code into the Cloud Storage bucket
When instances launch, they pull code from the Cloud Storage bucket, so you can store some configuration variables within the `.env` file of the code.
- Copy the cloned code into your bucket:
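A sketch, assuming the same bucket name; removing the `node_modules` directories matches the note that follows:
```
cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/
```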
Note: The `node_modules` dependencies directories are deleted to ensure the copy is as fast and efficient as possible. They are recreated on the instances when they start up.
Click Check my progress to verify the objective.
Deploy the backend instance
The first instance to be deployed will be the backend instance which will house the Orders and Products microservices.
- Execute the following command to create an `e2-medium` instance that is configured to use the startup script. It is tagged as a `backend` instance so you can apply specific firewall rules to it later:
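A sketch using documented gcloud flags; the instance name `backend` comes from this lab, while the startup-script bucket path is the assumption used above:
```
gcloud compute instances create backend \
    --zone=$ZONE \
    --machine-type=e2-medium \
    --tags=backend \
    --metadata=startup-script-url=https://storage.googleapis.com/fancy-store-$DEVSHELL_PROJECT_ID/startup-script.sh
```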
Configure a connection to the backend
Before you deploy the frontend of the application, you need to update the configuration to point to the backend you just deployed.
- Retrieve the external IP address of the backend with the following command; look under the `EXTERNAL_IP` column for the backend instance:
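```
gcloud compute instances list
```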
Example output:
- Copy the External IP for the backend.
- In the Cloud Shell Explorer, navigate to `monolith-to-microservices` > `react-app`.
- In the Code Editor, select View > Toggle Hidden Files in order to see the `.env` file.
- Edit the `.env` file to point to the External IP of the backend. [BACKEND_ADDRESS] represents the External IP address of the backend instance determined from the above `gcloud` command.
- In the `.env` file, replace `localhost` with your `[BACKEND_ADDRESS]`:
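A hypothetical example; the `REACT_APP_*` variable names are assumptions, while the ports and paths match the orders (8081) and products (8082) services described later in this lab:
```
REACT_APP_ORDERS_URL=http://[BACKEND_ADDRESS]:8081/api/orders
REACT_APP_PRODUCTS_URL=http://[BACKEND_ADDRESS]:8082/api/products
```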
- Save the file.
- Rebuild `react-app`, which will update the frontend code:
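A representative build, assuming the npm scripts defined by the repository:
```
cd ~/monolith-to-microservices/react-app
npm install && npm run-script build
```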
- Then copy the application code into the Cloud Storage bucket:
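As before, a sketch assuming the same bucket name:
```
cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/
```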
Deploy the frontend instance
Now that the code is configured, deploy the frontend instance.
- Execute the following to deploy the `frontend` instance with a similar command as before. This instance is tagged as `frontend` for firewall purposes:
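A sketch mirroring the backend command, with the same bucket-path assumption:
```
gcloud compute instances create frontend \
    --zone=$ZONE \
    --machine-type=e2-medium \
    --tags=frontend \
    --metadata=startup-script-url=https://storage.googleapis.com/fancy-store-$DEVSHELL_PROJECT_ID/startup-script.sh
```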
Configure the network
- Create firewall rules to allow access to port 8080 for the frontend, and ports 8081-8082 for the backend. These firewall commands use the tags assigned during instance creation for application:
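A sketch; the rule names `fw-fe` and `fw-be` are hypothetical, while the ports and tags come from the steps above:
```
gcloud compute firewall-rules create fw-fe \
    --allow tcp:8080 \
    --target-tags=frontend
gcloud compute firewall-rules create fw-be \
    --allow tcp:8081-8082 \
    --target-tags=backend
```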
The website should now be fully functional.
- In order to navigate to the external IP of the `frontend`, you need to know the address. Run the following and look for the EXTERNAL_IP of the `frontend` instance:
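```
gcloud compute instances list
```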
Example output:
It may take a couple of minutes for the instance to start and be configured.
- Wait 30 seconds, then execute the following to monitor for the application becoming ready, replacing FRONTEND_ADDRESS with the External IP for the frontend instance:
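One way to poll the endpoint, after replacing the placeholder:
```
watch -n 2 curl http://[FRONTEND_ADDRESS]:8080
```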
Once you see output similar to the following, the website should be ready.
- Press CTRL+C to cancel the `watch` command.
- Open a new browser tab and browse to `http://[FRONTEND_ADDRESS]:8080` to access the website, where [FRONTEND_ADDRESS] is the frontend EXTERNAL_IP determined above.
- Try navigating to the Products and Orders pages; these should now work.
Click Check my progress to verify the objective.
Task 5. Create managed instance groups
To allow the application to scale, managed instance groups will be created that use the `frontend` and `backend` instances as Instance Templates.
A managed instance group (MIG) contains identical instances that you can manage as a single entity in a single zone. Managed instance groups maintain high availability of your apps by proactively keeping your instances available, that is, in the RUNNING state. You will use managed instance groups for your frontend and backend instances to provide autohealing, load balancing, autoscaling, and rolling updates.
Create instance template from source instance
Before you can create a managed instance group, you have to first create an instance template that will be the foundation for the group. Instance templates allow you to define the machine type, boot disk image or container image, network, and other instance properties to use when creating new VM instances. You can use instance templates to create instances in a managed instance group or even to create individual instances.
To create the instance template, use the existing instances you created previously.
- First, stop both instances:
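For example:
```
gcloud compute instances stop frontend --zone=$ZONE
gcloud compute instances stop backend --zone=$ZONE
```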
- Then, create the instance template from each of the source instances:
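A sketch; the template names `fancy-fe` and `fancy-be` are assumptions consistent with the `fancy-fe-new` template named later in this lab:
```
gcloud compute instance-templates create fancy-fe \
    --source-instance-zone=$ZONE \
    --source-instance=frontend
gcloud compute instance-templates create fancy-be \
    --source-instance-zone=$ZONE \
    --source-instance=backend
```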
- Confirm the instance templates were created:
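```
gcloud compute instance-templates list
```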
Example output:
- With the instance templates created, delete the `backend` VM to save resource space:
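```
gcloud compute instances delete backend --zone=$ZONE
```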
- Type and enter y when prompted.
Normally, you could delete the `frontend` VM as well, but you will use it to update the instance template later in the lab.
Create managed instance group
- Next, create two managed instance groups, one for the frontend and one for the backend:
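A sketch; the group names `fancy-fe-mig` and `fancy-be-mig` are assumptions reused by later commands in this guide:
```
gcloud compute instance-groups managed create fancy-fe-mig \
    --zone=$ZONE \
    --base-instance-name fancy-fe \
    --size 2 \
    --template fancy-fe
gcloud compute instance-groups managed create fancy-be-mig \
    --zone=$ZONE \
    --base-instance-name fancy-be \
    --size 2 \
    --template fancy-be
```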
These managed instance groups will use the instance templates and are configured for two instances each within each group to start. The instances are automatically named based on the `base-instance-name` specified, with random characters appended.
- For your application, the `frontend` microservice runs on port 8080, and the `backend` microservice runs on port 8081 for `orders` and port 8082 for `products`:
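A sketch using the assumed group names and the ports just described:
```
gcloud compute instance-groups set-named-ports fancy-fe-mig \
    --zone=$ZONE \
    --named-ports frontend:8080
gcloud compute instance-groups set-named-ports fancy-be-mig \
    --zone=$ZONE \
    --named-ports orders:8081,products:8082
```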
Since these are non-standard ports, you specify named ports to identify these. Named ports are key:value pair metadata representing the service name and the port that it's running on. Named ports can be assigned to an instance group, which indicates that the service is available on all instances in the group. This information is used by the HTTP Load Balancing service that will be configured later.
Configure autohealing
To improve the availability of the application itself and to verify it is responding, configure an autohealing policy for the managed instance groups.
An autohealing policy relies on an application-based health check to verify that an app is responding as expected. Checking that an app responds is more precise than simply verifying that an instance is in a RUNNING state, which is the default behavior.
Note: Separate health checks for load balancing and for autohealing will be used. Health checks for load balancing can and should be more aggressive because these health checks determine whether an instance receives user traffic. You want to catch non-responsive instances quickly so you can redirect traffic if necessary. In contrast, health checking for autohealing causes Compute Engine to proactively replace failing instances, so this health check should be more conservative than a load balancing health check.
- Create a health check that repairs the instance if it returns "unhealthy" 3 consecutive times for the `frontend` and `backend`:
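A sketch matching the 3-consecutive-failure policy described above; the health check names and the orders request path are assumptions:
```
gcloud compute health-checks create http fancy-fe-hc \
    --port 8080 \
    --check-interval 30s \
    --healthy-threshold 1 \
    --timeout 10s \
    --unhealthy-threshold 3
gcloud compute health-checks create http fancy-be-hc \
    --port 8081 \
    --request-path=/api/orders \
    --check-interval 30s \
    --healthy-threshold 1 \
    --timeout 10s \
    --unhealthy-threshold 3
```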
- Create a firewall rule to allow the health check probes to connect to the microservices on ports 8080-8081:
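Google's health check probes originate from the documented ranges 130.211.0.0/22 and 35.191.0.0/16; the rule name is an assumption:
```
gcloud compute firewall-rules create allow-health-check \
    --allow tcp:8080-8081 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --network default
```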
- Apply the health checks to their respective services:
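A sketch; the initial delay gives the startup script time to finish before autohealing begins:
```
gcloud compute instance-groups managed update fancy-fe-mig \
    --zone=$ZONE \
    --health-check fancy-fe-hc \
    --initial-delay 300
gcloud compute instance-groups managed update fancy-be-mig \
    --zone=$ZONE \
    --health-check fancy-be-hc \
    --initial-delay 300
```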
- Continue with the lab to allow some time for autohealing to monitor the instances in the group. You will simulate a failure to test the autohealing at the end of the lab.
Click Check my progress to verify the objective.
Task 6. Create load balancers
To complement your managed instance groups, you will use an HTTP(S) Load Balancer to serve traffic to the frontend and backend microservices, and use mappings to send traffic to the proper backend services based on pathing rules. This exposes a single load-balanced IP for all services.
You can learn more about the Load Balancing options on Google Cloud: Overview of Load Balancing.
Create HTTP(S) load balancer
Google Cloud offers many different types of load balancers. For this lab you use an HTTP(S) Load Balancer for your traffic. An HTTP load balancer is structured as follows:
- A forwarding rule directs incoming requests to a target HTTP proxy.
- The target HTTP proxy checks each request against a URL map to determine the appropriate backend service for the request.
- The backend service directs each request to an appropriate backend based on serving capacity, zone, and instance health of its attached backends. The health of each backend instance is verified using an HTTP health check. If the backend service is configured to use an HTTPS or HTTP/2 health check, the request will be encrypted on its way to the backend instance.
- Sessions between the load balancer and the instance can use the HTTP, HTTPS, or HTTP/2 protocol. If you use HTTPS or HTTP/2, each instance in the backend services must have an SSL certificate.
- Create health checks that will be used to determine which instances are capable of serving traffic for each service:
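A sketch; the health check names are assumptions, while the ports and paths come from the services above:
```
gcloud compute http-health-checks create fancy-fe-frontend-hc \
  --request-path / \
  --port 8080
gcloud compute http-health-checks create fancy-be-orders-hc \
  --request-path /api/orders \
  --port 8081
gcloud compute http-health-checks create fancy-be-products-hc \
  --request-path /api/products \
  --port 8082
```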
- Create backend services that are the target for load-balanced traffic. The backend services will use the health checks and named ports you created:
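A sketch; the backend service names are assumptions, and the named ports reference those set earlier:
```
gcloud compute backend-services create fancy-fe-frontend \
  --http-health-checks fancy-fe-frontend-hc \
  --port-name frontend \
  --global
gcloud compute backend-services create fancy-be-orders \
  --http-health-checks fancy-be-orders-hc \
  --port-name orders \
  --global
gcloud compute backend-services create fancy-be-products \
  --http-health-checks fancy-be-products-hc \
  --port-name products \
  --global
```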
- Add the instance groups to the Load Balancer's backend services:
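Continuing the sketch with the same assumed names:
```
gcloud compute backend-services add-backend fancy-fe-frontend \
  --instance-group-zone=$ZONE \
  --instance-group fancy-fe-mig \
  --global
gcloud compute backend-services add-backend fancy-be-orders \
  --instance-group-zone=$ZONE \
  --instance-group fancy-be-mig \
  --global
gcloud compute backend-services add-backend fancy-be-products \
  --instance-group-zone=$ZONE \
  --instance-group fancy-be-mig \
  --global
```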
- Create a URL map. The URL map defines which URLs are directed to which backend services:
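A sketch; the map name `fancy-map` is an assumption:
```
gcloud compute url-maps create fancy-map \
  --default-service fancy-fe-frontend
```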
- Create a path matcher to allow the `/api/orders` and `/api/products` paths to route to their respective services:
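A sketch using the paths named above:
```
gcloud compute url-maps add-path-matcher fancy-map \
   --default-service fancy-fe-frontend \
   --path-matcher-name orders \
   --path-rules "/api/orders=fancy-be-orders,/api/products=fancy-be-products"
```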
- Create the proxy which ties to the URL map:
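A sketch; the proxy name is an assumption:
```
gcloud compute target-http-proxies create fancy-proxy \
  --url-map fancy-map
```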
- Create a global forwarding rule that ties a public IP address and port to the proxy:
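A sketch; the rule name is an assumption:
```
gcloud compute forwarding-rules create fancy-http-rule \
  --global \
  --target-http-proxy fancy-proxy \
  --ports 80
```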
Click Check my progress to verify the objective.
Update the configuration
Now that you have a new static IP address, update the code on the `frontend` to point to this new address instead of the ephemeral address used earlier that pointed to the `backend` instance.
- In Cloud Shell, change to the `react-app` folder, which houses the `.env` file that holds the configuration:
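```
cd ~/monolith-to-microservices/react-app/
```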
- Find the IP address for the Load Balancer:
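```
gcloud compute forwarding-rules list --global
```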
Example output:
- Return to the Cloud Shell Editor and edit the `.env` file again to point to the Public IP of the Load Balancer. [LB_IP] represents the External IP address of the Load Balancer determined above.
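A hypothetical example, continuing the assumed `REACT_APP_*` variable names; note the load balancer listens on port 80, so no port is needed:
```
REACT_APP_ORDERS_URL=http://[LB_IP]/api/orders
REACT_APP_PRODUCTS_URL=http://[LB_IP]/api/products
```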
- Save the file.
- Rebuild `react-app`, which will update the frontend code:
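As before:
```
cd ~/monolith-to-microservices/react-app
npm install && npm run-script build
```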
- Copy the application code into your bucket:
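A sketch assuming the same bucket name:
```
cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/
```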
Update the frontend instances
Now that there is new code and configuration, you want the frontend instances within the managed instance group to pull the new code.
- Since your instances pull the code at startup, you can issue a rolling restart command:
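A sketch using the assumed group name; the `--max-unavailable` flag is explained in the note that follows:
```
gcloud compute instance-groups managed rolling-action replace fancy-fe-mig \
    --zone=$ZONE \
    --max-unavailable 100%
```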
Note: In this example of a rolling replace, you specifically state that all machines can be replaced immediately through the `--max-unavailable` parameter. Without this parameter, the command would keep an instance alive while replacing others to ensure availability. For testing purposes, you specify to replace all immediately for speed.
Click Check my progress to verify the objective.
Test the website
- Wait approximately 30 seconds after issuing the `rolling-action replace` command in order to give the instances time to be processed, and then check the status of the managed instance group until instances appear in the list:
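For example:
```
watch -n 2 gcloud compute instance-groups list-instances fancy-fe-mig --zone=$ZONE
```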
- Once items appear in the list, exit the `watch` command by pressing CTRL+C.
- Run the following to confirm the service is listed as HEALTHY:
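A sketch using the assumed backend service name:
```
watch -n 2 gcloud compute backend-services get-health fancy-fe-frontend --global
```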
- Wait until the 2 services are listed as HEALTHY.
Example output:
Note: If neither instance enters a HEALTHY state after waiting a little while, something is wrong with the setup of the frontend instances, such that accessing them on port 8080 doesn't work. Test this by browsing to the instances directly on port 8080.
- Once both items appear as HEALTHY in the list, exit the `watch` command by pressing CTRL+C.
Note: The application will be accessible via http://[LB_IP], where [LB_IP] is the IP_ADDRESS specified for the Load Balancer, which can be found with the following command:
```
gcloud compute forwarding-rules list --global
```
Task 7. Scaling Compute Engine
So far, you have created two managed instance groups with two instances each. This configuration is fully functional but static, regardless of load. Next, you will create an autoscaling policy based on utilization to automatically scale each managed instance group.
Automatically resize by utilization
- To create the autoscaling policy, execute the following:
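A sketch matching the 60% utilization target described below; the replica cap is an assumption:
```
gcloud compute instance-groups managed set-autoscaling \
  fancy-fe-mig \
  --zone=$ZONE \
  --max-num-replicas 2 \
  --target-load-balancing-utilization 0.60
gcloud compute instance-groups managed set-autoscaling \
  fancy-be-mig \
  --zone=$ZONE \
  --max-num-replicas 2 \
  --target-load-balancing-utilization 0.60
```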
These commands create an autoscaler on each managed instance group that automatically adds instances when load balancing utilization is above 60%, and removes instances when it falls below 60%.
Enable content delivery network
Another feature that can help with scaling is to enable Cloud CDN, a Content Delivery Network service, to provide caching for the frontend.
- Execute the following command on the frontend service:
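A sketch using the assumed frontend backend service name:
```
gcloud compute backend-services update fancy-fe-frontend \
    --enable-cdn --global
```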
When a user requests content from the HTTP(S) load balancer, the request arrives at a Google Front End (GFE) which first looks in the Cloud CDN cache for a response to the user's request. If the GFE finds a cached response, the GFE sends the cached response to the user. This is called a cache hit.
If the GFE can't find a cached response for the request, the GFE makes a request directly to the backend. If the response to this request is cacheable, the GFE stores the response in the Cloud CDN cache so that the cache can be used for subsequent requests.
Click Check my progress to verify the objective.
Task 8. Update the website
Updating instance template
Existing instance templates are not editable; however, since your instances are stateless and all configuration is done through the startup script, you only need to change the instance template if you want to change the template settings. Now you're going to make a simple change to use a larger machine type and push that out.
- Update the `frontend` instance, which acts as the basis for the instance template. During the update, you will put a file on the updated version of the instance template's image, then update the instance template, roll out the new template, and then confirm the file exists on the managed instance group instances.
- Now modify the machine type of your instance template by switching from the e2-medium machine type to a custom machine type with 4 vCPU and 3840MiB RAM.
- Run the following command to modify the machine type of the frontend instance:
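For example:
```
gcloud compute instances set-machine-type frontend \
  --zone=$ZONE \
  --machine-type custom-4-3840
```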
- Create the new Instance Template:
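The `fancy-fe-new` template name comes from this lab's expected output below:
```
gcloud compute instance-templates create fancy-fe-new \
    --source-instance=frontend \
    --source-instance-zone=$ZONE
```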
- Roll out the updated instance template to the Managed Instance Group:
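A sketch:
```
gcloud compute instance-groups managed rolling-action start-update fancy-fe-mig \
  --zone=$ZONE \
  --version template=fancy-fe-new
```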
- Wait 30 seconds then run the following to monitor the status of the update:
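For example, using the managed variant so the ACTION and INSTANCE_TEMPLATE columns referenced below are shown:
```
watch -n 2 gcloud compute instance-groups managed list-instances fancy-fe-mig \
  --zone=$ZONE
```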
This will take a few moments.
Once you have at least 1 instance in the following condition:
- STATUS: RUNNING
- ACTION set to None
- INSTANCE_TEMPLATE: the new template name (fancy-fe-new)
- Copy the name of one of the machines listed for use in the next command.
- Press CTRL+C to exit the `watch` process.
Run the following to see if the virtual machine is using the new machine type (custom-4-3840), where [VM_NAME] is the newly created instance:
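For example:
```
gcloud compute instances describe [VM_NAME] --zone=$ZONE | grep machineType
```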
Expected example output:
Make changes to the website
Scenario: Your marketing team has asked you to change the homepage for your site. They think it should be more informative of who your company is and what you actually sell.
Task: Add some text to the homepage to make the marketing team happy! It looks like one of the developers has already created the changes with the file name `index.js.new`. You can just copy this file to `index.js` and the changes should be reflected. Follow the instructions below to make the appropriate changes.
- Run the following commands to copy the updated file to the correct file name:
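A sketch; the `src/pages/Home` path is an assumption about the repository layout:
```
cd ~/monolith-to-microservices/react-app/src/pages/Home
mv index.js.new index.js
```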
- Print the file contents to verify the changes:
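Continuing the assumed path:
```
cat ~/monolith-to-microservices/react-app/src/pages/Home/index.js
```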
The resulting code should look like this:
```
...
Tired of mainstream fashion ideas, popular trends and societal norms?
This line of lifestyle products will help you catch up with the Fancy trend
and express your personal style. Start shopping Fancy items now!
...
);
}
```
You updated the React components, but you need to build the React app to generate the static files.
- Run the following command to build the React app and copy it into the monolith public directory:
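A representative build, assuming the repository's build script handles the copy into the public directory:
```
cd ~/monolith-to-microservices/react-app
npm install && npm run-script build
```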
- Then re-push this code to the bucket:
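As before, a sketch assuming the same bucket name:
```
cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/
```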
Push changes with rolling replacements
- Now force all instances to be replaced to pull the update:
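A sketch, reusing the assumed group name:
```
gcloud compute instance-groups managed rolling-action replace fancy-fe-mig \
    --zone=$ZONE \
    --max-unavailable=100%
```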
Note: In this example of a rolling replace, you specifically state that all machines can be replaced immediately through the `--max-unavailable` parameter. Without this parameter, the command would keep an instance alive while replacing others.
Click Check my progress to verify the objective.
- Wait approximately 30 seconds after issuing the `rolling-action replace` command in order to give the instances time to be processed, and then check the status of the managed instance group until instances appear in the list:
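For example:
```
watch -n 2 gcloud compute instance-groups list-instances fancy-fe-mig --zone=$ZONE
```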
- Once items appear in the list, exit the `watch` command by pressing CTRL+C.
- Run the following to confirm the service is listed as HEALTHY:
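As before:
```
watch -n 2 gcloud compute backend-services get-health fancy-fe-frontend --global
```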
- Wait a few moments for both services to appear and become HEALTHY.
Example output:
- Once items appear in the list, exit the `watch` command by pressing CTRL+C.
- Browse to the website via `http://[LB_IP]`, where [LB_IP] is the IP_ADDRESS specified for the Load Balancer, which can be found with the following command:
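```
gcloud compute forwarding-rules list --global
```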
The new website changes should now be visible.
Simulate failure
In order to confirm the health check works, log in to an instance and stop the services.
- To find an instance name, execute the following:
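For example:
```
gcloud compute instance-groups list-instances fancy-fe-mig --zone=$ZONE
```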
- Copy an instance name, then run the following to secure shell into the instance, where INSTANCE_NAME is one of the instances from the list:
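```
gcloud compute ssh [INSTANCE_NAME] --zone=$ZONE
```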
- Type in "y" to confirm, and press Enter twice to not use a password.
- Within the instance, use `supervisorctl` to stop the application:
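A sketch, assuming the `nodeapp` Supervisor program name from the startup-script sketch earlier:
```
sudo supervisorctl stop nodeapp; sudo killall node
```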
- Exit the instance:
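```
exit
```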
- Monitor the repair operations:
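One way to watch for the autohealing repair, using the standard gcloud filter syntax:
```
watch -n 2 gcloud compute operations list \
  --filter='operationType~compute.instances.repair.*'
```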
This will take a few minutes to complete.
Look for the following example output:
The managed instance group recreated the instance to repair it.
- You can also monitor through the Console - go to Navigation menu > Compute Engine > VM instances.
Congratulations!
Congratulations! In this lab, you've successfully deployed, scaled, and updated your website on Compute Engine, gaining hands-on experience with Compute Engine, Managed Instance Groups, Load Balancers, and Health Checks! This lab highlights how both Google Cloud and Azure facilitate VM-based application deployments with internet-facing load balancers. Google Cloud's approach leverages startup scripts to unify code deployment and VM configuration, streamlines VM management with autoscaling, offers global external HTTP(S) load balancing, and enables seamless image and application updates within the startup script and MIG workflow.
Let’s review the similarities and differences between the two platforms that you noticed in the lab.
Similarities:
- Google Cloud, similar to Azure, lets you deploy applications to VMs and configure load balancers to make the applications available to the internet.
Differences:
- In Google Cloud, you deployed an application service using Compute Engine from code hosted in a Cloud Storage bucket. In Azure, you would need to use a DevOps pipeline or code repository to reach the same point.
- In Google Cloud, you managed the code deployment, VM configuration, and application update using the same startup script. In Azure, you would need to manage the code deployment from DevOps pipelines or code repository, and separately manage the VMs using Scale Sets.
- In Google Cloud, you updated your application’s code and re-deployed the application using a rolling deployment model with no downtime. In Azure, a maintenance window is used for updates.
- In Google Cloud, you can enable autoscaling for managed instance groups (MIGs). Google Cloud maintains and updates the VMs in your MIGs. In Azure, you use scaling sets.
Next steps / Learn more
- To learn more, check out Autohealing and health checks in Managed Instance Groups
Google Cloud training and certification
...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.
Manual Last Updated March 18, 2024
Lab Last Tested February 05, 2024
Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.