
Before you begin
- Labs create a Google Cloud project and resources for a fixed time
- Labs have a time limit and no pause feature. If you end the lab, you'll have to restart from the beginning.
- On the top left of your screen, click Start lab to begin
Tasks and scores:
- Create a private cluster (30 points)
- Add an authorized network for cluster master access (35 points)
- Run web server applications (35 points)
In this lab, you will create a private cluster, add an authorized network for API access to it, and then configure a network policy for Pod security.
In this lab, you learn how to perform the following tasks:
- Create and test a private cluster
- Configure the cluster for authorized network access to the control plane
- Configure a cluster network policy for Pod security
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
If necessary, copy the Username below and paste it into the Sign in dialog.
You can also find the Username in the Lab Details panel.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
You can also find the Password in the Lab Details panel.
Click Next.
Click through the subsequent pages:
After a few moments, the Google Cloud console opens in this tab.
After you complete the initial sign-in steps, the project dashboard appears.
Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud.
Google Cloud Shell provides command-line access to your Google Cloud resources.
In Cloud console, on the top right toolbar, click the Open Cloud Shell button.
Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
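For example, you can confirm the active account and the current project with these standard gcloud commands (optional; the output reflects your own lab credentials and project):

  # List the active (authenticated) account:
  gcloud auth list

  # Show the project this session is configured to use:
  gcloud config list project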
In this task, you create a private cluster, consider the options for how private to make it, and then compare your private cluster to your original cluster.
In a private cluster, the nodes have internal RFC 1918 IP addresses only, which ensures that their workloads are isolated from the public internet. The nodes in a non-private cluster have external IP addresses, potentially allowing traffic to and from the internet.
Name the cluster private-cluster.
You may have to scroll down to see this option.
This setting lets you specify the range of addresses that can access the cluster control plane externally. When this checkbox is not selected, you can access kubectl only from within the Google Cloud network. In this lab, you will access kubectl only through the Google Cloud network, but you will modify this setting later.
The following values appear only under the private cluster:
You have several options to lock down your cluster to varying degrees:
Without public IP addresses, code running on the nodes can't access the public internet unless you configure a NAT gateway such as Cloud NAT.
You might use private clusters to provide services such as internal APIs that are meant only to be accessed by resources inside your network. For example, the resources might be private tools that only your company uses. Or they might be backend services accessed by your frontend services, and perhaps only those frontend services are accessed directly by external customers or users. In such cases, private clusters are a good way to reduce your application's attack surface.
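For reference, a private cluster comparable to the one described here can also be created from the command line. A rough sketch (the zone variable, node count, and control-plane IP range are assumptions, not values from the lab):

  gcloud container clusters create private-cluster \
      --zone $my_zone \
      --enable-private-nodes \
      --enable-ip-alias \
      --master-ipv4-cidr 172.16.0.0/28 \
      --num-nodes 2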
Click Check my progress to verify the objective.
After cluster creation, you might want to issue commands to your cluster from outside Google Cloud. For example, you might decide that only your corporate network should issue commands to your cluster control plane. Unfortunately, you didn't specify the authorized network on cluster creation.
In this task, you add an authorized network for cluster control plane access.
Name the authorized network Corporate and specify 192.168.1.0/24 as its CIDR range.
Multiple networks can be added here if necessary, but no more than 50 CIDR ranges are allowed.
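If you prefer the command line, an equivalent change can be made with gcloud. A sketch, assuming the private-cluster created earlier and a zone placeholder:

  gcloud container clusters update private-cluster \
      --zone $my_zone \
      --enable-master-authorized-networks \
      --master-authorized-networks 192.168.1.0/24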
Click Check my progress to verify the objective.
In this task, you create a cluster network policy to restrict communication between the Pods. A zero-trust zone is important to prevent lateral attacks within the cluster when an intruder compromises one of the Pods.
In Cloud Shell, create a new cluster for this task. The cluster-creation command adds the flag --enable-network-policy to the parameters you have used in previous labs; this flag allows this cluster to use cluster network policies.
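For reference, a cluster-creation command of that shape might look like the following sketch (the cluster name, zone variable, and node count are placeholders; only --enable-network-policy is specific to this task):

  gcloud container clusters create example-cluster \
      --zone $my_zone \
      --num-nodes 2 \
      --enable-ip-alias \
      --enable-network-policy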
Run a simple web server application with the label app=hello, and expose the web application internally in the cluster.
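One possible form of this step uses kubectl run (the hello-app sample image and the hello-web name are assumptions):

  # Run a Pod labeled app=hello and expose it inside the cluster as a
  # ClusterIP Service named hello-web on port 8080.
  kubectl run hello-web --labels app=hello \
      --image=gcr.io/google-samples/hello-app:1.0 \
      --port 8080 --expose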
Let's create a sample NetworkPolicy manifest file called hello-allow-from-foo.yaml. This manifest file defines an ingress policy that allows access to Pods labeled app: hello from Pods labeled app: foo.

Create and open hello-allow-from-foo.yaml with nano using the following command:

  nano hello-allow-from-foo.yaml

Add the following content to the hello-allow-from-foo.yaml file:
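A minimal policy matching this description (the metadata name is inferred from the filename):

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: hello-allow-from-foo
  spec:
    policyTypes:
    - Ingress
    # The policy applies to Pods labeled app: hello ...
    podSelector:
      matchLabels:
        app: hello
    ingress:
    # ... and admits ingress traffic only from Pods labeled app: foo.
    - from:
      - podSelector:
          matchLabels:
            app: foo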
Press Ctrl+O, and then press Enter to save your edited file.
Press Ctrl+X to exit the nano text editor.
Create an ingress policy:
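The manifest is applied with kubectl, for example:

  kubectl apply -f hello-allow-from-foo.yaml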
Now run a temporary Pod called test-1 with the label app=foo, and get a shell in the Pod (a sketch of a typical command follows the list below). The kubectl options used here are:
- --stdin (alternatively -i) creates an interactive session attached to STDIN on the container.
- --tty (alternatively -t) allocates a TTY for each container in the Pod.
- --rm instructs Kubernetes to treat this as a temporary Pod that will be removed as soon as it completes its startup task. As this is an interactive session, it will be removed as soon as the user exits the session.
- --labels (alternatively -l) adds a set of labels to the Pod.
- --restart defines the restart policy for the Pod.
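A typical form of this command, followed by a quick connectivity check from the resulting shell (the alpine image and the hello-web:8080 address are assumptions):

  kubectl run test-1 --labels app=foo --image=alpine \
      --restart=Never --rm --stdin --tty

  # From inside the Pod, a request to the hello service should succeed,
  # because test-1 carries the app=foo label allowed by the ingress policy:
  wget -qO- --timeout=2 http://hello-web:8080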
Type exit and press ENTER to leave the shell.
Now you will run a different Pod using the same Pod name but with a label, app=other, that does not match the podSelector in the active network policy. This Pod should not have the ability to access the hello-web application.
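A sketch of this step, under the same assumptions as before:

  kubectl run test-1 --labels app=other --image=alpine \
      --restart=Never --rm --stdin --tty

  # Inside the Pod's shell, the same request is now blocked by the
  # hello-allow-from-foo ingress policy:
  wget -qO- --timeout=2 http://hello-web:8080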
The request times out.
You can restrict outgoing (egress) traffic as you do incoming traffic. However, in order to query internal hostnames (such as hello-web) or external hostnames (such as www.example.com), you must allow DNS resolution in your egress network policies. DNS traffic occurs on port 53, using TCP and UDP protocols.
Let's create a NetworkPolicy manifest file called foo-allow-to-hello.yaml. This file defines a policy that permits Pods with the label app: foo to communicate with Pods labeled app: hello on any port number, and allows the Pods labeled app: foo to communicate with any computer on UDP port 53, which is used for DNS resolution. Without the DNS port open, you will not be able to resolve hostnames.
Create and open foo-allow-to-hello.yaml with nano using the following command:

  nano foo-allow-to-hello.yaml

Add the following content to the foo-allow-to-hello.yaml file:
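A minimal policy matching this description (the metadata name is inferred from the filename):

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: foo-allow-to-hello
  spec:
    policyTypes:
    - Egress
    # The policy applies to Pods labeled app: foo.
    podSelector:
      matchLabels:
        app: foo
    egress:
    # Allow traffic to Pods labeled app: hello on any port ...
    - to:
      - podSelector:
          matchLabels:
            app: hello
    # ... and allow DNS lookups (UDP port 53) to any destination.
    - ports:
      - protocol: UDP
        port: 53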
Press Ctrl+O, and then press Enter to save your edited file.
Press Ctrl+X to exit the nano text editor.
Create an egress policy:
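As before, the manifest is applied with kubectl, for example:

  kubectl apply -f foo-allow-to-hello.yaml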
Run a temporary Pod again with the app=foo label and get a shell prompt inside the container.
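A sketch of this step and of the requests typically attempted from that shell (the Pod name test-3, the hello-web-2 service, and the ports are assumptions based on the explanations that follow):

  kubectl run test-3 --labels app=foo --image=alpine \
      --restart=Never --rm --stdin --tty

  # Allowed: the egress policy permits app=foo Pods to reach app=hello Pods.
  wget -qO- --timeout=2 http://hello-web:8080

  # Blocked: no policy allows traffic to Pods labeled app: hello-2.
  wget -qO- --timeout=2 http://hello-web-2:8080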
This fails because none of the network policies you have defined allow traffic to Pods labeled app: hello-2.
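The last check attempts to reach an external site over HTTP, for example:

  wget -qO- --timeout=2 http://www.example.com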
This fails because the network policies do not allow external HTTP traffic (TCP port 80).
Click Check my progress to verify the objective.
When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.