
Before you begin
- Labs create a Google Cloud project and resources for a fixed time
- Labs have a time limit and no pause feature. If you end the lab, you'll have to restart from the beginning.
- On the top left of your screen, click Start lab to begin
This lab is the first in a series of labs, each of which is intended to build skills related to the setup and operation of Anthos clusters on bare metal. You prepare infrastructure, create the admin workstation, create admin and user clusters, deploy workloads, and manage observability configurations.
Anthos clusters on bare metal can indeed run on bare metal servers, but they can also run on virtual machines in VMware, AWS, or even GCE. A bare metal install doesn't take direct advantage of the VMware, AWS, or GKE APIs; it uses a more generic approach to running Anthos on your cluster.
In this lab, you run Anthos clusters on bare metal on top of GCE VMs. This requires a little extra work: the load balancer VMs need Layer 2 connectivity, so you configure the VMs to use VXLAN, which encapsulates Layer 2 connections on a Layer 3 network. In a pure bare metal deployment, you would simply skip this step; everything else would remain the same.
In this lab, you learn how to perform the following tasks:
- Prepare the GCE infrastructure: VMs, network tags, VXLAN connectivity, and firewall rules
- Set up the admin workstation
- Create the Anthos on bare metal admin cluster
- Explore the cluster creation logs and run health checks
- Log in to the admin cluster from the Google Cloud Console
In this task, you use Qwiklabs and perform initialization steps for your lab.
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Sign in to Qwiklabs using an incognito window.
Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
There is no pause feature. You can restart if needed, but you have to start at the beginning.
When ready, click Start lab.
Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.
Click Open Google Console.
Click Use another account and copy/paste credentials for this lab into the prompts.
If you use other credentials, you'll receive errors or incur charges.
Accept the terms and skip the recovery resource page.
After you complete the initial sign-in steps, the project dashboard appears.
Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud.
Google Cloud Shell provides command-line access to your Google Cloud resources.
In Cloud console, on the top right toolbar, click the Open Cloud Shell button.
Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
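For example, you can list the active account and the current project with these two standard gcloud commands:

```bash
gcloud auth list
gcloud config list project
```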
To reflect real-world best practices, your project has been configured as follows:
In the Console, go to Navigation > VPC network > VPC networks and verify that you have a single custom subnet network. It should look like this:
Go to Navigation > VPC network > Firewall and verify that you have two firewall rules: one that only allows inbound SSH traffic via IAP, and one that enables VXLAN traffic. It should look like this:
When you create Anthos clusters in Google Cloud, AWS, or VMware, you typically use an environment-specific installation process that takes advantage of native APIs.
When you create a bare metal cluster, the installation process doesn't automatically create machines for you (typically, they are physical machines so they can't be created out of thin air). That doesn't mean, however, that you can't create "bare metal" clusters running on VMs in any of those environments.
In this lab, you create a "bare metal" cluster on GCE VMs. It behaves almost identically to a bare metal cluster running on physical devices in your data center. The lab instructions highlight where the installation or administration deviates from a pure bare metal scenario.
You will build two clusters across this lab series (see the diagram below). The admin cluster, which you build in this lab, has a control plane node and no worker nodes. The user cluster has a control plane node and a worker node. In a production environment, you might use three nodes for high availability of both the data plane and the control plane.
In Cloud Shell, initialize environment variables you use in later commands:
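The lab provides the exact commands; the following is a minimal sketch, with the VM names, zone, and network name assumed for illustration:

```bash
# Hypothetical names/values for illustration; use the lab-provided ones.
export PROJECT_ID=$(gcloud config get-value project)
export ZONE=us-central1-a           # assumed zone
export NETWORK=default              # the lab uses a custom VPC; name assumed
export WS_NAME=abm-ws               # admin workstation
export CP_ADMIN=abm-admin-cp1       # admin cluster control plane node
export CP_USER=abm-user-cp1         # user cluster control plane node
export WORKER_USER=abm-user-w1      # user cluster worker node
```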
Create the VM to be used as your admin workstation:
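A sketch of the creation command, reusing the variables above (machine type and image are assumptions):

```bash
gcloud compute instances create $WS_NAME \
    --zone=$ZONE \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud \
    --machine-type=n1-standard-2 \
    --can-ip-forward
```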
Create the VMs used as cluster servers:
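A sketch, assuming one control plane VM per cluster plus one user cluster worker:

```bash
for vm in $CP_ADMIN $CP_USER $WORKER_USER; do
  gcloud compute instances create $vm \
      --zone=$ZONE \
      --image-family=ubuntu-2004-lts \
      --image-project=ubuntu-os-cloud \
      --machine-type=n1-standard-4 \
      --can-ip-forward
done
```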
In the Console, go to Navigation > Compute Engine > VM instances and confirm the VMs have been created. It should look like this:
In Cloud Shell, assign appropriate network tags to the servers, based on their roles (these tags are used to control firewall rule application):
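A sketch; the tag names (cp, worker, lb) are assumptions and must match the firewall rules you create later:

```bash
# The admin control plane node also hosts the load balancer in this lab.
gcloud compute instances add-tags $CP_ADMIN --zone=$ZONE --tags=cp,lb
gcloud compute instances add-tags $CP_USER --zone=$ZONE --tags=cp
gcloud compute instances add-tags $WORKER_USER --zone=$ZONE --tags=worker
```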
Disable Uncomplicated Firewall (UFW) on each of the servers:
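A sketch that runs the command on each server over IAP-tunneled SSH, matching the lab's firewall setup:

```bash
for vm in $WS_NAME $CP_ADMIN $CP_USER $WORKER_USER; do
  gcloud compute ssh root@$vm --zone=$ZONE --tunnel-through-iap \
      --command="ufw disable"
done
```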
Configure each VM to implement vxlan functionality; each VM gets an IP address in the 10.200.0.x range:
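A sketch of the per-VM setup; the lab script loops over all VMs and appends a forwarding entry for every peer (the interface name and VXLAN id are assumptions):

```bash
# Run on each VM; repeat the fdb line once per peer VM.
ip link add vxlan0 type vxlan id 42 dev ens4 dstport 0
bridge fdb append to 00:00:00:00:00:00 dst <PEER_INTERNAL_IP> dev vxlan0
ip addr add 10.200.0.2/24 dev vxlan0   # .2 here; .3, .4, .5 on the other VMs
ip link set up dev vxlan0
```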
Check the vxlan IPs that have been associated with each of the VMs:
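A sketch that queries each VM for its vxlan0 address:

```bash
for vm in $WS_NAME $CP_ADMIN $CP_USER $WORKER_USER; do
  echo $vm
  gcloud compute ssh root@$vm --zone=$ZONE --tunnel-through-iap \
      --command="ip -brief addr show vxlan0"
done
```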
You should see output that looks like this:
Create the firewall rules that allow traffic to the control plane servers:
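A sketch; the ports follow the Anthos on bare metal control plane requirements (etcd, API server, kubelet), while the rule name and tag are assumed from the tagging step:

```bash
gcloud compute firewall-rules create abm-allow-cp \
    --network=$NETWORK \
    --allow="tcp:2379-2381,tcp:6444,tcp:10250-10252" \
    --source-ranges="10.0.0.0/8" \
    --target-tags=cp
```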
Create the firewall rules that allow inbound traffic to the worker nodes:
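A sketch covering the kubelet and NodePort ranges (rule name and tag assumed):

```bash
gcloud compute firewall-rules create abm-allow-worker \
    --network=$NETWORK \
    --allow="tcp:10250,tcp:30000-32767" \
    --source-ranges="10.0.0.0/8" \
    --target-tags=worker
```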
Create the firewall rules that allow inbound traffic to the load balancer nodes. In our case, the load balancer is hosted in the same node as the admin cluster control plane node.
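A sketch covering HTTPS plus the MetalLB memberlist port (rule name and tag assumed):

```bash
gcloud compute firewall-rules create abm-allow-lb \
    --network=$NETWORK \
    --allow="tcp:443,tcp:7946,udp:7946" \
    --source-ranges="10.0.0.0/8" \
    --target-tags=lb
```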
Create the firewall rules that allow multi-cluster traffic. These rules allow communication between the admin and the user cluster. If you were deploying an Anthos cluster on bare metal of type hybrid or standalone with no other user clusters, you would not need these firewall rules.
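A sketch; the admin cluster provisions user cluster nodes over SSH and reaches their API server (rule name and ports assumed):

```bash
gcloud compute firewall-rules create abm-allow-multi \
    --network=$NETWORK \
    --allow="tcp:22,tcp:6444" \
    --source-ranges="10.0.0.0/8"
```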
In the Console, confirm the creation of the firewall rules by visiting Navigation > VPC network > Firewall. It should look like this:
Congratulations! You have set up your Google Cloud project, your network, and the servers that will be used by your bare metal cluster.
In this task, you prepare your admin workstation. This includes installing the Google Cloud SDK, kubectl, the bmctl tool, Docker, and kubectx.
You also configure your cluster servers to allow SSH sessions from the admin workstation, so it can do its work.
If you don't already have an open, active Cloud Shell session, open Cloud Shell. Then, initialize key variables in Cloud Shell:
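A sketch, matching the assumed names used earlier:

```bash
export ZONE=us-central1-a   # assumed zone
export WS_NAME=abm-ws       # assumed admin workstation name
```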
SSH from the Cloud Shell VM into the machine you will use as your admin workstation:
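A sketch using IAP tunneling, which matches the SSH firewall rule created earlier:

```bash
gcloud compute ssh root@$WS_NAME --zone=$ZONE --tunnel-through-iap
```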
In the SSH session to your admin workstation, set an environment variable:
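The lab provides the exact variable; a placeholder sketch:

```bash
export PROJECT_ID=<your-lab-project-id>   # placeholder; use your lab's project ID
```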
Install the SDK onto the admin workstation. When prompted, enter the replies shown in the table that follows the command:
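One common way to fetch the interactive installer (the lab may pin a specific version); the prompts in the table below then appear:

```bash
curl https://sdk.cloud.google.com | bash
```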
Prompt | Value |
---|---|
Installation Directory | root (default) |
Do you want to help... | N (default) |
Do you want to continue ($PATH update) | Y (default) |
Enter a path... | /root/.bashrc (default) |
Restart your shell, then configure the Application Default Credentials on your server:
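A sketch; the --no-launch-browser flow prints a URL you open locally, then you paste the verification code back into the terminal:

```bash
exec -l $SHELL
gcloud auth application-default login --no-launch-browser
```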
Use gcloud to install kubectl on the admin workstation:
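kubectl ships as a gcloud component:

```bash
gcloud components install kubectl
```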
When prompted if you want to continue, enter Y.
Confirm that kubectl is installed and working:
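For example:

```bash
kubectl version --client
```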
You should see output that looks like this:
Create a new directory for bmctl and related files:
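A sketch (directory name assumed):

```bash
mkdir baremetal && cd baremetal
```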
Download and configure the bmctl tool, which you use to create and manage the bare metal clusters:
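A sketch; bmctl is published in the anthos-baremetal-release bucket, but the version shown here is an assumption:

```bash
gsutil cp gs://anthos-baremetal-release/bmctl/1.13.0/linux-amd64/bmctl .
chmod a+x bmctl
mv bmctl /usr/local/sbin/
```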
Confirm that the tool has been installed:
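For example:

```bash
bmctl version
```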
You should see output that looks similar to this:
Download and install Docker:
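One common approach is Docker's convenience script (the lab may use a different method):

```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
```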
Confirm that Docker was successfully installed:
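For example:

```bash
docker version
```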
You should see output that looks similar to this:
In order for the admin workstation to configure all the servers in your clusters, the bmctl utility must be able to SSH into the servers. You are going to configure the servers to allow this by creating an SSH key pair for the admin workstation, then configuring each cluster server to allow SSH connections using the private key from that key pair.
In your SSH session to the admin workstation, create a new key pair with the following command:
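The standard OpenSSH key generator; the prompts in the table below then appear:

```bash
ssh-keygen -t rsa
```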
When prompted, enter the following replies:
Prompt | Value |
---|---|
Enter file in which... | /root/.ssh/id_rsa (default) |
Enter passphrase | <ENTER> (no passphrase) |
Enter same passphrase... | <ENTER> (no passphrase) |
Configure all the cluster machines to accept this key for SSH sessions with the following commands:
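A sketch; it assumes the hypothetical VM names from earlier and appends the workstation's public key to each server's authorized_keys:

```bash
for vm in abm-admin-cp1 abm-user-cp1 abm-user-w1; do   # hypothetical names
  gcloud compute ssh root@$vm --zone=us-central1-a --tunnel-through-iap \
      --command="echo '$(cat /root/.ssh/id_rsa.pub)' >> /root/.ssh/authorized_keys"
done
```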
Install kubectx on the admin workstation with the following commands:
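kubectx's documented install-from-source method:

```bash
git clone https://github.com/ahmetb/kubectx /opt/kubectx
ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
```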
Congratulations! You have set up and configured your admin workstation, and are now ready to use it to create and manage Anthos admin and user clusters.
In this task, you create your admin cluster. This includes enabling APIs, creating service accounts, generating and editing the cluster configuration file, and creating the cluster itself.
In Cloud Shell, where you have an active SSH session into the admin workstation, initialize some key environment variables:
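A sketch; the cluster name matches the one used later in this lab:

```bash
export CLUSTER_NAME=abm-admin-cluster
export PROJECT_ID=$(gcloud config get-value project)
```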
Use the bmctl tool to enable APIs, create service accounts, and generate a configuration file:
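A sketch using bmctl's documented flags:

```bash
bmctl create config -c $CLUSTER_NAME \
    --enable-apis \
    --create-service-accounts \
    --project-id=$PROJECT_ID
```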
Check that the services are activated by going to Navigation > APIs & Services > Dashboard. You should see the enabled services in the list, like this:
If for some reason you don't see the Anthos services listed, it's likely an issue with updating the list. You can search for one of the APIs in the search bar at the top of the screen and see that it's enabled, or you can take it on faith and continue.
Check that the service accounts have been created by going to Navigation > IAM & Admin > Service Accounts. You should see the newly created service accounts:
Check the roles assigned to your service accounts by going to Navigation > IAM & Admin > IAM. You should see the newly created service accounts and their role assignment:
Check the key files for your service accounts have been downloaded:
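A sketch; bmctl downloads the keys into its workspace (exact path assumed):

```bash
ls -l bmctl-workspace/.sa-keys/
```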
You should see the downloaded key files for each service account:
While bmctl creates a draft configuration file, you need to make multiple edits to make the file usable. The instructions below walk you through the process.
To avoid mistakes, commands are provided that pre-fill the information for you. If you prefer to edit the file yourself, you can; just keep in mind that spaces and indentation matter in YAML files, so be careful to get the positioning correct.
View the generated configuration file:
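bmctl writes the draft config into its workspace:

```bash
cat bmctl-workspace/$CLUSTER_NAME/$CLUSTER_NAME.yaml
```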
Modify the config file by updating the following lines with the values suggested below. You can either edit the file by hand in vi, or you can run the commands provided to update the file for you.
Key | Value |
---|---|
spec:type | admin |
sshPrivateKeyPath | /root/.ssh/id_rsa |
controlPlane:nodePoolSpec:nodes | - address: 10.200.0.3 |
loadBalancer:vips:controlPlaneVIP | 10.200.0.98 |
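A hedged sketch of commands that could apply these edits with sed; the match patterns assume the placeholder text in the generated file:

```bash
cfg=bmctl-workspace/$CLUSTER_NAME/$CLUSTER_NAME.yaml
sed -i 's/type: hybrid/type: admin/' $cfg
sed -i 's|sshPrivateKeyPath: <path to SSH private key.*|sshPrivateKeyPath: /root/.ssh/id_rsa|' $cfg
sed -i 's/- address: <Machine 1 IP>/- address: 10.200.0.3/' $cfg
sed -i 's/controlPlaneVIP: 10.0.0.8/controlPlaneVIP: 10.200.0.98/' $cfg
```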
Delete the entire NodePool section of the configuration file. Typically, admin clusters don't need worker nodes. You can do this manually or use the following command to automate the modification:
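A sketch that assumes the NodePool spec is the final YAML document in the file:

```bash
cfg=bmctl-workspace/$CLUSTER_NAME/$CLUSTER_NAME.yaml
last=$(grep -n '^---' $cfg | tail -1 | cut -d: -f1)   # last document separator
head -n $((last - 1)) $cfg > /tmp/cfg && mv /tmp/cfg $cfg
```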
If you haven't already, review the modified admin cluster configuration file:
Create your admin cluster with the following command:
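```bash
bmctl create cluster -c $CLUSTER_NAME
```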
It will take about 20 minutes for your cluster creation to complete. Wait until the cluster creation is done before moving to the next task.
In order to create the admin cluster, Anthos must execute some scripts that connect to the cluster nodes and install the necessary software. Instead of running the scripts directly on the admin workstation, Anthos creates a temporary Kind cluster on the admin workstation that runs those scripts as Kubernetes Jobs and makes sure that the software is installed correctly.
You can find the kubeconfig file under bmctl-workspace/.kindkubeconfig, which you can use to access the Kind Kubernetes API to view logs and debug the admin cluster creation process. To simplify debugging, and so you can access the information after the creation has completed, the Kind cluster exports the logs onto the admin workstation under the bmctl-workspace/abm-admin-cluster/log folder.
In the following task, you learn how to access those logs.
In Cloud Shell, find the logs exported in the creation process:
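A sketch; the log folder name carries a creation timestamp:

```bash
ls bmctl-workspace/$CLUSTER_NAME/log/create-cluster-*
```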
Investigate all the generated log files:
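For example, to view the end of the main creation log:

```bash
tail -50 bmctl-workspace/$CLUSTER_NAME/log/create-cluster-*/create-cluster.log
```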
In addition to the create-cluster.log file, there is another file in that folder:
10.200.0.3: contains all the logs produced by the admin master node. Here, you see checks that verify the binaries have been copied and installed (including custom tools, Docker, kubeadm, kubectl, and kubelet), the creation of Kubernetes CA certificates, and kubeadm actions like initializing and joining the cluster.
View the admin master node logs:
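```bash
cat bmctl-workspace/$CLUSTER_NAME/log/create-cluster-*/10.200.0.3
```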
Investigate the preflight checks that bmctl performs before creating the cluster:
bmctl check cluster --snapshot --cluster $CLUSTER_NAME --admin-kubeconfig $ADMIN_KUBECONFIG_PATH
Check the connectivity tests for the nodes in your network:
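A sketch; it assumes the preflight logs use the same per-node file naming as the creation logs:

```bash
cat bmctl-workspace/$CLUSTER_NAME/log/preflight-*/10.200.0.3
```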
In Cloud Shell, configure kubectl to use the newly generated kubeconfig file that points to your admin cluster:
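bmctl writes the kubeconfig into its workspace:

```bash
export KUBECONFIG=bmctl-workspace/$CLUSTER_NAME/$CLUSTER_NAME-kubeconfig
```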
Rename your kubectl context to something a little easier to remember:
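kubectx can rename the current context in place (the new name is an assumption):

```bash
kubectx admin=.
```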
Test to make sure you can access and use your admin cluster:
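For example:

```bash
kubectl get nodes
```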
You should see results that look like this:
Verify that the admin cluster has been registered with Anthos hub by visiting Navigation > Kubernetes Engine > Clusters. It should look like this:
In Cloud Shell, create a Kubernetes Service Account on your cluster and grant it the cluster-admin role:
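A sketch; the service account name is an assumption:

```bash
kubectl create serviceaccount -n kube-system admin-user
kubectl create clusterrolebinding admin-user-binding \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:admin-user
```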
Create a token that you can use to log in to the cluster from the Console:
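On recent Kubernetes versions you can mint a token directly (account name assumed from the previous step):

```bash
kubectl create token admin-user -n kube-system
```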
Select the token in the SSH session (selecting it copies the token; don't try to copy with CTRL+C).
Find the abm-admin-cluster entry in the cluster list showing in the Console and click the three-dots menu at the far right of the row.
Select Log in, select Token, then paste the token from your Clipboard into the provided field. Click Login.
Congratulations! You have successfully logged in to your Anthos on bare metal admin cluster!
If you get disconnected from Cloud Shell and want to sign back in to the admin workstation:
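A sketch, reusing the assumed names from earlier:

```bash
gcloud compute ssh root@abm-ws --zone=us-central1-a --tunnel-through-iap
```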
If you get disconnected from Cloud Shell and want to connect to the admin cluster:
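A sketch, run from the SSH session on the admin workstation:

```bash
export KUBECONFIG=bmctl-workspace/abm-admin-cluster/abm-admin-cluster-kubeconfig
kubectx admin
```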
In this lab, you deployed the bare metal infrastructure on GCE and installed the Anthos on bare metal admin cluster. You also learned how to debug the cluster creation and how to run health checks in your cluster. Finally, you logged in through the Google Cloud Console and accessed your on-premises cluster from Google Cloud.
When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
- 1 star = Very dissatisfied
- 2 stars = Dissatisfied
- 3 stars = Neutral
- 4 stars = Satisfied
- 5 stars = Very satisfied
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.