
AHYBRID092: Deploying workloads on Anthos clusters on bare metal

Lab · 1 hour 20 minutes · Credits: 5 · Intermediate

Note: This lab may incorporate AI tools that support your learning.

Overview

This lab is the second in a series of labs, each of which is intended to build skills related to the setup and operation of Anthos clusters on bare metal. In this lab, you start with the admin workstation and admin cluster in place; you then build the user cluster. After the user cluster is running, you deploy stateless and stateful workloads and expose the workloads using LoadBalancer services and Ingresses.

Objectives

In this lab, you learn how to perform the following tasks:

  • Configure and create your Anthos on bare metal user cluster.
  • Launch workloads on your user cluster.
  • Expose L4 and L7 services on your user cluster using the bundled MetalLB load balancer.
  • Install a CSI driver and deploy stateful workloads.

Setup and requirements

In this task, you use Qwiklabs and perform initialization steps for your lab.

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Qwiklabs using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
    There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts.
    If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

After you complete the initial sign-in steps, the project dashboard appears.

  1. Click Select a project, highlight your Google Cloud Project ID, and click Open to select your project.

Activate Google Cloud Shell

Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud.

Google Cloud Shell provides command-line access to your Google Cloud resources.

  1. In Cloud console, on the top right toolbar, click the Open Cloud Shell button.

  2. Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID.

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  • You can list the active account name with this command:
gcloud auth list

Output:

Credentialed accounts:
 - @.com (active)

Example output:

Credentialed accounts:
 - google1623327_student@qwiklabs.net
  • You can list the project ID with this command:
gcloud config list project

Output:

[core]
project =

Example output:

[core]
project = qwiklabs-gcp-44776a13dea667a6

Note: Full documentation of gcloud is available in the gcloud CLI overview guide.

Task 1. Prepare your environment and connect to the admin cluster

Note: To reflect real-world best practices, your project has been configured as follows:

  • The Default network has been deleted.
  • A custom subnet network has been created.
  • Several firewall rules have been created:
    • abm-allow-cp: allows traffic to the control plane servers.
    • abm-allow-worker: allows inbound traffic to the worker nodes.
    • abm-allow-lb / abm-allow-gfe-to-lb: allows inbound traffic to the load balancer nodes. In our case, the load balancer is hosted in the same node as the admin cluster control plane node.
    • abm-allow-multi: allows multicluster traffic. This allows the communication between the admin and the user cluster.
    • iap: allows traffic from Identity-Aware Proxy (IAP), so you can SSH into internal VMs without opening port 22 to the internet.
    • vxlan: allows VXLAN networking, a network virtualization technology that encapsulates L2 Ethernet frames on an underlying L3 network.
  • Your admin workstation has been created.
  • Your admin cluster has been created.
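
If you want to confirm this pre-provisioned setup yourself, you can run a few optional checks from Cloud Shell before connecting to the admin workstation (the exact output depends on your lab project):

    # Optional verification from Cloud Shell (not required for the lab)
    gcloud compute networks list
    gcloud compute networks subnets list
    gcloud compute firewall-rules list
    gcloud compute instances list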

  1. Set the Zone environment variable:

    ZONE={{{ project_0.default_zone| "Zone added at lab start" }}}
  2. In Cloud Shell, SSH into the admin workstation with the following commands:

    export PROJECT_ID=$(gcloud config get-value project)
    VM_PREFIX=abm
    VM_WS=$VM_PREFIX-ws
    gcloud compute ssh --ssh-flag="-A" root@$VM_WS \
      --zone ${ZONE} \
      --tunnel-through-iap
  3. If prompted, answer Y, and press ENTER twice for an empty passphrase.

  4. Change directories to the baremetal directory, then initialize environment variables needed for later commands:

    cd baremetal
    export PROJECT_ID=$(gcloud config get-value project)
  5. Set the Zone environment variable:

    CLUSTER_ZONE={{{ project_0.default_zone| "Zone added at lab start" }}}
  6. Configure kubectl to use the generated kubeconfig file that points to your admin cluster:

    export KUBECONFIG=$KUBECONFIG:~/baremetal/bmctl-workspace/abm-admin-cluster/abm-admin-cluster-kubeconfig
  7. Rename your kubectl context to something a little easier to remember:

    kubectx admin=.
  8. Test to make sure you can access and use your admin cluster:

    kubectl get nodes

    You should see results that look like this:

    NAME            STATUS   ROLES                  AGE   VERSION
    abm-admin-cp1   Ready    control-plane,master   12m   v1.23.5-gke.1504
  9. Verify that the admin cluster has been registered with Anthos hub by visiting Navigation > Kubernetes Engine > Clusters.

  10. In Cloud Shell, create a Kubernetes Service Account on your cluster and grant it the cluster-admin role:

    kubectl create serviceaccount -n kube-system admin-user
    kubectl create clusterrolebinding admin-user-binding \
      --clusterrole cluster-admin --serviceaccount kube-system:admin-user
  11. Create a token that you can use to log in to the cluster from the Console:

    kubectl create token admin-user -n kube-system
  12. Select the token in the SSH session (this will copy the token - don't try to copy with CTRL+C).

  13. Find the abm-admin-cluster entry in the cluster list showing in the Console and click the three-dots menu at the far right of the row.

  14. Select Log in, select Token, then paste the token from your Clipboard into the provided field. Click Login.

Note: Now that you can connect from the admin workstation to the admin cluster, you are ready to create your user cluster.

Task 2. Create your user cluster

  1. In Cloud Shell, while SSH'd into the admin workstation, create the config file for the user cluster:

    bmctl create config -c abm-user-cluster-central --project-id=$PROJECT_ID
  2. View the user cluster configuration file that was created above:

    cat bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml

You can edit the configuration file manually, but for the purposes of this lab, you've been provided commands that will edit the file for you.
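
After you run the edit commands in the numbered steps below, the most relevant fields of the file should look roughly like the abridged sketch that follows. Treat it only as an orientation aid: the values come from the sed commands in this task, the nesting is abbreviated, and the generated file contains many more fields than shown here.

    # Abridged, illustrative excerpt of the edited config (not the full file)
    sshPrivateKeyPath: /root/.ssh/id_rsa
    ...
    spec:
      type: user
      controlPlane:
        nodePoolSpec:
          nodes:
          - address: 10.200.0.4
      loadBalancer:
        vips:
          controlPlaneVIP: 10.200.0.99
          ingressVIP: 10.200.0.100
        addressPools:
        - name: pool1
          addresses:
          - 10.200.0.100-10.200.0.200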

  1. Delete the credentials file references from the configuration file:

    tail -n +11 bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml > temp_file && mv temp_file bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml
  2. Add a line to the beginning of the config file that points to the private ssh key:

    sed -i '1 i\sshPrivateKeyPath: /root/.ssh/id_rsa' bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml
  3. Change the cluster type in the config file to user:

    sed -r -i "s|type: hybrid|type: user|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml
  4. Set the IP for the user cluster control plane's VM node:

    sed -r -i "s|- address: <Machine 1 IP>|- address: 10.200.0.4|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml
  5. Set the IP for the user cluster's control plane API server:

    sed -r -i "s|controlPlaneVIP: 10.0.0.8|controlPlaneVIP: 10.200.0.99|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml
  6. Set up the IP for the user cluster's Ingress:

    sed -r -i "s|# ingressVIP: 10.0.0.2|ingressVIP: 10.200.0.100|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml
  7. Configure the IPs that will be associated when K8s LoadBalancer services are created:

    sed -r -i "s|# addressPools:|addressPools:|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml sed -r -i "s|# - name: pool1|- name: pool1|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml sed -r -i "s|# addresses:| addresses:|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml sed -r -i "s|# - 10.0.0.1-10.0.0.4| - 10.200.0.100-10.200.0.200|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml
  8. Enable infrastructure and application logging for your user cluster:

    sed -r -i "s|# disableCloudAuditLogging: false|disableCloudAuditLogging: false|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml sed -r -i "s|# enableApplication: false|enableApplication: true|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml
  9. Set the name for the user cluster's worker pool and the IP for the worker node:

    sed -r -i "s|name: node-pool-1|name: user-cluster-central-pool-1|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml sed -r -i "s|- address: <Machine 2 IP>|- address: 10.200.0.5|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml sed -r -i "s|- address: <Machine 3 IP>|# - address: <Machine 3 IP>|g" bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml
  10. Review the fully configured user cluster configuration file:

    cat bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central.yaml
  11. Create the user cluster; relax, this will take 10 minutes:

    bmctl create cluster -c abm-user-cluster-central --kubeconfig bmctl-workspace/abm-admin-cluster/abm-admin-cluster-kubeconfig

Task 3. Access your user cluster

Make sure that your user cluster is fully created before continuing.
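
If you want to watch progress while the cluster is being created, the optional sketch below shows one way to inspect the user cluster object from the admin workstation. It assumes the cluster-<CLUSTER_NAME> namespace convention that Anthos clusters on bare metal use for cluster resources, and that your current directory is still ~/baremetal:

    # Optional progress check (sketch, not required for the lab)
    kubectl --kubeconfig bmctl-workspace/abm-admin-cluster/abm-admin-cluster-kubeconfig \
      get clusters.baremetal.cluster.gke.io -n cluster-abm-user-cluster-central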

  1. Configure kubectl to speak to the user cluster:

    export KUBECONFIG=$KUBECONFIG:~/baremetal/bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central-kubeconfig
    kubectx abm-user-cluster-central-admin@abm-user-cluster-central
    kubectx user-central=.
  2. Test to make sure you can access and use your user cluster:

    kubectl get nodes

    You should see results that look like this:

    NAME           STATUS   ROLES                  AGE   VERSION
    abm-user-cp1   Ready    control-plane,master   27m   v1.23.5-gke.1504
    abm-user-w1    Ready    <none>                 25m   v1.23.5-gke.1504

    Notice that you have a node for the control plane and another one for your data plane.

  3. Verify that the user cluster has been registered with Anthos hub by visiting Navigation > Kubernetes Engine > Clusters.

  4. In Cloud Shell, create a Kubernetes Service Account on your cluster and grant it the cluster-admin role:

    kubectl create serviceaccount -n kube-system admin-user
    kubectl create clusterrolebinding admin-user-binding \
      --clusterrole cluster-admin --serviceaccount kube-system:admin-user
  5. Get a token that you can use to log in to the cluster from the Console:

    kubectl create token admin-user -n kube-system
  6. Select the token in the SSH session (this will copy the token - don't try to copy with CTRL+C).

  7. Find the abm-user-cluster-central entry in the cluster list showing in the Console and click the three-dots menu at the far right of the row.

  8. Select Log in, select Token, then paste the token from your Clipboard into the provided field. Click Login.

Task 4. Deploy and manage applications in your Anthos user cluster

Deploy an application and expose it via an L4 load balancer Service

  1. In Cloud Shell, create a deployment for the hello-app application:

    kubectl create deployment hello-app --image=gcr.io/google-samples/hello-app:2.0
  2. In the UI, visit Navigation > Kubernetes Engine > Workloads. At the top of the table displaying all the cluster workloads, add a filter for the Cluster to be abm-user-cluster-central. Find the hello-app deployment that you just created.

  3. In Cloud Shell, create a Kubernetes Service of type LoadBalancer to access the app:

    kubectl expose deployment hello-app --name hello-app-service --type LoadBalancer --port 80 --target-port=8080
  4. In the UI, visit Navigation > Kubernetes Engine > Gateways, Services & Ingress. Find the hello-app service that you just created. You can see that it contains an external IP in the range that you configured earlier (10.200.0.100-10.200.0.200) in the user-cluster creation process.

  5. In Cloud Shell, get the services and check that you have the same external IP. Copy the IP for this service, as you need it in the next task.

    kubectl get svc

    NAME                TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE
    hello-app-service   LoadBalancer   10.96.3.48   10.200.0.101   80:32014/TCP   10m
    kubernetes          ClusterIP      10.96.0.1    <none>         443/TCP        3d23h
  6. Access the external IP provided by the hello-app-service:

    curl 10.200.0.101

    Hello, world!
    Version: 2.0.0
    Hostname: hello-app-7c5698d447-qpwwr
Note: This IP is only accessible from VMs deployed in the same VXLAN. If you want internet-routable IPs, you need to provide them in the YAML file used to create the user cluster.

Deploy an application and expose it via an L7 load balancer Ingress

  1. Create a second application deployment:

    kubectl create deployment hello-kubernetes --image=gcr.io/google-samples/node-hello:1.0
  2. Create a Kubernetes Service of type NodePort to access the app. Notice that no external IP is associated (you can confirm this with the command shown after this list):

    kubectl expose deployment hello-kubernetes --name hello-kubernetes-service --type NodePort --port 32123 --target-port=8080
  3. Create a Kubernetes Ingress resource to route traffic between the two services:

cat <<EOF > nginx-l7.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-l7
spec:
  rules:
  - http:
      paths:
      - path: /greet-the-world
        pathType: Exact
        backend:
          service:
            name: hello-app-service
            port:
              number: 80
      - path: /greet-kubernetes
        pathType: Exact
        backend:
          service:
            name: hello-kubernetes-service
            port:
              number: 32123
EOF
kubectl apply -f nginx-l7.yaml
  4. Access the routes exposed by the Ingress to point to the hello-app-service:

    curl 10.200.0.100/greet-the-world

    Hello, world!
    Version: 2.0.0
    Hostname: hello-app-7c5698d447-qpwwr
  5. Access the routes exposed by the Ingress to point to the hello-kubernetes-service:

    curl 10.200.0.100/greet-kubernetes

    Hello Kubernetes!
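
As noted in step 2, the NodePort service has no external IP of its own; if you want to confirm that, an optional check is shown below (the EXTERNAL-IP column should read <none>):

    kubectl get svc hello-kubernetes-service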

Task 5. Deploy a stateful application

Anthos clusters on bare metal are compatible with Container Storage Interface (CSI) v1.0 drivers. CSI is an open-standard API supported by many major storage vendors that enables Kubernetes to expose arbitrary storage systems to containerized workloads.

To use a CSI driver, you need to install the driver and you need to create a Kubernetes StorageClass. You set the CSI driver as the provisioner for the StorageClass. Then, you can set the StorageClass as the cluster's default, or configure your workloads to explicitly use the StorageClass.
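
As a concrete illustration of that last point, a workload opts in to a particular StorageClass by naming it in its PersistentVolumeClaim. The sketch below uses hypothetical names (example-pvc, my-vendor-sc) purely to show the relationship; the lab's real StorageClass and PVC are created later in this task.

    # Minimal sketch with hypothetical names; not one of the lab steps
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: my-vendor-sc   # a StorageClass whose provisioner is a CSI driver
      resources:
        requests:
          storage: 1Gi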

Installing a vendor's CSI driver

Storage vendors develop their own CSI drivers, and they are responsible for providing installation instructions. In simple cases, installation might only involve deploying manifests to your clusters. See the list of CSI drivers in the CSI documentation.

In this lab, you install the Compute Engine Persistent Disk CSI driver, since the Anthos bare metal deployment is running on GCE and needs that driver to communicate with GCE persistent disks. For production storage, we recommend installing a CSI driver from an Anthos Ready storage partner.

  1. Initialize the environment variables used in the installation commands:

    export GOPATH=~/baremetal
    export GCE_PD_SA_NAME=my-gce-pd-csi-sa
    export GCE_PD_SA_DIR=~/baremetal
    export ENABLE_KMS=false
    export PROJECT=$(gcloud config get-value project)
  2. Check to see if any CSI drivers have been installed on the user cluster nodes:

    kubectl get csinodes \
      -o jsonpath='{range .items[*]}{.metadata.name} {.spec.drivers} {"\n"}{end}'

    You should see output that indicates there are no CSI drivers installed.

  3. Clone the driver to your local machine:

    git clone https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver $GOPATH/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver -b release-1.3
  4. Take a look at the setup-project.sh file to understand what setup actions need to be taken with your project:

    cat src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/deploy/setup-project.sh

    Note that the script is creating a service account, downloading the corresponding key file, defining a custom role, and assigning roles to the service account.

  5. Run the project setup script:

    src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/deploy/setup-project.sh
  6. Deploy the CSI driver into your cluster:

    export GCE_PD_SA_DIR=~/baremetal
    export GCE_PD_DRIVER_VERSION=stable
    src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/deploy/kubernetes/deploy-driver.sh
  7. Check the installed CSI drivers to verify that the GCE PD driver has been installed on the user cluster nodes:

    kubectl get csinodes \
      -o jsonpath='{range .items[*]} {.metadata.name}{": "} {range .spec.drivers[*]} {.name}{"\n"} {end}{end}'
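
    The GCE PD driver registers under the name pd.csi.storage.gke.io (the same name used as the provisioner in the StorageClass you create next), so the output should now show something similar to this sketch for each node:

    abm-user-cp1:  pd.csi.storage.gke.io
    abm-user-w1:  pd.csi.storage.gke.io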

Using the installed CSI driver

  1. Create a new StorageClass on your user cluster, referencing your driver in the provisioner field:

cat <<EOF > pd-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-pd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: pd.csi.storage.gke.io # CSI driver
parameters: # You provide vendor-specific parameters to this specification
  type: pd-standard # Be sure to follow the vendor's instructions, in our case pd-ssd, pd-standard, or pd-balanced
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
kubectl apply -f pd-storage-class.yaml
Note: When you request storage with a PersistentVolumeClaim (PVC), you can specify a StorageClass. If you do not specify a StorageClass, the default StorageClass is used if one is configured in the cluster. Anthos clusters on bare metal do not configure a default StorageClass.

As a cluster administrator, you might want to change the default StorageClass so that unspecified requests use the StorageClass of your choice. To accomplish that, notice the annotation that was added when the StorageClass was created above.

storageclass.kubernetes.io/is-default-class: "true"
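
If you later need to inspect or change which StorageClass is the default, the standard Kubernetes approach is to list the classes and patch this annotation. The optional sketch below uses a hypothetical class name (other-sc):

    # List StorageClasses; the default one is marked "(default)"
    kubectl get storageclass

    # Unset the default flag on another class (hypothetical name other-sc)
    kubectl patch storageclass other-sc \
      -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'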
  1. Deploy an application consisting of a PersistentVolumeClaim (PVC) and a pod that uses that PVC. A persistent volume will be provisioned via the new StorageClass and CSI driver.

cat <<EOF > stateful-app.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gce-pd
  resources:
    requests:
      storage: 6Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: /var/lib/www/html
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: podpvc
      readOnly: false
EOF
kubectl apply -f stateful-app.yaml
  2. In the UI, visit Navigation > Kubernetes Engine > Storage. Check the Persistent Volume Claims list, and you should see a new PVC, of storage class gce-pd, called podpvc.

  3. In the UI, visit Navigation > Kubernetes Engine > Workloads. Find the web-server pod that you just created and verify that it is running. (It may take 1-2 minutes for the pod to become fully operational; you can wait and refresh the page to see the results.)

Task 6. Troubleshooting

  1. If you get disconnected from Cloud Shell and want to sign back into the admin workstation:

    gcloud compute ssh --ssh-flag="-A" root@abm-ws \
      --zone us-central1-a \
      --tunnel-through-iap
  2. If you get disconnected from Cloud Shell and want to connect to the admin cluster:

    # From the admin workstation (root@abm-ws)
    export KUBECONFIG=$KUBECONFIG:~/baremetal/bmctl-workspace/abm-admin-cluster/abm-admin-cluster-kubeconfig
    kubectx admin
    kubectl get nodes
  3. If you get disconnected from Cloud Shell and want to connect to the user cluster:

    # From the admin workstation (root@abm-ws)
    export KUBECONFIG=$KUBECONFIG:~/baremetal/bmctl-workspace/abm-user-cluster-central/abm-user-cluster-central-kubeconfig
    kubectx user-central
    kubectl get nodes

Review

In this lab, you used the provisioned bare metal infrastructure to create an Anthos on bare metal user cluster. You also deployed applications to the user cluster, exposing them at L4 through LoadBalancer services backed by the bundled MetalLB load balancer and at L7 through an Ingress resource. In addition, you installed a CSI driver and deployed a stateful workload.

End your lab

When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
