
Automate GKE Configurations with Config Sync and Policy Controller


Lab · 1 hour 30 minutes · 7 Credits · Advanced

Note: This lab may incorporate AI tools to support your learning.

GSP1241

Overview

Google Kubernetes Engine (GKE) Enterprise edition comes with two features to help administrators streamline and automate the GKE Enterprise resource management process:

  • Config Sync is a GitOps-driven service that automates the synchronization of configurations stored in a Git repository with the Kubernetes cluster.

  • Policy Controller checks, audits, and enforces your clusters' compliance with policies related to security, regulations, or business rules.

Using Config Sync and Policy Controller together allows for automated management of Kubernetes cluster configuration and policy enforcement. This integrated approach simplifies cluster management, strengthens security posture, and ensures continuous compliance, allowing you to confidently manage Kubernetes deployments across your fleet.

In this lab, you will use Config Sync to automate configuration and Policy Controller to enforce policies on GKE Enterprise resources. This provides an efficient and secure way to maintain Kubernetes infrastructure.

Objectives

In this lab, you learn how to perform the following tasks:

  • Configure Policy Controller and Config Sync
  • Deploy a sample application on two GKE clusters using Policy Controller and Config Sync
  • Configure Policy Controller constraints on the clusters and view violations
  • Create a custom constraint template and constraint
  • Resolve the constraint violations by creating the required resources on the GKE clusters

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This Qwiklabs hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

What you need

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
  • Time to complete the lab.

Note: If you already have your own personal Google Cloud account or project, do not use it for this lab.

Note: If you are using a Pixelbook, open an Incognito window to run this lab.

How to start your lab and sign in to the Google Cloud Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is a panel populated with the temporary credentials that you must use for this lab.

  2. Copy the username, and then click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Open the tabs in separate windows, side-by-side.

  3. In the Sign in page, paste the username that you copied from the Connection Details panel. Then copy and paste the password.

Important: You must use the credentials from the Connection Details panel. Do not use your Qwiklabs credentials. If you have your own Google Cloud account, do not use it for this lab, to avoid incurring charges to your account.

  4. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Cloud Console opens in this tab.

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

In the Cloud Console, in the top right toolbar, click the Activate Cloud Shell button.

Click Continue.

It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

You can list the active account name with this command:

gcloud auth list

(Output)

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

(Example output)

Credentialed accounts:
 - google1623327_student@qwiklabs.net

You can list the project ID with this command:

gcloud config list project

(Output)

[core]
project = <project_ID>

(Example output)

[core]
project = qwiklabs-gcp-44776a13dea667a6

Task 1. Create GKE clusters and enable the GKE Service Mesh

In this task, you complete some prework to make the subsequent sections easier to work through. This includes setting environment variables, copying the necessary lab files, and creating contexts for both of the GKE clusters.

Enable the required GKE Enterprise APIs

  1. Enable the required APIs:
gcloud services enable \
  --project={{{primary_project.project_id|PROJECT_ID}}} \
  anthos.googleapis.com \
  anthosconfigmanagement.googleapis.com \
  container.googleapis.com \
  stackdriver.googleapis.com \
  monitoring.googleapis.com \
  cloudtrace.googleapis.com \
  logging.googleapis.com \
  meshca.googleapis.com \
  meshtelemetry.googleapis.com \
  meshconfig.googleapis.com \
  multiclustermetering.googleapis.com \
  multiclusteringress.googleapis.com \
  multiclusterservicediscovery.googleapis.com \
  iamcredentials.googleapis.com \
  iam.googleapis.com \
  gkeconnect.googleapis.com \
  gkehub.googleapis.com \
  compute.googleapis.com \
  sourcerepo.googleapis.com \
  osconfig.googleapis.com

gcloud services enable \
  --project={{{primary_project.project_id|PROJECT_ID}}} \
  trafficdirector.googleapis.com \
  networkservices.googleapis.com \
  mesh.googleapis.com \
  cloudresourcemanager.googleapis.com

Create two GKE clusters

  1. Create the first GKE cluster with authorized networks (the --async flag avoids waiting for the cluster to provision):

gcloud container clusters create "gke-cluster-1" \
  --node-locations {{{primary_project.default_zone|ZONE}}} \
  --location {{{primary_project.default_region|REGION}}} \
  --num-nodes "2" --min-nodes "2" --max-nodes "2" \
  --workload-pool "{{{primary_project.project_id|PROJECT_ID}}}.svc.id.goog" \
  --enable-ip-alias \
  --machine-type "e2-standard-4" \
  --node-labels mesh_id=proj-{{{primary_project.startup_script.project_number | PROJECT_NUMBER}}} \
  --labels mesh_id=proj-{{{primary_project.startup_script.project_number | PROJECT_NUMBER}}} \
  --fleet-project={{{primary_project.project_id|PROJECT_ID}}} --async

  2. Create a second cluster named gke-cluster-2:

gcloud container clusters create "gke-cluster-2" \
  --node-locations {{{primary_project.default_zone|ZONE}}} \
  --location {{{primary_project.default_region|REGION}}} \
  --num-nodes "2" --min-nodes "2" --max-nodes "2" \
  --workload-pool "{{{primary_project.project_id|PROJECT_ID}}}.svc.id.goog" \
  --enable-ip-alias \
  --machine-type "e2-standard-4" \
  --node-labels mesh_id=proj-{{{primary_project.startup_script.project_number | PROJECT_NUMBER}}} \
  --labels mesh_id=proj-{{{primary_project.startup_script.project_number | PROJECT_NUMBER}}} \
  --fleet-project={{{primary_project.project_id|PROJECT_ID}}}

Note: It can take up to 10 minutes to provision the GKE clusters.

  3. Verify that both clusters are in the RUNNING state:

gcloud container clusters list

  4. Create a WORKDIR to store all the files associated with this tutorial:

mkdir -p secure-gke && cd secure-gke && export WORKDIR=$(pwd)

Enable the GKE Service Mesh fleet feature

In this section, you install GKE Service Mesh on the two GKE clusters and configure the clusters for cross-cluster service discovery.

For gke-cluster-1

  1. Copy the lab manifests and enable the mesh fleet feature:

gcloud storage cp -r gs://spls/gsp1241/k8s/ ~
gcloud beta container hub mesh enable --project={{{primary_project.project_id|PROJECT_ID}}}

  2. Get the cluster credentials:

gcloud container clusters get-credentials gke-cluster-1 --zone {{{primary_project.default_region|REGION}}}

  3. Verify that the ControlPlaneRevision CRD is established in the cluster:

for NUM in {1..60} ; do
  kubectl get crd | grep controlplanerevisions.mesh.cloud.google.com && break
  sleep 10
done
kubectl wait --for=condition=established crd controlplanerevisions.mesh.cloud.google.com --timeout=10m

The output should be similar to the following:

controlplanerevisions.mesh.cloud.google.com   2024-03-18T16:03:10Z
customresourcedefinition.apiextensions.k8s.io/controlplanerevisions.mesh.cloud.google.com condition met

Note: It can take up to 10 minutes for the CRD to be established.
  4. Apply the mesh_id label:

gcloud container clusters update gke-cluster-1 \
  --project {{{primary_project.project_id|PROJECT_ID}}} \
  --region {{{primary_project.default_region|REGION}}} \
  --update-labels=mesh_id=proj-{{{primary_project.startup_script.project_number | PROJECT_NUMBER}}}

  5. Create the istio-system namespace and apply the control plane CR:

kubectl apply -f ~/k8s/namespace-istio-system.yaml
kubectl apply -f ~/k8s/controlplanerevision-asm-managed.yaml

  6. Verify that the control plane is provisioned:

kubectl wait --for=condition=ProvisioningFinished controlplanerevision asm-managed -n istio-system --timeout 600s

The output should be similar to the following:

controlplanerevision.mesh.cloud.google.com/asm-managed condition met

Note: It can take up to 10 minutes for the condition to be met. Re-run the command if you do not get the expected output.
  7. Create the ASM gateway namespace and apply the ASM gateway:

kubectl apply -f ~/k8s/namespace-asm-gateways.yaml
kubectl apply -f ~/k8s/asm-ingressgateway.yaml

For gke-cluster-2

  1. Get the cluster credentials:

gcloud container clusters get-credentials gke-cluster-2 --zone {{{primary_project.default_region|REGION}}}

  2. Verify that the ControlPlaneRevision CRD is established in the cluster:

for NUM in {1..60} ; do
  kubectl get crd | grep controlplanerevisions.mesh.cloud.google.com && break
  sleep 10
done
kubectl wait --for=condition=established crd controlplanerevisions.mesh.cloud.google.com --timeout=10m

The output should be similar to the following:

controlplanerevisions.mesh.cloud.google.com   2024-03-18T16:03:10Z
customresourcedefinition.apiextensions.k8s.io/controlplanerevisions.mesh.cloud.google.com condition met

Note: It can take up to 10 minutes for the CRD to be established.
  3. Apply the mesh_id label:

gcloud container clusters update gke-cluster-2 \
  --project {{{primary_project.project_id|PROJECT_ID}}} \
  --region {{{primary_project.default_region|REGION}}} \
  --update-labels=mesh_id=proj-{{{primary_project.startup_script.project_number | PROJECT_NUMBER}}}

  4. Create the istio-system namespace and apply the control plane CR:

kubectl apply -f ~/k8s/namespace-istio-system.yaml
kubectl apply -f ~/k8s/controlplanerevision-asm-managed.yaml

  5. Verify that the control plane is provisioned:

kubectl wait --for=condition=ProvisioningFinished controlplanerevision asm-managed -n istio-system --timeout 600s

The output should be similar to the following:

controlplanerevision.mesh.cloud.google.com/asm-managed condition met

Note: It can take up to 10 minutes for the condition to be met. Re-run the command if you do not get the expected output.
  6. Create the ASM gateway namespace and apply the ASM gateway:

kubectl apply -f ~/k8s/namespace-asm-gateways.yaml
kubectl apply -f ~/k8s/asm-ingressgateway.yaml

Prepare the Config Sync Git repository via Configuration Management

  1. Create an IAM policy binding between the Kubernetes service account and the Google service account:

gcloud --project={{{primary_project.project_id|PROJECT_ID}}} iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:{{{primary_project.project_id|PROJECT_ID}}}.svc.id.goog[config-management-system/root-reconciler]" \
  asm-reader-sa@{{{primary_project.project_id|PROJECT_ID}}}.iam.gserviceaccount.com

The Kubernetes service account is not created until you configure Config Sync for the first time. This binding lets the Config Sync Kubernetes service account act as the Google service account.
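To confirm the binding, you can optionally view the IAM policy on the Google service account. A minimal check, assuming the same project placeholders used in the command above:

# Optional: inspect the IAM policy bound to the Google service account.
gcloud iam service-accounts get-iam-policy \
  asm-reader-sa@{{{primary_project.project_id|PROJECT_ID}}}.iam.gserviceaccount.com \
  --project={{{primary_project.project_id|PROJECT_ID}}}

The output should include the roles/iam.workloadIdentityUser binding for the root-reconciler Kubernetes service account.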

  2. Configure your Git client:

git config --global user.email "you@example.com"
git config --global user.name "Your Name"

Click Check my progress to verify the objective. Perform initial set up task

Task 2. Install Config Sync

With Config Sync, you can manage Kubernetes resources by using configuration files stored in a source of truth. Config Sync supports Git repositories, OCI images, and Helm charts as a source of truth. In this task, you enable and configure Config Sync so that it syncs from your root repository.

  1. In the Google Cloud console, go to Kubernetes Engine > Config.

  2. In the Dashboard tabbed page, click Install Config Sync.

  3. In Config Sync, select Manual upgrades.

Leave the rest of the fields with their default values.

  4. In the Available Clusters table, select both clusters and click Install Config Sync.

After a few minutes, go to the Settings tab. You should see Status is enabled for both clusters.

  5. On the Dashboard tabbed page, click Deploy Package.

  6. In the Select clusters for package deployment table, select both clusters, then click Continue.

  7. Leave Package hosted on Git selected, then click Continue.

  8. In the Package name field, enter root-sync.

Leave the Sync type as Cluster scoped sync.

  9. In the Repository URL field, enter the following URL:

https://source.developers.google.com/p/{{{primary_project.project_id|PROJECT_ID}}}/r/acm-repo

  10. In the Branch field, enter main.

  11. In Advanced settings, enter the following values:

Parameter                   Value
Authentication type         Workload Identity
GCP service account email   asm-reader-sa@{{{primary_project.project_id|PROJECT_ID}}}.iam.gserviceaccount.com
Source format               Hierarchy

Leave all other fields with their default values.

  12. Click Deploy Package.

Note: Upon deployment of the package, the Sync status column displays Error. This is expected behavior, because Task 3 involves pushing configuration files to the repository.
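Behind the scenes, deploying the package creates a RootSync resource in the config-management-system namespace of each cluster. The following is a hedged sketch of roughly what that resource looks like with the values entered above; the console may set additional fields:

# Illustrative RootSync sketch; field values mirror the console settings above.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: hierarchy
  git:
    repo: https://source.developers.google.com/p/{{{primary_project.project_id|PROJECT_ID}}}/r/acm-repo
    branch: main
    auth: gcpserviceaccount
    gcpServiceAccountEmail: asm-reader-sa@{{{primary_project.project_id|PROJECT_ID}}}.iam.gserviceaccount.com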
  13. Get cluster credentials for your GKE clusters and rename the contexts:

touch ~/secure-gke/asm-kubeconfig && export KUBECONFIG=~/secure-gke/asm-kubeconfig
gcloud container clusters get-credentials gke-cluster-1 --zone {{{primary_project.default_region|REGION}}}
gcloud container clusters get-credentials gke-cluster-2 --zone {{{primary_project.default_region|REGION}}}
kubectl config rename-context gke_{{{primary_project.project_id|PROJECT_ID}}}_{{{primary_project.default_region|REGION}}}_gke-cluster-1 gke-cluster-1
kubectl config rename-context gke_{{{primary_project.project_id|PROJECT_ID}}}_{{{primary_project.default_region|REGION}}}_gke-cluster-2 gke-cluster-2
kubectl config get-contexts
  14. Download the latest version of the nomos client:

gsutil cp gs://config-management-release/released/latest/linux_amd64/nomos ~/secure-gke/nomos && chmod +x ~/secure-gke/nomos
export NOMOS=~/secure-gke/nomos
$NOMOS version

You may see an error message stating that the cluster cannot be contacted. If so, just rerun the $NOMOS version command, as this is usually a timing issue.

The output should be similar to the following:

CURRENT   CLUSTER_CONTEXT_NAME   COMPONENT           VERSION
                                                     v1.17.2-rc.1
          gke-cluster-1          config-management   v1.17.2-rc.1
*         gke-cluster-2          config-management   v1.17.2-rc.1
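The nomos client can also summarize per-cluster sync health. A minimal check, using the binary downloaded above (output varies depending on sync state):

# Show Config Sync status for the clusters in the current kubeconfig.
$NOMOS status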

Click Check my progress to verify the objective. Install Config Sync

Task 3. Deploy an app via Config Sync

In this task, you use Config Sync to deploy an application. Note that most customers have their own preferred CI/CD tools for deploying applications; Config Sync is recommended and commonly used for GitOps-driven configuration management of Kubernetes resources. In the following tasks, you learn how to use Config Sync to deploy Kubernetes resources and policies.

  1. Deploy the Cymbal Bank application (it is called "Bank of Anthos" in the repository at the moment):

gcloud source repos clone acm-repo --project={{{primary_project.project_id|PROJECT_ID}}}
gcloud storage cp -r gs://spls/gsp1241/acm-repo/ ~/secure-gke/
cd ~/secure-gke/acm-repo/
  2. Push the code to the main branch:

git checkout -b main
git add .
git status
git commit -am "Cymbal Bank application deployment"
git push -u origin main

Note: It can take up to 2 minutes to create a namespace in a cluster. If you get a "namespaces not found" error, re-run the service account commands below.
  3. Create a service account for the application namespaces in each cluster (an equivalent loop form is shown after this list).

  • For cluster gke-cluster-1:

gcloud container clusters get-credentials gke-cluster-1 --zone {{{primary_project.default_region|REGION}}} --project {{{primary_project.project_id|PROJECT_ID}}}
kubectl create serviceaccount bank-of-anthos --namespace balance-reader
kubectl create serviceaccount bank-of-anthos --namespace contacts
kubectl create serviceaccount bank-of-anthos --namespace frontend
kubectl create serviceaccount bank-of-anthos --namespace ledger-writer
kubectl create serviceaccount bank-of-anthos --namespace transaction-history
kubectl create serviceaccount bank-of-anthos --namespace userservice

  • For cluster gke-cluster-2:

gcloud container clusters get-credentials gke-cluster-2 --zone {{{primary_project.default_region|REGION}}} --project {{{primary_project.project_id|PROJECT_ID}}}
kubectl create serviceaccount bank-of-anthos --namespace balance-reader
kubectl create serviceaccount bank-of-anthos --namespace contacts
kubectl create serviceaccount bank-of-anthos --namespace frontend
kubectl create serviceaccount bank-of-anthos --namespace ledger-writer
kubectl create serviceaccount bank-of-anthos --namespace transaction-history
kubectl create serviceaccount bank-of-anthos --namespace userservice
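If you prefer, the repeated kubectl create serviceaccount commands can be written as a short loop. A minimal sketch, to be run once per cluster after fetching that cluster's credentials, assuming the same namespaces and service account name as above:

# Loop form of the per-namespace service account creation commands.
for ns in balance-reader contacts frontend ledger-writer transaction-history userservice; do
  kubectl create serviceaccount bank-of-anthos --namespace "$ns"
done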
  4. In the console, go to the Config Sync Dashboard.

  5. Click the Packages tabbed page.

In the root-sync field, you should see Sync status and Reconcile status are Synced and Current.

Note: It can take 2 to 5 minutes for the status to sync.

  6. Access the application by browsing to the ASM ingress gateway external IP address of each cluster:

export ASM_INGRESS_IP_CLUSTER_1=$(kubectl --context=gke-cluster-1 -n asm-gateways get svc asm-ingressgateway -ojsonpath='{.status.loadBalancer.ingress[].ip}')
echo -e "ASM_INGRESS_IP_CLUSTER_1 is ${ASM_INGRESS_IP_CLUSTER_1}"
export ASM_INGRESS_IP_CLUSTER_2=$(kubectl --context=gke-cluster-2 -n asm-gateways get svc asm-ingressgateway -ojsonpath='{.status.loadBalancer.ingress[].ip}')
echo -e "ASM_INGRESS_IP_CLUSTER_2 is ${ASM_INGRESS_IP_CLUSTER_2}"

  7. Test each of these IPs in your web browser; you should see the login screen.

Now that Config Sync is synced to a repository, it continuously reconciles the state of your clusters with the configs in the repository.
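For example, if you delete a resource that Config Sync manages, it is recreated from the repository. A minimal, optional sketch of that behavior, assuming the frontend Deployment is named frontend (illustrative only):

# Optional drift-correction check: delete a managed Deployment and watch Config Sync restore it.
kubectl --context=gke-cluster-1 -n frontend delete deployment frontend
kubectl --context=gke-cluster-1 -n frontend get deployment frontend --watch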

Click Check my progress to verify the objective. Deploy an app via Config Sync

Task 4. Install Policy Controller in the fleet

In this task, you install Policy Controller. Policy Controller checks, audits, and enforces your clusters' compliance with policies related to security, regulations, or business rules.

  1. In the console, go to the Kubernetes Engine > Policy page under the Posture Management section.

  2. Click Configure policy controller.

  3. Click Configure, then Confirm.

  4. In Settings, click Sync with fleet settings.

  5. Select both clusters.

  6. Click Sync to fleet settings, then Confirm.

On the Policy Controller Settings tab, you will find that Policy Controller is installed and configured on your clusters. This can take several minutes.

Note: The Installed status can take several minutes to display.

Click Check my progress to verify the objective. Install policy controller in the fleet

Task 5. Deploying policies in dry-run mode

Although you used Config Sync to deploy a sample application to your GKE clusters, the application is currently running in an insecure manner: there is no encryption (or authentication) configured between the services, no NetworkPolicies between namespaces, and no authorization policies.

In this task, you deploy policies on the GKE clusters in dry-run mode to see how they could help to increase your clusters' security posture. Applying constraints in dry-run mode causes Policy Controller to report violations in each constraint's status.violations field rather than blocking the offending resources.

Configuring Strict mTLS

Strict mTLS: Achieve end-to-end encryption between all services in your cluster by requiring mesh-wide strict mTLS using ASM.

This constraint requires that ASM PeerAuthentication policies specify STRICT mutual TLS for peers.
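For reference, a Policy Controller constraint in dry-run mode follows the standard Gatekeeper constraint shape. A hedged sketch of roughly what the policy-strict-constraint you copy in the next step might look like (the actual constraint-gke-1-mtls-strict.yaml file may differ):

# Illustrative sketch only; the real constraint file may include match rules and parameters.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: PolicyStrictOnly
metadata:
  name: policy-strict-constraint
spec:
  enforcementAction: dryrun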

  1. Create the constraint and commit it to the ACM repo:

mkdir ~/secure-gke/acm-repo/cluster
gcloud storage cp -r gs://spls/gsp1241/cluster/constraint-gke-1-mtls-strict.yaml ~/secure-gke/acm-repo/cluster
cd ~/secure-gke/acm-repo/
git add . && git commit -am "constraint strict mtls dry-run on cluster 2"
git push -u origin main
  2. Navigate to the Config Dashboard and click the Packages tabbed page to see the Sync status.

  3. Click the Refresh button a few times to check for the latest sync status.

In the root-sync field, you should see the Sync status and Reconcile status are Synced and Current.

  4. Once the clusters have synchronized, inspect the violation:

kubectl config use-context gke-cluster-2
kubectl --context=gke-cluster-2 get policystrictonly policy-strict-constraint -ojsonpath='{.status.violations}' | jq

The output should be similar to the following:

[ { "enforcementAction": "dryrun", "group": "security.istio.io", "kind": "PeerAuthentication", "message": "spec.mtls.mode must be set to `STRICT`", "name": "default", "namespace": "istio-system", "version": "v1beta1" } ]

As mentioned previously, since the policy is in dry-run mode, it provides the violation in the resource status.
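To see every constraint on a cluster at once, you can also query the constraints category that Gatekeeper adds to constraint CRDs. A minimal example (assumes Policy Controller is installed on the cluster):

# List all Policy Controller constraints on the cluster.
kubectl --context=gke-cluster-2 get constraints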

Configuring Destination Rule TLS

Destination Rule mTLS: Achieve enforcement of per-Service encryption by enforcing ASM Destination Rules to have STRICT mTLS configured.

This constraint prohibits disabling TLS for all hosts and host subsets in Istio DestinationRules.
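A Gatekeeper constraint template such as this one bundles a CRD definition with Rego policy logic. The following is a simplified, hypothetical sketch in the same spirit as constrainttemplate-destinationruletlsenabledbeta.yaml; the real template in the lab bucket is more thorough (for example, it also checks host subsets), and the names here are illustrative only:

# Hypothetical, simplified ConstraintTemplate sketch; not the lab's actual template.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: destinationruletlsenabledexample
spec:
  crd:
    spec:
      names:
        kind: DestinationRuleTLSEnabledExample
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package destinationruletlsenabledexample

        violation[{"msg": msg}] {
          # Flag DestinationRules that explicitly disable TLS.
          input.review.object.spec.trafficPolicy.tls.mode == "DISABLE"
          msg := sprintf("spec.trafficPolicy.tls.mode == DISABLE for host: %v",
                         [input.review.object.spec.host])
        }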

  1. Update the constraint template for the new API and deploy the constraint with the new constraint template. Additionally, deploy a destination rule that disables TLS, which will be a violation of the new constraint:

gcloud storage cp gs://spls/gsp1241/cluster/constraint-gke-1-mtls-destinationrule.yaml gs://spls/gsp1241/cluster/constrainttemplate-destinationruletlsenabledbeta.yaml ~/secure-gke/acm-repo/cluster
cd ~/secure-gke/acm-repo/
git add . && git commit -am "constraint destinationrule enabled dry-run on cluster 1"
git push -u origin main
  2. Go to the Config Dashboard and click the Packages tabbed page to view the Sync status.

  3. Click the Refresh button a few times to check for the latest sync status.

In the root-sync field, you should see the Sync status and Reconcile status are Synced and Current.

You may see a temporary error stating that no CustomResourceDefinition is defined for the type DestinationRuleTLSEnabledBeta.constraints.gatekeeper.sh.

This error should resolve itself after a few minutes.

  4. Inspect the DestinationRule in the frontend namespace:

kubectl config use-context gke-cluster-1
kubectl --context=gke-cluster-1 -n frontend get destinationrule destrule-mtls-disable -ojsonpath={.spec} | jq

When you deployed Cymbal Bank, you deployed a DestinationRule with mTLS disabled.

The output should be similar to the following:

{ "host": "frontend", "trafficPolicy": { "loadBalancer": { "simple": "LEAST_CONN" }, "tls": { "mode": "DISABLE" } } }

Note that TLS mode is set to DISABLE. This should trigger a violation of the policy you just implemented.

  5. View the violation:

kubectl --context=gke-cluster-1 get destinationruletlsenabledbeta destinationrule-mtls-enabled -ojsonpath='{.status.violations}' | jq

The output should be similar to the following:

[ { "enforcementAction": "dryrun", "group": "networking.istio.io", "kind": "DestinationRule", "message": "spec.trafficPolicy.tls.mode == DISABLE for host(s): frontend", "name": "destrule-mtls-disable", "namespace": "frontend", "version": "v1beta1" } ]

Click Check my progress to verify the objective. Deploying policies in Dry Run mode

Task 6. Resolving the policy controller violations

There are currently two policy constraints on both clusters. These are as follows:

  1. Strict mTLS: Achieve end-to-end encryption between all services in your cluster by requiring mesh-wide strict mTLS using ASM.

  2. Destination Rule mTLS: Achieve enforcement of per-Service encryption by enforcing ASM Destination Rules to have STRICT mTLS configured.

In this section, you resolve all the violations.

Enabling end-to-end mTLS

  1. Inspect the violation:
kubectl --context=gke-cluster-2 get policystrictonly policy-strict-constraint -ojsonpath='{.status.violations}' | jq

The output should be similar to the following:

[ { "enforcementAction": "dryrun", "group": "security.istio.io", "kind": "PeerAuthentication", "message": "spec.mtls.mode must be set to `STRICT`", "name": "default", "namespace": "istio-system", "version": "v1beta1" } ]
  2. Resolve the violation by enabling a STRICT mTLS PeerAuthentication resource in the istio-system namespace:

rm -rf ~/secure-gke/acm-repo/namespaces/asm/istio-system/peerauthentication-mtls-disable.yaml
gcloud storage cp gs://spls/gsp1241/cluster/peerauthentication-mtls-strict.yaml ~/secure-gke/acm-repo/namespaces/asm/istio-system/
cd ~/secure-gke/acm-repo/
git add . && git commit -am "enable STRICT mTLS mesh wide"
git push -u origin main
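For reference, a mesh-wide STRICT PeerAuthentication resource typically looks like the following sketch; the actual peerauthentication-mtls-strict.yaml file from the lab bucket may differ in details:

# Illustrative sketch of a mesh-wide STRICT PeerAuthentication.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT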
  3. Go to the Config Dashboard and click the Packages tabbed page to see the Sync status.

  4. Click the Refresh button a few times to check for the latest sync status.

In the root-sync field, you should see that the Sync status and Reconcile status are Synced and Current. Verify that the PeerAuthentication resource now uses STRICT mTLS:

kubectl --context=gke-cluster-2 -n istio-system get peerauthentication default -ojsonpath='{.spec}' | jq

The output should be similar to the following:

{ "mtls": { "mode": "STRICT" } }

The mode has changed from DISABLE to STRICT.

  5. Inspect the constraint for violations:
kubectl --context=gke-cluster-2 get policystrictonly policy-strict-constraint -ojsonpath='{.status.violations}' | jq

The output is empty. This means this constraint is no longer in violation.

Note: It can take a few minutes for changes to reflect.

Enforcing per-Service encryption

  1. Inspect the violation:
kubectl --context=gke-cluster-1 get destinationruletlsenabledbeta destinationrule-mtls-enabled -ojsonpath='{.status.violations}' | jq

The output should be similar to the following:

[ { "enforcementAction": "dryrun", "group": "networking.istio.io", "kind": "DestinationRule", "message": "spec.trafficPolicy.tls.mode == DISABLE for host(s): frontend", "name": "destrule-mtls-disable", "namespace": "frontend", "version": "v1beta1" } ]
  2. Resolve the violation by enabling the STRICT mTLS DestinationRule resource in the frontend namespace:

rm -rf ~/secure-gke/acm-repo/namespaces/bank-of-anthos/frontend/destinationrule-mtls-disabled.yaml
gcloud storage cp gs://spls/gsp1241/cluster/destinationrule-mtls-istio-mutual.yaml ~/secure-gke/acm-repo/namespaces/bank-of-anthos/frontend/
cd ~/secure-gke/acm-repo/
git add . && git commit -am "enable ISTIO_MUTUAL TLS for frontend Service"
git push -u origin main
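For reference, the replacement DestinationRule typically looks like the following sketch, matching the spec you verify below; the actual destinationrule-mtls-istio-mutual.yaml file may differ in details:

# Illustrative sketch of the ISTIO_MUTUAL DestinationRule for the frontend Service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: destrule-mtls-istio-mutual
  namespace: frontend
spec:
  host: frontend
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
    tls:
      mode: ISTIO_MUTUAL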
  3. Go to the Config Dashboard and click the Packages tabbed page to see the Sync status.

  4. Click the Refresh button a few times to check for the latest sync status.

In the root-sync field, you should see that the Sync status and Reconcile status are Synced and Current. Verify the updated DestinationRule:

kubectl --context=gke-cluster-1 -n frontend get destinationrule destrule-mtls-istio-mutual -ojsonpath='{.spec}' | jq

The output should be similar to the following:

{ "host": "frontend", "trafficPolicy": { "loadBalancer": { "simple": "LEAST_CONN" }, "tls": { "mode": "ISTIO_MUTUAL" } } }

You may initially see an error stating that the DestinationRule destrule-mtls-istio-mutual was not found. This error will resolve once the repo syncs to the cluster.

  5. Inspect the constraint for violations:
kubectl --context=gke-cluster-1 get destinationruletlsenabledbeta destinationrule-mtls-enabled -ojsonpath='{.status.violations}' | jq

The output is empty, meaning this constraint is no longer in violation.

Note: It can take a couple of minutes for changes to reflect.

Click Check my progress to verify the objective. Resolving the policy controller violations

Congratulations!

In this lab, you learned how to automate configuration with Config Sync and enforce policies with Policy Controller.

Manual Last Updated February 07, 2025

Lab Last Tested February 07, 2025

Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
