As a cloud architect, you know that storage is one of the most important resources in any organization. Every day, thousands of new files and documents are generated, modified, and accessed in your company. You establish a successful disaster recovery plan to store backups and create a redundant architecture, including the following:
Secure, scalable, and highly available object-level storage
Granular access control
Versioning
Lifecycle management capabilities
Direct synchronization between on-prem and cloud directories
In Amazon Web Services (AWS), you use Simple Storage Service (S3) as your object storage solution.
For granular access control, you use a combination of bucket policies, Identity and Access Management (IAM) policies, and access control lists (ACLs) to manage who has access to an entire bucket and individual objects. Inside an S3 bucket, objects can also be encrypted with AWS-managed or client-managed encryption keys, providing an extra layer of security.
You set up versioning to avoid accidental deletion and overwriting of important files. You can also optimize costs by setting up lifecycle policies that automatically move objects from one storage class to another based on access patterns. For directory synchronization, you can mirror locations so that changes in the source are reflected in the destination, allowing you to replicate your data.
Now you will explore the various Cloud Storage features to securely store your data on Google Cloud using both the Cloud console and the gsutil tool.
Overview
Cloud Storage is a fundamental resource in Google Cloud, with many advanced features. In this lab, you exercise many Cloud Storage features that could be useful in your designs. You explore Cloud Storage using both the console and the gsutil tool.
Objectives
In this lab, you learn how to perform the following tasks:
Create and use buckets
Set access control lists to restrict access
Use your own encryption keys
Implement version controls
Use directory synchronization
Qwiklabs setup
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method.
On the left is the Lab Details panel with the following:
The Open Google Cloud console button
Time remaining
The temporary credentials that you must use for this lab
Other information, if needed, to step through this lab
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
If necessary, copy the Username below and paste it into the Sign in dialog.
{{{user_0.username | "Username"}}}
You can also find the Username in the Lab Details panel.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
{{{user_0.password | "Password"}}}
You can also find the Password in the Lab Details panel.
Click Next.
Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
Click through the subsequent pages:
Accept the terms and conditions.
Do not add recovery options or two-factor authentication (because this is a temporary account).
Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Note: To view a menu with a list of Google Cloud products and services, click the Navigation menu at the top-left, or type the service or product name in the Search field.
Task 1. Preparation
In this task, you create a Cloud Storage bucket. You then download a sample file that you use in the next task.
Create a Cloud Storage bucket
In the Google Cloud console, in the Navigation menu, click Cloud Storage > Buckets.
Note: A bucket must have a globally unique name. You could use part of your PROJECT_ID_1 in the name to help make it unique. For example, if the PROJECT_ID_1 is myproj-154920, your bucket name might be storecore154920.
Click Create.
Specify the following, and leave the remaining settings as their defaults:
Property: Value (type value or select option as specified)
Name: Enter a globally unique name
Location type: Region
Region:
Enforce public access prevention on this bucket: unchecked
Access control: Fine-grained (object-level permission in addition to your bucket-level permissions)
Make a note of the bucket name. It will be used later in this lab and referred to as [BUCKET_NAME_1].
Click Create.
Click Check my progress to verify the objective.
Create a Cloud Storage bucket
Download a sample file using CURL and make two copies
In the Cloud console, click Activate Cloud Shell.
If prompted, click Continue.
Store [BUCKET_NAME_1] in an environment variable:
export BUCKET_NAME_1=<enter bucket name 1 here>
Verify it with echo:
echo $BUCKET_NAME_1
Run the following command to download a sample file (this sample file is a publicly available Hadoop documentation HTML file):
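The download command itself is not included in this excerpt. A minimal sketch, assuming the sample file is the publicly available Hadoop ClusterSetup.html page (the exact URL is an assumption) and that the two working copies are needed for the later encryption steps:

```shell
# Download the sample file (URL is an assumption based on the Hadoop
# documentation reference above); fall back to a placeholder if offline.
curl -fsS -o setup.html \
  https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html \
  || printf '<html>placeholder</html>\n' > setup.html

# Make the two copies used in the encryption tasks.
cp setup.html setup2.html
cp setup.html setup3.html
```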
Copy the value of the generated key from the command output, excluding the b' prefix and the \n' suffix. The key should be in the form tmxElCaabWvJqR7uXEWQF39DhWTcDvChzuCmpHe6sb0=.
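The key-generation command is not shown above. One common way to produce a base64-encoded 256-bit key in exactly that b'...\n' form is the following Python one-liner (an assumption; your lab may supply a different command):

```shell
# Generate 32 random bytes and base64-encode them; printing the bytes
# object produces the b'...\n' wrapper that you strip when copying.
python3 -c 'import base64, os; print(base64.encodebytes(os.urandom(32)))'
```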
Modify the boto file
The encryption controls are contained in a gsutil configuration file named .boto.
To view and open the boto file, run the following commands:
ls -al
nano .boto
Note: If the .boto file is empty, close the nano editor with Ctrl+X, generate a new .boto file using the gsutil config -n command, and then try opening the file again with the above commands.
If the .boto file is still empty, you might have to locate it using the gsutil version -l command.
Locate the line with "#encryption_key=".
Note: The bottom of the nano editor lists shortcuts for navigating files quickly. Use the Where Is shortcut to locate the line with #encryption_key=.
Uncomment the line by removing the # character, and paste the key you generated earlier after the = sign.
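After the edit, the relevant line in .boto looks roughly like this (the key value shown is a placeholder in the same format as yours):

```
[GSUtil]
encryption_key=tmxElCaabWvJqR7uXEWQF39DhWTcDvChzuCmpHe6sb0=
```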
Click [BUCKET_NAME_1]. Both the setup2.html and setup3.html files show that they are customer-encrypted.
Click Check my progress to verify the objective.
Customer-supplied encryption keys (CSEK)
Delete local files, copy new files, and verify encryption
To delete your local files, run the following command in Cloud Shell:
rm setup*
To copy the files from the bucket again, run the following command:
gsutil cp gs://$BUCKET_NAME_1/setup* ./
To print the files and verify that they were decrypted on download, run the following commands:
cat setup.html
cat setup2.html
cat setup3.html
Task 4. Rotate CSEK keys
In this task, you rotate the CSEK used to encrypt data in Cloud Storage, ensuring continued data security.
Move the current CSEK encrypt key to decrypt key
Run the following command to open the .boto file:
nano .boto
Comment out the current encryption_key line by adding the # character to the beginning of the line.
Note: The bottom of the nano editor lists shortcuts for navigating files quickly. Use the Where Is shortcut to locate the line with encryption_key=.
Uncomment decryption_key1 by removing the # character, and copy the current key from the encryption_key line to the decryption_key1 line.
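At this point, the key section of .boto would look roughly like this (key values are placeholders); the old key now serves only for decryption:

```
[GSUtil]
# encryption_key=tmxElCaabWvJqR7uXEWQF39DhWTcDvChzuCmpHe6sb0=
decryption_key1=tmxElCaabWvJqR7uXEWQF39DhWTcDvChzuCmpHe6sb0=
```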
Copy the value of the newly generated key from the command output, excluding the b' prefix and the \n' suffix. The key should be in the form tmxElCaabWvJqR7uXEWQF39DhWTcDvChzuCmpHe6sb0=.
To open the boto file, run the following command:
nano .boto
Uncomment the encryption_key line and paste the new key value after encryption_key=.
Press Ctrl+O, ENTER to save the boto file, and then press Ctrl+X to exit nano.
Rewrite the key for file 1 and comment out the old decrypt key
When a file is encrypted, rewriting it decrypts the file using the decryption_key1 that you previously set and re-encrypts it with the new encryption_key.
You are rewriting the key for setup2.html, but not for setup3.html, so that you can see what happens if you don't rotate the keys properly.
Run the following command:
gsutil rewrite -k gs://$BUCKET_NAME_1/setup2.html
To open the boto file, run the following command:
nano .boto
Comment out the current decryption_key1 line by adding the # character back in.
To download setup3.html, run the following command:
gsutil cp gs://$BUCKET_NAME_1/setup3.html recover3.html
Note: What happened? setup3.html was not rewritten with the new key, so it can no longer be decrypted with the current .boto configuration, and the copy fails. You have successfully rotated the CSEK keys.
Task 5. Enable lifecycle management
In this task, you enable lifecycle management for a Cloud Storage bucket to automate the deletion of objects after a specified period.
View the current lifecycle policy for the bucket
Run the following command to view the current lifecycle policy:
gsutil lifecycle get gs://$BUCKET_NAME_1
Note: There is no lifecycle configuration yet. You create one in the next steps.
Create a JSON lifecycle policy file
To create a file named life.json, run the following command:
nano life.json
Paste the following value into the life.json file:
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 31}
    }
  ]
}
Note: This policy tells Cloud Storage to delete objects 31 days after their creation.
Press Ctrl+O, ENTER to save the file, and then press Ctrl+X to exit nano.
Set the policy and verify
To set the policy, run the following command:
gsutil lifecycle set life.json gs://$BUCKET_NAME_1
To verify the policy, run the following command:
gsutil lifecycle get gs://$BUCKET_NAME_1
Click Check my progress to verify the objective.
Enable lifecycle management
Task 6. Enable versioning
In this task, you enable versioning for a Cloud Storage bucket to protect data from accidental deletion or modification.
View the versioning status for the bucket and enable versioning
Run the following command to view the current versioning status for the bucket:
gsutil versioning get gs://$BUCKET_NAME_1
Note: Suspended means that versioning is not enabled.
To enable versioning, run the following command:
gsutil versioning set on gs://$BUCKET_NAME_1
To verify that versioning was enabled, run the following command:
gsutil versioning get gs://$BUCKET_NAME_1
Click Check my progress to verify the objective.
Enable versioning
Create several versions of the sample file in the bucket
Check the size of the sample file:
ls -al setup.html
Open the setup.html file:
nano setup.html
Delete any 5 lines from setup.html to change the size of the file.
Press Ctrl+O, ENTER to save the file, and then press Ctrl+X to exit nano.
Copy the file to the bucket with the -v versioning option:
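The upload command is not shown in this excerpt. A sketch using gsutil's -v option, which prints the version-specific URL of the uploaded object (run in Cloud Shell against your own bucket):

```shell
# Upload the modified file; -v prints its version-specific URL.
gsutil cp -v setup.html gs://$BUCKET_NAME_1/
```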
To list all versions of the file, run the following command:
gcloud storage ls -a gs://$BUCKET_NAME_1/setup.html
Highlight and copy the name of the oldest version of the file (the first listed), referred to as [VERSION_NAME] in the next step.
Note: Make sure to copy the full path of the file, starting with gs://.
Store the version value in the environment variable VERSION_NAME:
export VERSION_NAME=<Enter VERSION name here>
Verify it with echo:
echo $VERSION_NAME
Result (this is example output):
gs://BUCKET_NAME_1/setup.html#1584457872853517
Download the oldest, original version of the file and verify recovery
Download the original version of the file:
gcloud storage cp $VERSION_NAME recovered.txt
To verify recovery, run the following commands:
ls -al setup.html
ls -al recovered.txt
Note: You have recovered the original file from the backup version. Notice that the original is bigger than the current version because you deleted lines from the current one.
Task 7. Synchronize a directory to a bucket
In this task, you synchronize a local directory and its subdirectories with a Cloud Storage bucket using the gsutil rsync command.
Make a nested directory and sync with a bucket
Make a nested directory structure so that you can examine what happens when it is recursively copied to a bucket.
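The commands for this step are not included in this excerpt. A sketch, assuming a two-level tree seeded with copies of setup.html (the directory names match the folders you inspect in the console below):

```shell
# Build a small nested tree and seed it with copies of the sample file.
mkdir -p firstlevel/secondlevel
cp setup.html firstlevel/
cp setup.html firstlevel/secondlevel/

# Recursively synchronize the local tree with the bucket (run in Cloud Shell).
gsutil rsync -r ./firstlevel gs://$BUCKET_NAME_1/firstlevel
```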
In the Google Cloud console, in the Navigation menu, click Cloud Storage > Buckets.
Click [BUCKET_NAME_1]. Notice the subfolders in the bucket.
Click on /firstlevel and then on /secondlevel.
Compare what you see in the Cloud console with the results of the following command:
gcloud storage ls -r gs://$BUCKET_NAME_1/firstlevel
Exit Cloud Shell:
exit
Task 8. Review
In this lab you learned to create and work with buckets and objects, and you learned about the following features for Cloud Storage:
CSEK: Customer-supplied encryption key
Use your own encryption keys
Rotate keys
ACL: Access control list
Set an ACL for private, and modify to public
Lifecycle management
Set policy to delete objects after 31 days
Versioning
Create a version and restore a previous version
Directory synchronization
Recursively synchronize a VM directory with a bucket
Summary
Both S3 and Cloud Storage provide object-level storage with access control management, encryption, versioning, and lifecycle management capabilities. Let’s take a look at some of the similarities and differences between both services.
Similarities:
Both Cloud Storage and S3 provide Access Control Lists (ACLs) for granular access control.
Both services offer storage classes that optimize cost based on data access frequency and redundancy requirements.
Both services offer object lifecycle management to automatically move objects from one storage class to another.
Both services offer versioning to protect files against accidental deletion and overwrites.
Both services offer directory synchronization to keep objects up to date and aligned between the source location and the target bucket.
Differences:
In Google Cloud, you can enable CSEK encryption for gsutil operations by modifying the .boto configuration file to include your customer-supplied encryption key. In AWS, you can enable bucket encryption through the AWS CLI by using the put-bucket-encryption command and specifying the key in the --server-side-encryption-configuration parameter.
In Google Cloud, all storage classes are managed by a single service, whereas in AWS, archival storage classes are managed separately by S3 Glacier, and the Standard and Infrequent Access classes are managed by S3.
End your lab
When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
1 star = Very dissatisfied
2 stars = Dissatisfied
3 stars = Neutral
4 stars = Satisfied
5 stars = Very satisfied
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.