Before you begin
- Labs create a Google Cloud project and resources for a fixed time
- Labs have a time limit and no pause feature. If you restart it, you'll have to start from the beginning.
- On the top left of your screen, click Start lab to begin
- Prepare the source database for migration.
- Create a Database Migration Service connection profile.
- Create and start a continuous migration job.
- Confirm the data in Cloud SQL for PostgreSQL.
- Promote Cloud SQL to be a stand-alone instance for reading and writing data.
Database Migration Service provides options for one-time and continuous jobs to migrate data to Cloud SQL using different connectivity options, including IP allowlists, VPC peering, and reverse SSH tunnels (see documentation on connectivity options at https://cloud.google.com/database-migration/docs/postgresql/configure-connectivity).
In this lab, you migrate a stand-alone PostgreSQL database (running on a virtual machine) to Cloud SQL for PostgreSQL using a continuous Database Migration Service job and VPC peering for connectivity.
Migrating a database via Database Migration Service requires some preparation of the source database, including creating a dedicated user with replication rights, adding the pglogical database extension to the source database, and granting rights to the schemata and tables in the databases to be migrated, as well as to the postgres database, to that user.
After you create and run the migration job, you confirm that an initial copy of your database has been successfully migrated to your Cloud SQL for PostgreSQL instance. You also explore how continuous migration jobs apply data updates from your source database to your Cloud SQL instance. To conclude the migration job, you promote the Cloud SQL instance to be a stand-alone database for reading and writing data.
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method. On the left is the Lab Details pane with the following:
Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
If necessary, copy the Username below and paste it into the Sign in dialog.
You can also find the Username in the Lab Details pane.
Click Next.
Copy the Password below and paste it into the Welcome dialog.
You can also find the Password in the Lab Details pane.
Click Next.
Click through the subsequent pages:
After a few moments, the Google Cloud console opens in this tab.
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
Click Activate Cloud Shell at the top of the Google Cloud console.
Click through the following windows:
When you are connected, you are already authenticated, and the project is set to your Project_ID.
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
For more information about gcloud in Google Cloud, refer to the gcloud CLI overview guide.
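For example, you can confirm the active account and project from Cloud Shell with standard gcloud commands (the output reflects your temporary lab credentials):

```shell
# List the active (authenticated) account.
gcloud auth list

# Show the project ID configured for this Cloud Shell session.
gcloud config list project
```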
This page will either show status information or give you the option to enable the API.
The Service Networking API is required to configure Cloud SQL to support VPC peering and connections over a private IP address.
This page will either show status information or give you the option to enable the API.
In this task you will add supporting features to the source database, which are required for Database Migration Service to perform a migration. These are:
- Installing the pglogical database extension on the postgres, orders, and gmemegen_db databases on the stand-alone server.
- Creating a migration_admin user (with Replication permissions) for database migration, and granting the required permissions to schemata and relations to that user.

In this step you will download and add the pglogical database extension to the orders and postgres databases on the postgresql-vm VM instance.
In the Google Cloud console, on the Navigation menu, click Compute Engine > VM instances.
In the entry for postgresql-vm, under Connect, click SSH.
If prompted, click Authorize.
In the terminal in the new browser window, install the pglogical database extension:
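A minimal sketch of the installation, assuming PostgreSQL 13 on a Debian-based VM (the package name varies with the PostgreSQL version installed on your instance):

```shell
# Install the pglogical extension package for PostgreSQL 13 (assumed version).
sudo apt-get update
sudo apt-get install -y postgresql-13-pglogical
```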
pglogical is a logical replication system implemented entirely as a PostgreSQL extension. Fully integrated, it requires no triggers or external programs. This alternative to physical replication is a highly efficient method of replicating data using a publish/subscribe model for selective replication. Read more here: https://github.com/2ndQuadrant/pglogical
In pg_hba.conf, these commands added a rule to allow access from all hosts:
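The appended rule looks like the following sketch (file path assumes PostgreSQL 13 on Debian; md5 authentication assumed):

```
# Appended to /etc/postgresql/13/main/pg_hba.conf:
# TYPE  DATABASE  USER  ADDRESS     METHOD
host    all       all   0.0.0.0/0   md5
```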
In postgresql.conf, these commands set the minimal configuration for pglogical and configure it to listen on all addresses:
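A sketch of the appended settings, based on pglogical's documented minimum configuration (exact values may differ in your lab):

```
# Appended to /etc/postgresql/13/main/postgresql.conf:
wal_level = logical                      # required for logical replication
max_worker_processes = 10                # pglogical background workers
max_replication_slots = 10
max_wal_senders = 10
shared_preload_libraries = 'pglogical'
listen_addresses = '*'                   # listen on all addresses
```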
The above code snippets were appended to the relevant files and the PostgreSQL service was restarted.
Next, add the pglogical database extension to the postgres, orders, and gmemegen_db databases.
Here you can see, besides the default PostgreSQL databases, the orders and gmemegen_db databases provided for this lab. You will not use the gmemegen_db database in this lab, but will include it in the migration for use in a later lab.
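The extension can be added to each database from the VM's terminal; a sketch, assuming the local postgres superuser can run psql:

```shell
# Create the pglogical extension in each database to be migrated.
for db in postgres orders gmemegen_db; do
  sudo -u postgres psql -d "$db" -c "CREATE EXTENSION IF NOT EXISTS pglogical;"
done
# Tip: \l at a psql prompt lists all databases, including orders and gmemegen_db.
```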
In this step you will create a dedicated user for managing database migration.
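A sketch of the user creation, using the migration_admin name and the DMS_1s_cool! password referenced later in this lab (run on the source VM; the ALTER DATABASE step is an assumption about the lab flow):

```shell
# Create the migration user with replication rights.
sudo -u postgres psql -d postgres <<'SQL'
CREATE USER migration_admin PASSWORD 'DMS_1s_cool!';
ALTER DATABASE orders OWNER TO migration_admin;
ALTER ROLE migration_admin WITH REPLICATION;
SQL
```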
In this step you will assign the necessary permissions to the migration_admin user to enable Database Migration Service to migrate your databases:
- Grant permissions to the pglogical schema and tables for the postgres database.
- Grant permissions to the pglogical schema and tables for the orders database.
- Grant permissions to the public schema and tables for the orders database.
- Grant permissions to the pglogical schema and tables for the gmemegen_db database.
- Grant permissions to the public schema and tables for the gmemegen_db database.

The source databases are now prepared for migration. The permissions you have granted to the migration_admin user are all that is required for Database Migration Service to migrate the postgres, orders, and gmemegen_db databases.
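A sketch of the grant pattern, shown for the orders database and repeated for postgres and gmemegen_db (the lab may grant additional, more specific privileges):

```shell
# Grant schema and table access in the orders database (repeat per database).
sudo -u postgres psql -d orders <<'SQL'
GRANT USAGE ON SCHEMA pglogical TO migration_admin;
GRANT SELECT ON ALL TABLES IN SCHEMA pglogical TO migration_admin;
GRANT USAGE ON SCHEMA public TO migration_admin;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO migration_admin;
SQL
```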
Make the migration_admin user the owner of the tables in the orders database, so that you can edit the source data later, when you test the migration.
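A sketch of the ownership change, shown for the distribution_centers table queried later in this lab (repeat for each table in the orders database):

```shell
# Transfer table ownership so migration_admin can modify source data.
sudo -u postgres psql -d orders <<'SQL'
ALTER TABLE public.distribution_centers OWNER TO migration_admin;
SQL
```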
Click Check my progress to verify the objective.
In this task, you will create a connection profile for the PostgreSQL source instance.
In this step, you identify the internal IP address of the source database instance that you will migrate to Cloud SQL.
In the Google Cloud Console, on the Navigation menu, click Compute Engine > VM instances.
Locate the line with the instance called postgresql-vm.
Copy the value for Internal IP (e.g., 10.128.0.2).
A connection profile stores information about the source database instance (e.g., stand-alone PostgreSQL) and is used by the Database Migration Service to migrate data from the source to your destination Cloud SQL database instance. After you create a connection profile, it can be reused across migration jobs.
In this step you will create a new connection profile for the PostgreSQL source instance.
In the Google Cloud Console, on the Navigation menu, click VIEW ALL PRODUCTS. Under the Databases section, click Database Migration > Connection profiles.
Click + Create Profile.
For Profile Role, select Source.
For Database engine, select PostgreSQL.
For Connection profile name, enter postgres-vm.
For Region, select the region assigned to your lab.
Under Define connection configurations, click DEFINE.
For Hostname or IP address, enter the internal IP for the PostgreSQL source instance that you copied in the previous task (e.g., 10.128.0.2)
For Port, enter 5432.
For Username, enter migration_admin.
For Password, enter DMS_1s_cool!
For all other values leave the defaults.
Click Create.
A new connection profile named postgres-vm will appear in the Connections profile list.
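The gcloud CLI also supports creating connection profiles; a hypothetical equivalent of the console steps above, with REGION and HOST_IP as placeholders for your lab's region and the VM's internal IP (verify the exact flags against current gcloud documentation):

```shell
# Sketch: create a PostgreSQL source connection profile from the CLI.
gcloud database-migration connection-profiles create postgresql postgres-vm \
  --region=REGION \
  --host=HOST_IP \
  --port=5432 \
  --username=migration_admin \
  --password='DMS_1s_cool!'
```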
Click Check my progress to verify the objective.
When you create a new migration job, you first define the source database instance using a previously created connection profile. Then you create a new destination database instance and configure connectivity between the source and destination instances.
In this task, you use the migration job interface to create a new Cloud SQL for PostgreSQL database instance and set it as the destination for the continuous migration job from the PostgreSQL source instance.
In this step you will create a new continuous migration job.
In the Google Cloud Console, on the Navigation menu, click VIEW ALL PRODUCTS. Under the Databases section, click Database Migration > Migration jobs.
Click + Create Migration Job.
For Migration job name, enter vm-to-cloudsql.
For Source database engine, select PostgreSQL.
For Destination region, select the region assigned to your lab.
For Destination database engine, select Cloud SQL for PostgreSQL.
For Migration job type, select Continuous.
Leave the defaults for the other settings.
In this step, you will define the source instance for the migration.
Leave the defaults for the other settings.
In this step, you will create the destination instance for the migration.
For Destination Instance ID, enter postgresql-cloudsql.
For Password, enter supersecret!.
For Choose a Cloud SQL edition, select Enterprise edition.
For Database version, select Cloud SQL for PostgreSQL 13.
In the Choose region and zone section, select Single zone and select the zone assigned to your lab.
For Instance connectivity, select Private IP and Public IP.
Select Use an automatically allocated IP range.
Leave the defaults for the other settings.
Note: This step may take a few minutes. If asked to retry the request, click the Retry button to refresh the Service Networking API.
When this step is complete, an updated message notifies you that the instance will use the existing managed service connection.
You will need to edit the pg_hba.conf file on the VM instance to allow access to the IP range that is automatically generated in point 5 of the previous step. You will do this in a later step before testing the migration configuration at the end of this task.
Enter the additional information needed to create the destination instance on Cloud SQL.
For Machine shapes, check 1 vCPU, 3.75 GB.
For Storage type, select SSD
For Storage capacity, select 10 GB
Click Create & Continue.
If prompted to confirm, click Create Destination & Continue. A message will state that your destination database instance is being created. Continue to the next step while you wait.
In this step, you will define the connectivity method for the migration.
For Connectivity method, select VPC peering.
For VPC, select default.
VPC peering is configured by Database Migration Service using the information provided for the VPC network (the default network in this example).
When you see an updated message that the destination instance was created, proceed to the next step.
In this step you will edit the pg_hba.conf
PostgreSQL configuration file to allow the Database Migration Service to access the stand-alone PostgreSQL database.
Get the allocated IP address range. In the Google Cloud Console, on the Navigation menu, right-click VPC network > VPC network peering and open it in a new tab.
Click on the servicenetworking-googleapis-com
entry and then click on Effective Routes View at the bottom.
From the Network dropdown, select default; for Region, select the region assigned to your lab.
In the Destination IP range column, copy the IP range (e.g., 10.107.176.0/24) next to the peering-route-xxxxx... route.
In the Terminal session on the VM instance, edit the pg_hba.conf file as follows:
Replace the "all IP addresses" range (0.0.0.0/0) with the range copied in step 3 above.
Save and exit the nano editor with Ctrl-O, Enter, Ctrl-X.
Restart the PostgreSQL service to make the changes take effect. In the VM instance Terminal session:
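A sketch of the edit and restart, assuming PostgreSQL 13's default file and service names on a Debian-based VM:

```shell
# Open the client authentication file; replace 0.0.0.0/0 with the copied
# allocated range, e.g. 10.107.176.0/24 (path assumed for PostgreSQL 13).
sudo nano /etc/postgresql/13/main/pg_hba.conf

# Restart PostgreSQL so the change takes effect (assumed service name).
sudo systemctl restart postgresql@13-main
```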
In this step, you will test and start the migration job.
In the Database Migration Service tab you open earlier, review the details of the migration job.
Click Test Job.
After a successful test, click Create & Start Job.
If prompted to confirm, click Create & Start.
In this step, you will confirm that the continuous migration job is running.
In the Google Cloud Console, on the Navigation menu, click Database Migration > Migration jobs.
Click the migration job vm-to-cloudsql to see the details page.
Review the migration job status.
When the job status changes to Running CDC in progress, proceed to the next task.
Click Check my progress to verify the objective.
In the Google Cloud Console, on the Navigation menu, click SQL.
Expand the instance ID called postgresql-cloudsql-master.
Click on the instance postgresql-cloudsql (PostgreSQL read replica).
In the Replica Instance menu, click Databases.
Notice that the databases called postgres, orders and gmemegen_db have been migrated to Cloud SQL.
In the Replica Instance menu, click Overview.
Scroll down to the Connect to this instance section and click Open Cloud Shell.
The command to connect to PostgreSQL will pre-populate in Cloud Shell:
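The pre-populated command typically looks like the following (instance and user names from this lab; the exact flags may differ):

```shell
# Connect to the destination Cloud SQL instance from Cloud Shell.
gcloud sql connect postgresql-cloudsql --user=postgres --quiet
```

When prompted, enter the supersecret! password you set for the destination instance.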
If prompted, click Authorize for the API.
You have now activated the PostgreSQL interactive console for the destination instance. Connect to the orders database and query the distribution_centers table; the output lists the rows migrated from the source.
To see continuous migration in action, add a new row to the distribution_centers table on the stand-alone orders database, then query the table again in the destination console. Note that the new row added on the stand-alone orders database is now present on the migrated database.
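A sketch of a test update on the source, using the distribution_centers columns from the theLook sample schema (the column names and values shown are assumptions for illustration):

```shell
# On the source VM, insert a test row into the stand-alone orders database.
sudo -u postgres psql -d orders <<'SQL'
INSERT INTO distribution_centers (id, name, latitude, longitude)
VALUES (11, 'Test Center', 0.0, 0.0);
SQL
```

Re-running SELECT * FROM distribution_centers; on the destination should then show the new row once CDC has applied it.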
Click Check my progress to verify the objective.
In the Google Cloud Console, on the Navigation menu, click VIEW ALL PRODUCTS. Under the Databases section, click Database Migration > Migration jobs.
Click the migration job name vm-to-cloudsql to see the details page.
Click Promote.
If prompted to confirm, click Promote.
When the promotion is complete, the status of the job will update to Completed.
Note that postgresql-cloudsql is now a stand-alone instance for reading and writing data.
Click Check my progress to verify the objective.
You have learned how to configure a continuous Database Migration Service job to migrate databases from a PostgreSQL instance to Cloud SQL for PostgreSQL.
...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.
Manual Last Updated January 28, 2024
Lab Last Tested January 28, 2024
Copyright 2025 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.