Running a Dedicated Ethereum RPC Node in Google Cloud

Lab · 1 hour 30 minutes · 5 credits · Intermediate
Note: This lab may incorporate AI tools to support your learning.

GSP1116


Overview

Hosting your own blockchain nodes may be required for security, compliance, performance, or privacy, and a decentralized, resilient, and sustainable network is a critical foundation for any blockchain protocol. Web3 developers can use Google Cloud's Blockchain Node Engine, a fully managed node-hosting solution for Web3 development, or organizations can configure and manage their own nodes in Google Cloud. As a trusted partner for Web3 infrastructure, Google Cloud offers secure, reliable, and scalable node-hosting infrastructure. To learn more about hosting nodes on Google Cloud, see the blog post Introducing Blockchain Node Engine: fully managed node-hosting for Web3 development.

To learn more about the technical considerations and architectural decisions involved in deploying self-managed blockchain nodes to the cloud, see the blog post Google Cloud for Web3.

The dedicated node hosting diagram, which includes the blockchain network and your dApps.

In this lab, you create a virtual machine (VM) to deploy an Ethereum RPC node. An Ethereum RPC node receives blockchain updates from the network and processes RPC API requests. You use an e2-standard-4 machine type that includes a 20-GB boot disk, 4 virtual CPUs (vCPUs), and 16 GB of RAM. To ensure there is enough room for the blockchain data, you attach a 200-GB SSD disk to the instance. You use Ubuntu 20.04 and deploy two services: Geth, the "execution layer," and Lighthouse, the "consensus layer." These two services work together to form an Ethereum RPC node.

Objectives

In this lab, you learn how to perform the following tasks:

  • Create a Compute Engine instance with a persistent disk.
  • Configure a static IP address and network firewall rules.
  • Schedule regular backups.
  • Deploy Geth, the execution layer for Ethereum.
  • Deploy Lighthouse, the consensus layer for Ethereum.
  • Make Ethereum RPC calls.
  • Configure Cloud Logging.
  • Configure Cloud Monitoring.
  • Configure uptime checks.

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents conflicts between your personal account and the Student account, which could incur extra charges to your personal account.
  • Time to complete the lab. Remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Cloud console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).

    The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username below and paste it into the Sign in dialog.

    {{{user_0.username | "Username"}}}

    You can also find the Username in the Lab Details panel.

  4. Click Next.

  5. Copy the Password below and paste it into the Welcome dialog.

    {{{user_0.password | "Password"}}}

    You can also find the Password in the Lab Details panel.

  6. Click Next.

    Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  7. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Google Cloud console opens in this tab.

Note: To view a menu with a list of Google Cloud products and services, click the Navigation menu at the top-left.

In this lab, you use the following tools:

  • Ubuntu 20.04
  • Geth
  • Lighthouse
  • curl
  • gcloud CLI

Task 1. Create infrastructure for the Virtual Machine

Create a public static IP address, a firewall rule, a service account, a snapshot schedule, and a virtual machine that uses the new IP address. This is the infrastructure that the Ethereum node is deployed to.

Create a public static IP address

In this section, you set up the public IP address used for the virtual machine.

  1. From the Navigation menu, under the VPC Network section, click IP Addresses.
  2. Click on RESERVE EXTERNAL STATIC IP ADDRESS in the action bar to create the static IP address.
  3. For the static address configuration, use the following:
Property             | Value (type or select)
Name                 | eth-mainnet-rpc-ip
Network Service Tier | Premium
IP version           | IPv4
Type                 | Regional
Region               |
Attached to          | None
The static address configuration page, which includes the aforementioned fields.
  4. Click RESERVE.
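
For reference, a roughly equivalent gcloud command is sketched below; REGION is a placeholder, and the console flow above is what the lab validates:

# Reserve a regional external static IPv4 address in the Premium tier
gcloud compute addresses create eth-mainnet-rpc-ip \
  --region=REGION \
  --network-tier=PREMIUM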

Create a firewall rule

Create firewall rules so that the VM can communicate on the designated ports.

Geth P2P communicates on TCP and UDP on port 30303. Lighthouse P2P communicates on TCP and UDP on port 9000. Geth RPC uses TCP 8545.

  1. From the Navigation menu, under the VPC Network section, click Firewall.
  2. Click on CREATE FIREWALL RULE in the action bar to create the firewall rules.
  3. For the firewall configuration, use the following:
Property                      | Value (type or select)
Name                          | eth-rpc-node-fw
Logs                          | Off
Network                       | default
Priority                      | 1000
Direction                     | Ingress
Action on match               | Allow
Targets                       | Specified target tags
Target tags                   | eth-rpc-node (press Enter after typing the value)
Source filter                 | IPv4 ranges
Source IPv4 ranges            | 0.0.0.0/0 (press Enter after typing the value)
Specified protocols and ports | TCP: 30303, 9000, 8545; UDP: 30303, 9000
The Create a firewall rule page, which includes the aforementioned fields
  4. Click CREATE.
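
For reference, a hedged gcloud equivalent of this rule:

# Allow Geth/Lighthouse P2P and Geth RPC from anywhere,
# but only to instances tagged eth-rpc-node
gcloud compute firewall-rules create eth-rpc-node-fw \
  --network=default \
  --priority=1000 \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:30303,tcp:9000,tcp:8545,udp:30303,udp:9000 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=eth-rpc-node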

Create a service account

Create a service account for the VM to operate under.

  1. From the Navigation Menu, under the IAM & Admin section, click Service Accounts.
  2. Click on CREATE SERVICE ACCOUNT in the action bar to create the service account.
  3. For the service account configuration, use the following:
Property             | Value (type or select)
Service account name | eth-rpc-node-sa
Service account ID   | eth-rpc-node-sa
The Create service account page, which includes the aforementioned fields
  4. Click CREATE AND CONTINUE.
  5. Add the following roles:
Property | Value (type or select)
Roles    | Compute OS Login, Service Controller, Logs Writer, Monitoring Metric Writer, Cloud Trace Agent, Compute Network User
The Grant this service account access to project page, with the aforementioned roles displayed.
  6. Click CONTINUE.
  7. Click DONE.
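
A CLI sketch of the same setup follows; PROJECT_ID is a placeholder, and the role IDs are the usual equivalents of the console role names (verify them in your project):

gcloud iam service-accounts create eth-rpc-node-sa

# Bind the roles from the table above (IDs assumed to match the console names)
for ROLE in roles/compute.osLogin \
            roles/servicemanagement.serviceController \
            roles/logging.logWriter \
            roles/monitoring.metricWriter \
            roles/cloudtrace.agent \
            roles/compute.networkUser; do
  gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:eth-rpc-node-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="$ROLE"
done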

Create a snapshot schedule

In this section, you set up the snapshot schedule used for the virtual machine's attached disk, which contains the blockchain data. This schedule backs up the chain data regularly.

  1. From the Navigation Menu, under the Compute Engine section, click Snapshots.
  2. Click CREATE SNAPSHOT SCHEDULE to create the snapshot schedule.
  3. For the snapshot schedule, use the following:
Property         | Value (type or select)
Name             | eth-mainnet-rpc-node-disk-snapshot
Region           |
Location         | Regional
Schedule options | Schedule frequency: Daily; Start time (UTC): 6:00 PM - 7:00 PM; Autodelete snapshots after: 7 days
Deletion rule    | Keep snapshots
  4. Click CREATE.
The Create a snapshot schedule page, which includes the aforementioned fields
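
A hedged gcloud sketch of an equivalent schedule (REGION is a placeholder; 18:00 UTC corresponds to the start window above):

gcloud compute resource-policies create snapshot-schedule eth-mainnet-rpc-node-disk-snapshot \
  --region=REGION \
  --daily-schedule \
  --start-time=18:00 \
  --max-retention-days=7 \
  --on-source-disk-delete=keep-auto-snapshots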

Create a Virtual Machine

In this section, you set up the virtual machine used for the Ethereum deployment.

  1. From the Navigation Menu, under the Compute Engine section, click VM Instances.

  2. Click on Create Instance to create the VM.

  3. In the Machine configuration section, enter values for the following fields:

     Property     | Value (type or select)
     Name         | eth-mainnet-rpc-node
     Region       |
     Zone         |
     Series       | E2
     Machine type | e2-standard-4
  4. Click OS and Storage.

     Click Change and select the following values:

     Property         | Value (type or select)
     Operating system | Ubuntu
     Version          | Ubuntu 20.04 LTS (x86/64)
     Boot disk type   | SSD
     Size             | 50 GB

     Click SAVE.

     In Additional storage and VM backups, click Add New Disk:

     Property          | Value (type or select)
     Name              | eth-mainnet-rpc-node-disk
     Disk source type  | Blank disk
     Disk type         | SSD
     Size              | 200 GB (alternatively, 2,000 GB for larger installations)
     Snapshot schedule | eth-mainnet-rpc-node-disk-snapshot

     Click Save.
  5. Click Networking.

     Property     | Value (type or select)
     Network tags | eth-rpc-node (press Enter after typing the value; matches the firewall setting)

     • In Network interfaces, click default.

     • Under Network interface card, select gVNIC.

     • For External IPv4 address, select eth-mainnet-rpc-ip (the static IP address created earlier).

       Click DONE.

  6. Click Security.

     • Under Identity and API access, select the service account eth-rpc-node-sa.

     • Click CREATE.
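
For reference, a hedged single-command sketch of this VM (ZONE and PROJECT_ID are placeholders; the gVNIC selection and attaching the snapshot schedule to the new disk are console steps not captured by these flags):

gcloud compute instances create eth-mainnet-rpc-node \
  --zone=ZONE \
  --machine-type=e2-standard-4 \
  --image-family=ubuntu-2004-lts \
  --image-project=ubuntu-os-cloud \
  --boot-disk-type=pd-ssd \
  --boot-disk-size=50GB \
  --create-disk=name=eth-mainnet-rpc-node-disk,size=200GB,type=pd-ssd \
  --tags=eth-rpc-node \
  --address=eth-mainnet-rpc-ip \
  --service-account=eth-rpc-node-sa@PROJECT_ID.iam.gserviceaccount.com \
  --scopes=cloud-platform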

Click Check my progress to verify the objective.

Create Infrastructure for the Virtual Machine

Task 2. Setup and Installation on the Virtual Machine

Now, SSH into the VM and run the commands to install the software.

SSH into the VM

  1. From the navigation menu, under the Compute Engine section, click VM Instances.
  2. On the same row as eth-mainnet-rpc-node, click SSH to open an SSH window.
  3. If prompted to Allow SSH-in-browser to connect to VMs, click Authorize.

Create a Swap File on the VM

To give the processes extra memory headroom, you'll create a swap file. Swap space lets the operating system move less-used memory pages to disk when RAM runs low, effectively increasing the memory available to the VM.

  1. To create a 25 GB swap file, execute the following command:
sudo dd if=/dev/zero of=/swapfile bs=1MiB count=25KiB

Note that this command takes a little time to execute.

  2. Update the permissions on the swap file:
sudo chmod 0600 /swapfile
  3. Designate the file to be used as swap space:
sudo mkswap /swapfile
  4. Add the swap file configuration to /etc/fstab so that the swap space is recognized after a reboot:
echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
  5. Enable the swap file:
sudo swapon -a
  6. Confirm the swap has been recognized:
free -g

You should see a message with a line similar to this:

Output:

              total        used        free      shared  buff/cache   available
Mem:             15           0           0           0          15          15
Swap:            24           0          24

Mount the attached disk on the VM

During the VM setup, you created an attached disk, but the VM does not automatically recognize it. The disk must be formatted and mounted before it can be used.

  1. View the attached disk. You should see an entry for sdb with a size of 200 GB:
sudo lsblk
  2. Format the attached disk:
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
  3. Create the folder and mount the attached disk:
sudo mkdir -p /mnt/disks/chaindata-disk
sudo mount -o discard,defaults /dev/sdb /mnt/disks/chaindata-disk
  4. Update the permissions for the folder so processes can read and write to it:
sudo chmod a+w /mnt/disks/chaindata-disk
  5. Retrieve the disk ID of the mounted drive to confirm that the drive was mounted:
sudo blkid /dev/sdb

You should see a message similar to the one displayed in the output box below:

Output:

/dev/sdb: UUID="7fa9c421-0054-4555-b0ca-b470a97a3d84" TYPE="ext4"
  6. Retrieve the disk ID of the mounted disk and append it to the /etc/fstab file. This entry ensures that the drive is remounted if the VM restarts:
export DISK_UUID=$(findmnt -n -o UUID /dev/sdb)
echo "UUID=$DISK_UUID /mnt/disks/chaindata-disk ext4 discard,defaults,nofail 0 2" | sudo tee -a /etc/fstab
  7. Run the df command to confirm that the disk has been mounted and formatted and that the correct size has been allocated:
df -h

You should see a message with a line similar to this, which shows the new mounted volume and the size:

Output:

/dev/sdb 196G 28K 196G 1% /mnt/disks/chaindata-disk

If you need to resize the disk later, follow these instructions.

Click Check my progress to verify the objective.

Create a Swap File on the VM and Mount the Attached Disk on the VM

Create a user on the VM

Create a user to run the processes under.

  1. To create a user named ethereum, execute the following commands:
sudo useradd -m ethereum
sudo usermod -aG sudo ethereum
sudo usermod -aG google-sudoers ethereum
  2. Switch to the ethereum user:
sudo su ethereum
  3. Start the bash command line:
bash
  4. Change to the ethereum user's home folder:
cd ~

Install the Ethereum software

  1. Update the operating system packages:
sudo apt update -y
sudo apt-get update -y
  2. Install common software:
sudo apt install -y dstat jq
  3. Install the Google Cloud Ops Agent:
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install
  4. Remove the script file that was downloaded:
rm add-google-cloud-ops-agent-repo.sh
  5. Create folders for the logs and chaindata for the Geth and Lighthouse clients (a one-line alternative follows these commands):
mkdir /mnt/disks/chaindata-disk/ethereum/
mkdir /mnt/disks/chaindata-disk/ethereum/geth
mkdir /mnt/disks/chaindata-disk/ethereum/geth/chaindata
mkdir /mnt/disks/chaindata-disk/ethereum/geth/logs
mkdir /mnt/disks/chaindata-disk/ethereum/lighthouse
mkdir /mnt/disks/chaindata-disk/ethereum/lighthouse/chaindata
mkdir /mnt/disks/chaindata-disk/ethereum/lighthouse/logs
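
The same directory tree can be created in one command with bash brace expansion:

mkdir -p /mnt/disks/chaindata-disk/ethereum/{geth,lighthouse}/{chaindata,logs}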
  6. Install Geth from the package manager:
sudo add-apt-repository -y ppa:ethereum/ethereum
sudo apt-get -y install ethereum
  7. Confirm that Geth is available and is the latest version:
geth version

You should see a message with a line similar to this:

Output:

Geth
Version: 1.14.11-stable
Git Commit: ea9e62ca3db5c33aa7438ebf39c189afd53c6bf8
Architecture: amd64
Go Version: go1.23.1
Operating System: linux
GOPATH=
GOROOT=
  8. Download the Lighthouse client. This script downloads the latest release from GitHub:
# Fetch the latest release information from the GitHub API
RELEASE_URL="https://api.github.com/repos/sigp/lighthouse/releases/latest"
LATEST_VERSION=$(curl -s $RELEASE_URL | jq -r '.tag_name')

# Download the latest release using curl
DOWNLOAD_URL=$(curl -s $RELEASE_URL | jq -r '.assets[] | select(.name | endswith("x86_64-unknown-linux-gnu.tar.gz")) | .browser_download_url')
curl -L "$DOWNLOAD_URL" -o "lighthouse-${LATEST_VERSION}-x86_64-unknown-linux-gnu.tar.gz"
  9. Extract the Lighthouse tar file and remove it:
# Extract the tar file
tar -xvf "lighthouse-${LATEST_VERSION}-x86_64-unknown-linux-gnu.tar.gz"

# Remove the tar file
rm "lighthouse-${LATEST_VERSION}-x86_64-unknown-linux-gnu.tar.gz"
  10. Move the lighthouse binary to the /usr/bin folder:
sudo mv lighthouse /usr/bin
  11. Confirm that Lighthouse is available and is the latest version:
lighthouse --version

You should see a message with a line similar to this; note that the version number might be different:

Output:

Lighthouse v5.3.0-d6ba8c3
BLS library: blst-portable
BLS hardware acceleration: true
SHA256 hardware acceleration: false
Allocator: jemalloc
Profile: maxperf
Specs: mainnet (true), minimal (false), gnosis (true)
  12. Create the shared JWT secret. This secret restricts who can call the execution client's authenticated RPC endpoint:
cd ~
mkdir ~/.secret
openssl rand -hex 32 > ~/.secret/jwtsecret
chmod 440 ~/.secret/jwtsecret
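
Optionally, sanity-check the secret file (not a lab step): openssl rand -hex 32 writes 64 hex characters plus a trailing newline, so the file should contain 65 bytes:

wc -c ~/.secret/jwtsecret   # expect 65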

Click Check my progress to verify the objective.

Create a User on the VM and Install the Ethereum software

Task 3. Start the Ethereum Execution and Consensus Clients

Ethereum requires two clients: Geth, the execution layer, and Lighthouse, the consensus layer. They run in parallel and work together. Geth establishes an authrpc endpoint and port that Lighthouse calls. This endpoint is protected by a shared security token (the JWT secret) saved locally; Lighthouse connects to Geth using the execution endpoint and that token.

For information on how Geth connects to the consensus client, read the Connecting to Consensus Clients documentation. For more information on how Lighthouse connects to the execution client, take a look at the Merge Migration section of the Lighthouse Book.

Start Geth

The following starts the Geth execution client.

  1. First, authenticate in gcloud. Inside the SSH session, run:
gcloud auth login

Press ENTER when you see the prompt Do you want to continue (Y/n)?

  2. Navigate to the link displayed in a new tab.

  3. Click on your active username (), and click Allow.

  4. When you see the prompt Enter the following verification code in gcloud CLI on the machine you want to log into, click the copy button, return to the SSH session, and paste the code at the prompt Enter authorization code:.

  5. Set the external IP address environment variables:

export CHAIN=eth
export NETWORK=mainnet
export EXT_IP_ADDRESS_NAME=$CHAIN-$NETWORK-rpc-ip
export EXT_IP_ADDRESS=$(gcloud compute addresses list --filter=$EXT_IP_ADDRESS_NAME --format="value(address_range())")
  6. Run the following command to start Geth as a background process. In this lab, you use the "snap" sync mode, which syncs quickly by downloading recent state snapshots from peers; to execute every block from genesis instead, use "full" as the sync mode. You can run this at the command line, or save it to a .sh file first and then run it. You can also configure it to run as a service with systemd, as sketched after the command.
nohup geth --datadir "/mnt/disks/chaindata-disk/ethereum/geth/chaindata" \
  --http \
  --http.addr 0.0.0.0 \
  --http.port 8545 \
  --http.corsdomain "*" \
  --http.api admin,debug,web3,eth,txpool,net \
  --http.vhosts "*" \
  --gcmode full \
  --cache 2048 \
  --mainnet \
  --metrics \
  --metrics.addr 127.0.0.1 \
  --syncmode snap \
  --authrpc.vhosts="localhost" \
  --authrpc.port 8551 \
  --authrpc.jwtsecret=/home/ethereum/.secret/jwtsecret \
  --txpool.accountslots 32 \
  --txpool.globalslots 8192 \
  --txpool.accountqueue 128 \
  --txpool.globalqueue 2048 \
  --nat extip:$EXT_IP_ADDRESS \
  &> "/mnt/disks/chaindata-disk/ethereum/geth/logs/geth.log" &
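
The systemd option mentioned above could look roughly like the following sketch. The unit name and the abbreviated ExecStart are illustrative assumptions, not part of the lab; carry over the full flag set from the nohup command and drop the nohup/&> redirection, since systemd manages the process and its logs:

sudo tee /etc/systemd/system/geth.service > /dev/null << 'EOF'
[Unit]
Description=Geth execution client (sketch)
After=network-online.target
Wants=network-online.target

[Service]
User=ethereum
# Abbreviated: reuse the full flag set from the command above.
ExecStart=/usr/bin/geth --datadir /mnt/disks/chaindata-disk/ethereum/geth/chaindata --syncmode snap --mainnet
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now geth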

Click Check my progress to verify the objective.

Start Geth as a background process and use the snap sync mode
  7. To see the process ID, run this command:
ps -A | grep geth
  8. Check the logs to see if the process started correctly:
tail -f /mnt/disks/chaindata-disk/ethereum/geth/logs/geth.log

You should see a message similar to the one displayed in the output box below. The Geth client won't continue until it pairs with a consensus client.

Output:

Looking for peers    peercount=1 tried=27 static=0
Post-merge network, but no beacon client seen. Please launch one to follow the chain!
  9. Enter Ctrl+C to break out of the log monitoring.

Start Lighthouse

Now, you'll start the Lighthouse consensus client.

  1. Run the following command to launch Lighthouse as a background process. You can run this at the command line, or save it to a .sh file first and then run it. You can also configure it to run as a service with systemd.
nohup lighthouse bn \
  --network mainnet \
  --http \
  --metrics \
  --datadir /mnt/disks/chaindata-disk/ethereum/lighthouse/chaindata \
  --execution-jwt /home/ethereum/.secret/jwtsecret \
  --execution-endpoint http://localhost:8551 \
  --checkpoint-sync-url https://sync-mainnet.beaconcha.in \
  --disable-deposit-contract-sync \
  &> "/mnt/disks/chaindata-disk/ethereum/lighthouse/logs/lighthouse.log" &
  2. To see the process ID, run the following command:
ps -A | grep lighthouse
  3. Check the log file to see if the process started correctly. Entries may take a few minutes to show up:
tail -f /mnt/disks/chaindata-disk/ethereum/lighthouse/logs/lighthouse.log

You should see a message similar to the following:

Output:

INFO Syncing
INFO Synced
INFO New block received
  4. Enter Ctrl+C to break out of the log monitoring.

  5. Check the Geth log again to confirm that the logs are being generated correctly:

tail -f /mnt/disks/chaindata-disk/ethereum/geth/logs/geth.log

You should see a message similar to the one displayed in the output box below.

Output:

Syncing beacon headers

Verify node has been synced with the blockchain

Determine whether the node is still syncing; a full sync takes some time. (You don't need to wait for the sync to finish to complete the lab.) There are two ways to check the sync status: the Geth console and an RPC call.

  1. Run the following command to attach to the Geth console:
geth attach /mnt/disks/chaindata-disk/ethereum/geth/chaindata/geth.ipc
  2. At the Geth console, check whether the node is still syncing. An output of false means the node is synced with the network:
eth.syncing

You should see something similar to the following:

Output:

# If not synced:
{
  currentBlock: 5186007,
  healedBytecodeBytes: 0,
  healedBytecodes: 0,
  healedTrienodeBytes: 0,
  healedTrienodes: 0,
  healingBytecode: 0,
  healingTrienodes: 0,
  highestBlock: 16193909,
  startingBlock: 0,
  syncedAccountBytes: 2338698797,
  syncedAccounts: 9417189,
  syncedBytecodeBytes: 302598044,
  syncedBytecodes: 58012,
  syncedStorage: 42832820,
  syncedStorageBytes: 9263550660
}

# If synced:
false
  3. Type exit to exit the Geth console.
  4. Run the following curl command to check if the node is still syncing. The command-line tool jq formats the JSON output of the curl command. An output of "false" means the node is synced with the network:
curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"eth_syncing","id":67}' http://localhost:8545 | jq

Output:

# If not synced:
{
  "jsonrpc": "2.0",
  "id": 67,
  "result": {
    "currentBlock": "0x4d70e9",
    "healedBytecodeBytes": "0x0",
    "healedBytecodes": "0x0",
    "healedTrienodeBytes": "0x0",
    "healedTrienodes": "0x0",
    "healingBytecode": "0x0",
    "healingTrienodes": "0x0",
    "highestBlock": "0xf71975",
    "startingBlock": "0x0",
    "syncedAccountBytes": "0x8b65b62d",
    "syncedAccounts": "0x8fb1e5",
    "syncedBytecodeBytes": "0x1209479c",
    "syncedBytecodes": "0xe29c",
    "syncedStorage": "0x28d93b4",
    "syncedStorageBytes": "0x2282690c4"
  }
}

# If synced:
{"jsonrpc":"2.0","id":67,"result":false}
  5. Run the following curl command to check that the node is accessible through the external IP address:
curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"eth_syncing","id":67}' http://$EXT_IP_ADDRESS:8545 | jq

Output:

# If not synced:
{
  "jsonrpc": "2.0",
  "id": 67,
  "result": {
    "currentBlock": "0x4d70e9",
    "healedBytecodeBytes": "0x0",
    "healedBytecodes": "0x0",
    "healedTrienodeBytes": "0x0",
    "healedTrienodes": "0x0",
    "healingBytecode": "0x0",
    "healingTrienodes": "0x0",
    "highestBlock": "0xf71975",
    "startingBlock": "0x0",
    "syncedAccountBytes": "0x8b65b62d",
    "syncedAccounts": "0x8fb1e5",
    "syncedBytecodeBytes": "0x1209479c",
    "syncedBytecodes": "0xe29c",
    "syncedStorage": "0x28d93b4",
    "syncedStorageBytes": "0x2282690c4"
  }
}

# If synced:
{"jsonrpc":"2.0","id":67,"result":false}
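
With the node reachable, you can exercise any of the APIs enabled by --http.api. As a quick sketch using two standard JSON-RPC methods (not a lab step):

# Latest block number known to the node (returned hex-encoded)
curl -s -H "Content-Type: application/json" -X POST \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8545 | jq

# Client name and version
curl -s -H "Content-Type: application/json" -X POST \
  --data '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":1}' \
  http://localhost:8545 | jq

While the node is still syncing, eth_blockNumber may report 0x0; it becomes meaningful once eth_syncing returns false.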

Task 4. Configure Cloud operations

Google Cloud provides several operations services to manage your Ethereum node. This section walks through configuring Cloud Logging, Managed Prometheus, Cloud Monitoring, and alerting.

Configure Cloud logging

By default, Geth and Lighthouse log to the files declared in their startup commands. You'll want to bring that log data into Cloud Logging, which has powerful search capabilities and lets you create alerts for specific log messages.

  1. Update permissions of the Cloud Ops config file so you can update it:
sudo chmod 666 /etc/google-cloud-ops-agent/config.yaml
  2. Configure the Ops Agent to send log data to Cloud Logging by appending the following configuration to /etc/google-cloud-ops-agent/config.yaml. It declares the Geth and Lighthouse log files as receivers for import into Cloud Logging:
sudo cat << EOF >> /etc/google-cloud-ops-agent/config.yaml
logging:
  receivers:
    syslog:
      type: files
      include_paths:
      - /var/log/messages
      - /var/log/syslog
    ethGethLog:
      type: files
      include_paths: ["/mnt/disks/chaindata-disk/ethereum/geth/logs/geth.log"]
      record_log_file_path: true
    ethLighthouseLog:
      type: files
      include_paths: ["/mnt/disks/chaindata-disk/ethereum/lighthouse/logs/lighthouse.log"]
      record_log_file_path: true
    journalLog:
      type: systemd_journald
  service:
    pipelines:
      logging_pipeline:
        receivers:
        - syslog
        - journalLog
        - ethGethLog
        - ethLighthouseLog
EOF
  3. After saving, run these commands to restart the agent and pick up the changes:
sudo systemctl stop google-cloud-ops-agent
sudo systemctl start google-cloud-ops-agent
sudo systemctl status google-cloud-ops-agent
  4. Enter Ctrl+C to exit the status screen.
  5. If there is an error in the status, use this command to see more details:
sudo journalctl -xe | grep "google_cloud_ops_agent_engine"
  6. Check Cloud Logging to confirm that log messages are appearing in the console. From the Navigation Menu, under the Logging section, click Logs Explorer. You should see messages similar to these:
The Query results page, which lists several messages and their previews.

Configure Managed Prometheus

Because you started the Geth and Lighthouse clients with the --metrics flag, both clients expose metrics on an HTTP port. These metrics can be stored in a time-series database such as Prometheus and used to drive Grafana dashboards. Normally you would need to install Prometheus on the VM, but a small configuration in the Cloud Ops agent can scrape the metrics and store them in the Managed Prometheus service in Google Cloud.

  1. On the VM command line, confirm the Geth metrics endpoint is active:
curl http://localhost:6060/debug/metrics/prometheus

Output:

......
# TYPE vflux_server_clientEvent_deactivated gauge
vflux_server_clientEvent_deactivated 0
# TYPE vflux_server_clientEvent_disconnected gauge
vflux_server_clientEvent_disconnected 0
# TYPE vflux_server_inactive_count gauge
vflux_server_inactive_count 0
  2. On the VM command line, confirm the Lighthouse metrics endpoint is active:
curl http://localhost:5054/metrics

Output:

......
gossipsub_heartbeat_duration_bucket{le="300.0"} 5679573
gossipsub_heartbeat_duration_bucket{le="350.0"} 5679573
gossipsub_heartbeat_duration_bucket{le="400.0"} 5679573
gossipsub_heartbeat_duration_bucket{le="450.0"} 5679573
gossipsub_heartbeat_duration_bucket{le="+Inf"} 5679573
......
  3. Configure the Ops Agent to send metrics data to Managed Prometheus by appending the following configuration to /etc/google-cloud-ops-agent/config.yaml. It defines the Geth and Lighthouse metrics endpoints as Prometheus scrape targets:
sudo cat << EOF >> /etc/google-cloud-ops-agent/config.yaml
metrics:
  receivers:
    prometheus:
      type: prometheus
      config:
        scrape_configs:
        - job_name: 'geth_exporter'
          scrape_interval: 10s
          metrics_path: /debug/metrics/prometheus
          static_configs:
          - targets: ['localhost:6060']
        - job_name: 'lighthouse_exporter'
          scrape_interval: 10s
          metrics_path: /metrics
          static_configs:
          - targets: ['localhost:5054']
  service:
    pipelines:
      prometheus_pipeline:
        receivers:
        - prometheus
EOF
  4. After saving, run these commands to restart the agent and pick up the changes:
sudo systemctl stop google-cloud-ops-agent
sudo systemctl start google-cloud-ops-agent
sudo systemctl status google-cloud-ops-agent
  5. Enter Ctrl+C to exit the status screen.
  6. If there is an error in the status, use this command to see more details:
sudo journalctl -xe | grep "google_cloud_ops_agent_engine"
  7. Check that the metrics are appearing in the console. From the Navigation Menu, under the Monitoring section, click Metrics Explorer. Select the <> PromQL option. In the query box, enter the Lighthouse metric gossipsub_heartbeat_duration_bucket. Click RUN QUERY. You should see results similar to this:
The PROMQL query results page, which lists the requested data metrics and their values.

You can run the same query for a Geth metric (for example, rpc_duration_eth_blockNumber_success_count) to confirm that Geth metrics are shown.
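
If a query returns no data, you can first confirm the metric exists at the source by grepping the local endpoints directly (a quick check, not a lab step):

curl -s http://localhost:6060/debug/metrics/prometheus | grep rpc_duration_eth_blockNumber
curl -s http://localhost:5054/metrics | grep gossipsub_heartbeat_duration_bucket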

View Cloud monitoring

Cloud Monitoring should already be active for your virtual machine.

  1. From the Navigation Menu, under the Compute Engine section, click VM Instances.
  2. Click the VM eth-mainnet-rpc-node.
  3. Click the OBSERVABILITY tab.
  4. All sections should show graphs of different metrics from the VM.
The Observability tabbed page, which includes several graphs for the metrics, such as Network Traffic and CPU Utilization.
  5. Click through the different sub-menus and timeframes to explore the types of metrics captured directly from the VM.

Configure notification channel

Configure a notification channel that alerts will be sent to:

  1. From the Navigation Menu, under the Monitoring section, click Alerting.
  2. Click EDIT NOTIFICATION CHANNELS.
  3. Under Email, click ADD NEW.
  4. Type in the Email address and Display Name of the person who should receive the notifications.

Configure metrics alerts

Configure alerts based on VM metrics:

  1. From the Navigation Menu, under the Monitoring section, click Alerting.
  2. Click CREATE POLICY.
  3. Click SELECT A METRIC.
  4. Click VM Instance > Disk > Disk Utilization and click Apply.
  5. Add filters:
Property | Value (type or select)
device   | /dev/sdb
state    | used
The Create Policy page, which includes the aforementioned fields.
  6. Click NEXT.
  7. Enter the Threshold value: 90%
The Configure alert trigger page, which includes the aforementioned field.
  8. Click NEXT, then select the following values:
Property                    | Value (type or select)
Use notification channel    | select
Notify on incident closure  | check
Incident autoclose duration | 2 days
Documentation               | Check the disk space of the VM
Name                        | VM - Disk space alert - 90% utilization
The Configure notifications and finalize alert page, which includes the aforementioned fields, the Documentation and Name the alert policy text fields, and the Next button.
  9. Click NEXT.
  10. Click CREATE POLICY.

Configure Uptime checks

Configure uptime checks for the HTTP endpoint:

  1. From the Navigation Menu, under the Monitoring section, click Uptime checks.
  2. Click CREATE UPTIME CHECK.
  3. Configure the uptime check with the following values:
Property      | Value (type or select)
Protocol      | HTTP
Resource Type | Instance
Applies to    | Single: Instance eth-mainnet-rpc-node
Path          | /
Expand More target options:
Request Method | GET
Port           | 8545
Click CONTINUE to accept the defaults.
Click CONTINUE to accept the defaults.
Choose notification channel | Select the notification channel created previously
Title                       | eth-mainnet-rpc-node-uptime-check
  4. Click TEST (it should show a success of 200 OK).
  5. Click CREATE.
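
Optionally, you can reproduce the uptime check's request from the VM; per the test above, a plain GET against port 8545 should return HTTP 200:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8545/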

Click Check my progress to verify the objective.

Configure Cloud Operations

Congratulations!

In this lab, you created a Compute Engine instance with a persistent disk, configured a static IP address and network firewall rules, scheduled backups, deployed the Geth and Lighthouse Ethereum clients, tested the setup with Ethereum RPC calls, configured Cloud Logging and Cloud Monitoring, and configured uptime checks.

Next steps / learn more

Check out these resources to continue your Ethereum journey:

  • To learn more about the Ethereum execution-layer client, refer to Geth.
  • To learn more about the Ethereum consensus-layer client, refer to Lighthouse.
  • To learn more about Ethereum in general, refer to Ethereum.
  • To learn more about Google Cloud for Web3, refer to the Google Cloud for Web3 website.
  • To learn more about Blockchain Node Engine for Google Cloud, refer to the Blockchain Node Engine page.

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated December 26, 2024

Lab Last Tested December 26, 2024

Copyright 2025 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
