How to Install and Configure Cluster with Two Nodes in Linux?
Clusters have become a cornerstone of modern computing. A cluster is a group of interconnected computers that work together as a single entity to provide high-performance computing, data analysis, and fault tolerance. In this tutorial, we will demonstrate how to install and configure a two-node cluster in Linux using Pacemaker and Corosync.
A cluster consists of two or more nodes that work together as a single system. Each node is a separate computer with its own resources (CPU, memory, storage) connected through a network for communication and resource sharing. Pacemaker acts as the cluster resource manager that manages service availability, while Corosync provides the cluster messaging system for inter-node communication.
Prerequisites
Two Linux servers (CentOS 7/8 or RHEL recommended)
Static IP addresses configured on both nodes
Proper hostname resolution between nodes
Root or sudo access on both systems
Firewall configured to allow cluster traffic
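If DNS is not available, hostname resolution can be provided through /etc/hosts on both nodes. A minimal sketch, assuming the node names node1/node2 used throughout this tutorial and example addresses on the same subnet as the virtual IP configured later:

```shell
# Run on BOTH nodes. Addresses 192.168.1.10/.11 are examples; use the
# static IPs you actually assigned to the nodes.
sudo tee -a /etc/hosts <<'EOF'
192.168.1.10  node1
192.168.1.11  node2
EOF
# Confirm each name resolves:
getent hosts node1 node2
```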
Cluster Architecture Overview
Both nodes run the same Pacemaker/Corosync stack and communicate over the network. A floating virtual IP follows whichever node currently hosts the services, so clients always reach the active node, and resources fail over automatically if a node goes down.
Step 1: Install Required Packages
Install Pacemaker and Corosync on both nodes:
sudo yum install -y pacemaker corosync pcs fence-agents-all
Enable and start the pcsd service for cluster management:
sudo systemctl enable pcsd
sudo systemctl start pcsd
Step 2: Configure Firewall
Open necessary ports for cluster communication:
sudo firewall-cmd --permanent --add-service=high-availability
sudo firewall-cmd --permanent --add-port=2224/tcp
sudo firewall-cmd --permanent --add-port=5405/udp
sudo firewall-cmd --reload
Step 3: Set Up Cluster Authentication
Set a password for the hacluster user on both nodes:
sudo passwd hacluster
From Node 1, authenticate with both nodes (replace password with the hacluster password you set in the previous step):
sudo pcs host auth node1 node2 -u hacluster -p password
Step 4: Create and Configure the Cluster
Initialize the cluster from Node 1:
sudo pcs cluster setup mycluster node1 node2
sudo pcs cluster enable --all
sudo pcs cluster start --all
Verify cluster status:
sudo pcs cluster status
Cluster Status:
 Cluster Summary:
   * Stack: corosync
   * Current DC: node1 (version 2.0.5-9.el8_4.3-ba59be7122)
   * Last updated: Mon Jan 01 12:00:00 2024
   * 2 nodes configured
   * 0 resource instances configured
Step 5: Disable STONITH and Configure Quorum
For a two-node cluster, disable STONITH and relax the quorum policy. This is acceptable for a lab or test setup; production clusters should keep STONITH enabled and configure proper fencing devices instead:
sudo pcs property set stonith-enabled=false
sudo pcs property set no-quorum-policy=ignore
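On recent Corosync versions, pcs cluster setup also enables the votequorum two_node option for two-node clusters, which lets the surviving node keep quorum when its peer fails. The corresponding section of /etc/corosync/corosync.conf looks roughly like this (a sketch; exact contents vary by version):

```
quorum {
    provider: corosync_votequorum
    two_node: 1
}
```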
Step 6: Create Cluster Resources
Virtual IP Resource
sudo pcs resource create virtual_ip ocf:heartbeat:IPaddr2 \
ip=192.168.1.100 cidr_netmask=24 \
op monitor interval=30s
Apache Web Server Resource
sudo pcs resource create webserver ocf:heartbeat:apache \
configfile="/etc/httpd/conf/httpd.conf" \
statusurl="http://localhost/server-status" \
op monitor interval=1min
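The apache resource agent polls statusurl to decide whether httpd is healthy, so mod_status must answer at /server-status on both nodes. A minimal sketch, assuming the standard CentOS/RHEL conf.d layout:

```shell
# Enable mod_status locally so the agent's health check can reach
# http://localhost/server-status (run on BOTH nodes):
sudo tee /etc/httpd/conf.d/status.conf <<'EOF'
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
EOF
```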
Group Resources Together
sudo pcs resource group add web_group virtual_ip webserver
Step 7: Verify Cluster Configuration
Check cluster status and resource configuration:
sudo pcs status
Cluster name: mycluster
Cluster Summary:
* Stack: corosync
* Current DC: node1
* 2 nodes configured
* 2 resource instances configured
Node List:
* Online: [ node1 node2 ]
Full List of Resources:
* Resource Group: web_group
* virtual_ip (ocf::heartbeat:IPaddr2): Started node1
* webserver (ocf::heartbeat:apache): Started node1
Step 8: Test Failover
Simulate node failure by putting a node in standby mode:
sudo pcs node standby node1
Verify resources migrate to the other node:
sudo pcs status
Bring the node back online:
sudo pcs node unstandby node1
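While testing failover, it helps to watch resource placement change in real time. crm_mon, which ships with Pacemaker, refreshes the status view automatically:

```shell
# Follow cluster status interactively (press q to quit):
sudo crm_mon
# Or print the status once and exit:
sudo crm_mon -1
```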
Network Topology Considerations
| Topology | Description | Pros | Cons |
|---|---|---|---|
| Direct Connection | Nodes connected directly via dedicated network | Low latency, simple setup | Single point of failure |
| Switch-based | Nodes connected through managed switch | Reliable, expandable | Switch becomes critical component |
| Redundant Network | Multiple network paths between nodes | High availability, fault tolerant | Complex configuration |
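With pcs 0.10+ (Corosync 3/knet), the redundant-network topology from the table can be requested at setup time by giving each node one address per link. This would replace the single-address setup command from Step 4; the second-link addresses are hypothetical:

```shell
# Two corosync links per node: the LAN (192.168.1.x) plus a dedicated
# back-to-back link (10.0.0.x). Substitute your own addresses.
sudo pcs cluster setup mycluster \
    node1 addr=192.168.1.10 addr=10.0.0.10 \
    node2 addr=192.168.1.11 addr=10.0.0.11
```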
Common Troubleshooting Commands
# Check cluster configuration
sudo pcs config

# View cluster logs
sudo journalctl -u corosync
sudo journalctl -u pacemaker

# Reset (permanently destroy) the cluster configuration
sudo pcs cluster destroy
Conclusion
Setting up a two-node cluster in Linux using Pacemaker and Corosync provides high availability for critical services. The cluster automatically manages resource failover between nodes, ensuring service continuity. Proper testing, monitoring, and maintenance are essential for production deployments to ensure reliable cluster operation.
