Pacemaker - Install
Install the Linux High-Availability cluster tool "Pacemaker".
This example shows how to configure it in the following environment.
(1) www01.server.world ( eth0 [192.168.1.60], eth1 [10.0.0.60] )
(2) www02.server.world ( eth0 [192.168.1.61], eth1 [10.0.0.61] )
This example uses eth0 for the cluster interconnect and eth1 for providing services.
[1] Install Pacemaker on both hosts.
[root@www01 ~]# yum -y install pacemaker
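On CentOS, the corosync packages are normally pulled in as dependencies of pacemaker; if they are not, they can be installed explicitly (an extra, hedged step that is not part of the original procedure).
[root@www01 ~]# yum -y install corosync    # only needed if it was not installed as a dependency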
[2] Create an authkeys file that is used for the interconnect. Configure it on both hosts.
[root@www01 ~]# vim /etc/ha.d/authkeys
auth 1
1 sha1 secret
[root@www01 ~]# chmod 600 /etc/ha.d/authkeys
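Note that corosync.conf below sets [secauth: off], so Corosync itself does not use an authentication key in this example. If you later enable secauth, a key can be generated and copied to the other host roughly as follows (a hedged sketch; the paths are the Corosync defaults).
[root@www01 ~]# corosync-keygen    # generates /etc/corosync/authkey from /dev/random
[root@www01 ~]# scp -p /etc/corosync/authkey www02.server.world:/etc/corosync/authkey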
[3] Configure Corosync on both hosts.
[root@www01 ~]# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
[root@www01 ~]# vim /etc/corosync/corosync.conf
compatibility: whitetank
# add the following
aisexec {
    user: root
    group: root
}
service {
    name: pacemaker
    ver: 0
    use_mgmtd: yes
}
totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        # specify the network address used for the interconnect
        bindnetaddr: 192.168.1.0
        mcastaddr: 226.94.1.1
        mcastport: 5405
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
amf {
    mode: disabled
}
[root@www01 ~]# chown -R hacluster. /var/log/cluster
[root@www01 ~]# /etc/rc.d/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
[root@www01 ~]# chkconfig corosync on
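As an optional sanity check (not part of the original steps), corosync-cfgtool can show whether the ring on the interconnect is healthy; the id shown should be the eth0 address of the local host.
[root@www01 ~]# corosync-cfgtool -s    # ring 0 should report "active with no faults" on 192.168.1.60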
[4] Execute crm_mon on one host; if output like the following is shown, the basic Pacemaker setup is done. It's necessary to configure more if you'd like to run a service as a cluster resource; see the next steps.
[root@www01 ~]# crm_mon
============
Last updated: Fri Jul 15 20:56:49 2011
Stack: openais
Current DC: www01.server.world - partition with quorum
Version: 1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ www01.server.world www02.server.world ]
[root@www01 ~]# crm configure property no-quorum-policy="ignore" stonith-enabled="false"
[root@www01 ~]# crm configure rsc_defaults resource-stickiness="INFINITY" migration-threshold="1"
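To confirm that the properties and resource defaults were stored, the configuration can be displayed and checked (an extra verification step, not in the original procedure).
[root@www01 ~]# crm configure show    # display the current cluster configuration
[root@www01 ~]# crm_verify -L         # check the live CIB for configuration errors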
[5] If you'd like to clear all Pacemaker settings, do as follows.
[root@www01 ~]# /etc/rc.d/init.d/corosync stop # stop
[root@www01 ~]# rm -f /var/lib/heartbeat/crm/* # remove all
[root@www01 ~]# /etc/rc.d/init.d/corosync start # start
Pacemaker - Set Virtual IP Address
This example shows how to configure a Virtual IP Address in the following environment.
(1) www01.server.world ( eth0 [192.168.1.60], eth1 [10.0.0.60] )
(2) www02.server.world ( eth0 [192.168.1.61], eth1 [10.0.0.61] )
This example uses eth0 for the cluster interconnect and eth1 for providing services, and sets [10.0.0.100] as the Virtual IP Address.
[1] Configure the Virtual IP Address on one host (www01.server.world).
[root@www01 ~]# crm configure
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 \
> params ip="10.0.0.100" \ # Virtual IP Address
> nic="eth1" \
> cidr_netmask="24" \
> op start interval="0s" timeout="60s" \
> op monitor interval="5s" timeout="20s" \
> op stop interval="0s" timeout="60s"
crm(live)configure# show # confirm settings
node www01.server.world
node www02.server.world
primitive vip ocf:heartbeat:IPaddr2 \
params ip="10.0.0.100" nic="eth1" cidr_netmask="24" \
op start interval="0s" timeout="60s" \
op monitor interval="5s" timeout="20s" \
op stop interval="0s" timeout="60s"
property $id="cib-bootstrap-options" \
dc-version="1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
no-quorum-policy="ignore" \
stonith-enabled="false"
rsc_defaults $id="rsc-options" \
resource-stickiness="INFINITY" \
migration-threshold="1"
crm(live)configure# commit # enable settings
crm(live)configure# exit
bye
[2] Execute crm_mon and confirm the status.
[root@www01 ~]# crm_mon
============
Last updated: Fri Jul 15 20:59:16 2011
Stack: openais
Current DC: www01.server.world - partition with quorum
Version: 1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ www01.server.world www02.server.world ]
vip (ocf::heartbeat:IPaddr2): Started www01.server.world
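To double-check where the address actually lives, look at eth1 on the node that crm_mon reports as active (an extra verification step; eth1 is taken from the environment above).
[root@www01 ~]# ip addr show eth1    # 10.0.0.100/24 should be listed as an additional address on the active node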
[3] Ping the Virtual IP Address and make sure it answers.
[root@www01 ~]# ping 10.0.0.100
PING 10.0.0.100 (10.0.0.100) 56(84) bytes of data.
64 bytes from 10.0.0.100: icmp_seq=1 ttl=64 time=0.016 ms
64 bytes from 10.0.0.100: icmp_seq=2 ttl=64 time=0.008 ms
64 bytes from 10.0.0.100: icmp_seq=3 ttl=64 time=0.009 ms
64 bytes from 10.0.0.100: icmp_seq=4 ttl=64 time=0.025 ms
64 bytes from 10.0.0.100: icmp_seq=5 ttl=64 time=0.014 ms
64 bytes from 10.0.0.100: icmp_seq=6 ttl=64 time=0.008 ms

--- 10.0.0.100 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5149ms
rtt min/avg/max/mdev = 0.008/0.013/0.025/0.006 ms
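As an optional test (not in the original steps), the vip resource can be moved between the nodes with the crm shell; migrate places a temporary location constraint and unmigrate removes it again. This is a hedged sketch and subcommand names may differ slightly between crmsh versions.
[root@www01 ~]# crm resource migrate vip www02.server.world    # move the VIP to www02
[root@www01 ~]# crm_mon -1                                     # vip should now be started on www02.server.world
[root@www01 ~]# crm resource unmigrate vip                     # remove the constraint created by migrate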
Pacemaker - Cluster Configuration for httpd
Configure clustering for httpd. Stop httpd on both hosts first.
This example shows how to configure it in the following environment.
(1) www01.server.world ( eth0 [192.168.1.60], eth1 [10.0.0.60] )
(2) www02.server.world ( eth0 [192.168.1.61], eth1 [10.0.0.61] )
This example uses eth0 for the cluster interconnect and eth1 for providing services.
[1] Set Virtual IP Address first. (done above)
[2] Enable server-status on httpd. Set it on both hosts.
[root@www01 ~]# vim /etc/httpd/conf/httpd.conf
# line 921-926: uncomment and change the access permissions
<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from all
Allow from 127.0.0.1 10.0.0.0/24
</Location>
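The ocf:heartbeat:apache resource agent monitors httpd through this server-status page, so it's worth checking locally that the page answers while httpd is running (curl is assumed to be installed; this check is an addition to the original steps).
[root@www01 ~]# curl -s http://127.0.0.1/server-status | head    # while httpd is running, this should return the status page rather than a 403 error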
[3] Configure clustering. Set it on one host.
[root@www01 ~]# crm configure
crm(live)configure# primitive httpd ocf:heartbeat:apache \
> params configfile="/etc/httpd/conf/httpd.conf" \
> port="80" \
> op start interval="0s" timeout="60s" \
> op monitor interval="5s" timeout="20s" \
> op stop interval="0s" timeout="60s"
crm(live)configure# group webserver vip httpd # create a group
crm(live)configure# show # confirm settings
node www01.server.world
node www02.server.world
primitive httpd ocf:heartbeat:apache \
params configfile="/etc/httpd/conf/httpd.conf" port="80" \
op start interval="0s" timeout="60s" \
op monitor interval="5s" timeout="20s" \
op stop interval="0s" timeout="60s"
primitive vip ocf:heartbeat:IPaddr2 \
params ip="10.0.0.100" cidr_netmask="24" \
op start interval="0s" timeout="60s" \
op monitor interval="5s" timeout="20s" \
op stop interval="0s" timeout="60s"
group webserver vip httpd
property $id="cib-bootstrap-options" \
dc-version="1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
no-quorum-policy="ignore" \
stonith-enabled="false"
rsc_defaults $id="rsc-options" \
resource-stickiness="INFINITY" \
migration-threshold="1"
crm(live)configure# commit # enable settings
crm(live)configure# exit
bye
[4] Start the services below on both hosts.
Node1
[root@www01]# /etc/init.d/httpd restart
[root@www01]# chkconfig httpd on
[root@www01]# /etc/init.d/corosync restart
Node2
[root@www02]# /etc/init.d/httpd restart
[root@www02]# chkconfig httpd on
[root@www02]# /etc/init.d/corosync restart
[5] Check the status with crm_mon; httpd starts on one host.
[root@www01 ~]# crm_mon
============
Last updated: Fri Jul 15 21:03:50 2011
Stack: openais
Current DC: www01.server.world - partition with quorum
Version: 1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ www01.server.world www02.server.world ]
Resource Group: webserver
vip (ocf::heartbeat:IPaddr2): Started www01.server.world
httpd (ocf::heartbeat:apache): Started www01.server.world
[6] Access the Virtual IP address; the active host answers.
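For example, from a client in the 10.0.0.0/24 network (assuming a test page exists in the DocumentRoot of both hosts), a request to the VIP is answered by whichever host currently runs the webserver group:
$ curl http://10.0.0.100/    # answered by the host that currently holds the vip and httpd resources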
[7] Forcibly stop httpd (or shut down the system) on the active host www01.server.world; the resources then switch to www02.server.world at once.
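A hedged way to reproduce this without shutting the machine down is to stop httpd behind Pacemaker's back on the active node and watch crm_mon on the other node; with the monitor interval of 5s and migration-threshold="1" configured above, the whole webserver group should move to www02.server.world within a few seconds.
[root@www01 ~]# /etc/init.d/httpd stop    # simulate a service failure on the active node
[root@www02 ~]# crm_mon                   # the webserver group (vip and httpd) fails over to www02.server.world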
Pacemaker - Cluster Configuration for Vsftpd
Configure clustering for Vsftpd. Stop Vsftpd on both hosts first.
This example shows how to configure it in the following environment.
(1) www01.server.world ( eth0 [192.168.1.60], eth1 [10.0.0.60] )
(2) www02.server.world ( eth0 [192.168.1.61], eth1 [10.0.0.61] )
This example uses eth0 for the cluster interconnect and eth1 for providing services.
[1] Set the Virtual IP Address first (as above). If you are reusing the same nodes and the resource name "vip" already exists (for example, from the httpd configuration), change "vip" to "vip1" throughout the following configuration.
[2] Configure clustering. Set it on one host. By the way, if you'd like to configure clustering for ProFTPD or Pure-FTPd, the configuration is the same as below; simply replace "vsftpd" with "proftpd" or "pure-ftpd" in the following configuration (a ProFTPD sketch is shown after the configuration).
[root@www01 ~]# crm configure
crm(live)configure# primitive vsftpd lsb:vsftpd \
> op start interval="0s" timeout="60s" \
> op monitor interval="5s" timeout="20s" \
> op stop interval="0s" timeout="60s"
crm(live)configure# group ftpserver vip vsftpd # create a group
crm(live)configure# show # confirm settings
node www01.server.world
node www02.server.world
primitive vip ocf:heartbeat:IPaddr2 \
params ip="10.0.0.100" nic="eth1" cidr_netmask="24" \
op start interval="0s" timeout="60s" \
op monitor interval="5s" timeout="20s" \
op stop interval="0s" timeout="60s"
primitive vsftpd lsb:vsftpd \
op start interval="0s" timeout="60s" \
op monitor interval="5s" timeout="20s" \
op stop interval="0s" timeout="60s"
group ftpserver vip vsftpd
property $id="cib-bootstrap-options" \
dc-version="1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
no-quorum-policy="ignore" \
stonith-enabled="false"
rsc_defaults $id="rsc-options" \
resource-stickiness="INFINITY" \
migration-threshold="1"
crm(live)configure# commit # enable settings
crm(live)configure# exit
bye
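For reference, the ProFTPD variant mentioned in [2] would look roughly like the following. This is only a sketch and assumes an LSB init script named "proftpd" exists under /etc/init.d/ on both hosts.
crm(live)configure# primitive proftpd lsb:proftpd \
> op start interval="0s" timeout="60s" \
> op monitor interval="5s" timeout="20s" \
> op stop interval="0s" timeout="60s"
crm(live)configure# group ftpserver vip proftpd
crm(live)configure# commit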
[3] Check the status with crm_mon; vsftpd starts on one host.
[root@www01 ~]# crm_mon
============
Last updated: Fri Jul 15 21:12:09 2011
Stack: openais
Current DC: www01.server.world - partition with quorum
Version: 1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ www01.server.world www02.server.world ]
Resource Group: ftpserver
vip (ocf::heartbeat:IPaddr2): Started www01.server.world
vsftpd (lsb:vsftpd): Started www01.server.world
[4] Access the Virtual IP address with an FTP client; the active host answers.
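For example, a quick command-line check from a client (the ftp command is assumed to be installed; any FTP client will do):
$ ftp 10.0.0.100    # the vsftpd banner of the currently active host should answer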