Sunday 20 April 2014

High Availability: Configure Piranha for HTTP, HTTPS and MySQL


Piranha is a simple yet powerful tool to manage virtual IPs and services through its web-based GUI.
Following up on my previous post on how to install and configure Piranha for HTTP service (http://blog.secaserver.com/2012/07/centos-configure-piranha-load-balancer-direct-routing-method/), in this post we will complete the Piranha configuration with HTTP and HTTPS load balancing using direct routing with firewall marks, and MySQL load balancing using direct routing only.
HTTP/HTTPS will be accessed by users via the virtual public IP 130.44.50.120, while the MySQL service will be accessed by the web servers via the virtual private IP 192.168.100.30. Kindly refer to the picture below for the full architecture:
All Servers
SELINUX must be turned off on all servers. Change the SELINUX configuration file at /etc/sysconfig/selinux:
SELINUX=disabled
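The change above only takes effect after a reboot. To stop SELinux from enforcing immediately on a running server, you can also switch it to permissive mode for the current session:
$ setenforce 0
$ getenforce
Permissive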
Load Balancers
1. All steps should be done on both load balancers unless specified otherwise. We will install Piranha and other required packages using yum:
$ yum install piranha ipvsadm mysql -y
2. Open firewall ports as below:
$ iptables -A INPUT -m tcp -p tcp --dport 3636 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 80 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 443 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 539 -j ACCEPT
$ iptables -A INPUT -m udp -p udp --dport 161 -j ACCEPT
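These rules are not persistent across reboots. Assuming you are using the stock CentOS iptables init script, save them with:
$ service iptables save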
3. Start all required services and make sure they start automatically if the server reboots:
$ service piranha-gui start
$ chkconfig piranha-gui on
$ chkconfig pulse on
4. Run the following command to set a password for user piranha. This will be used when accessing the web-based configuration tool:
$ piranha-passwd
5. Turn on IP forwarding. Open /etc/sysctl.conf and make sure the following line has the value 1:
net.ipv4.ip_forward = 1
Then run the following command to apply it:
$ sysctl -p
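You can verify that forwarding is now active:
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1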
6. Check that the iptables kernel modules are loaded properly:
$ lsmod | grep ip_tables
ip_tables              17733  3 iptable_filter,iptable_mangle,iptable_nat
7. Since we will serve both HTTP and HTTPS from the same servers, we need to group this traffic so that it is forwarded to the same destination. To achieve this, we mark the packets with iptables so they are recognized correctly on the destination server. Set the iptables rules to mark all packets destined for the virtual IP on ports 80 and 443 with the firewall mark "80":
$ iptables -t mangle -A PREROUTING -p tcp -d 130.44.50.120/32 --dport 80 -j MARK --set-mark 80
$ iptables -t mangle -A PREROUTING -p tcp -d 130.44.50.120/32 --dport 443 -j MARK --set-mark 80
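To confirm the marks are being applied, check the packet counters in the mangle table, and save the rules again so the marks survive a reboot:
$ iptables -t mangle -L PREROUTING -n -v
$ service iptables save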
Load Balancer #1
1. Check that the IP addresses are set up correctly:
$ ip a | grep inet
inet 130.44.50.121/28 brd 130.44.50.127 scope global eth0
inet 192.168.100.41/24 brd 192.168.100.255 scope global eth1
2. Log in to Piranha at http://130.44.50.121:3636/ as user piranha, with the password set up in step #4 of the Load Balancers section.
3. Enable redundancy. Go to Piranha > Redundancy > Enable.
4. Enter the IP information as below:
Redundant server public IP : 130.44.50.122
Monitor NIC links for failures : Enabled
Use sync daemon : Enabled
Click ‘Accept’.
5. Go to Piranha > Virtual Servers > Add > Edit. Add information as below and click ‘Accept’:


6. Next, go to Real Server. This is where we put the IP addresses of all the real servers that serve HTTP. Fill in the required information as below:

7. Now we need to do a similar setup for HTTPS. Just change the 'Application port' to 443, and for the Real Server entries change the real server's destination port to 443 as well.
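Since the configuration screenshots are not reproduced here, the sketch below shows roughly what the resulting HTTP/HTTPS virtual server block in /etc/sysconfig/ha/lvs.cf could look like. The device label (eth0:1), scheduler, send/expect strings, timeouts and weights are illustrative assumptions, not values taken from the screenshots:
virtual HTTP_HTTPS {
     active = 1
     address = 130.44.50.120 eth0:1
     vip_nmask = 255.255.255.255
     fwmark = 80
     port = 80
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     use_regex = 0
     load_monitor = none
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server web1 {
         address = 130.44.50.123
         active = 1
         weight = 1
     }
     server web2 {
         address = 130.44.50.124
         active = 1
         weight = 1
     }
     server web3 {
         address = 130.44.50.125
         active = 1
         weight = 1
     }
}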
8. For MySQL virtual server, enter information as below:


9. For MySQL real servers, enter information as below:


10. Configure monitoring script for MySQL virtual server. Click on ‘Monitoring Script’ and configure as below:
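The key fields on this screen attach the health check script (created in step #11 below) to the MySQL virtual server. Assuming the script is saved as /root/mysql_mon.sh, a typical configuration would be:
Sending Program : /root/mysql_mon.sh %h
Expect : UP
Piranha substitutes the %h token with each real server's IP address, which the script receives as its first argument, and nanny marks a node as up when the script output matches 'UP'.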


11. Set up the monitoring script for MySQL:
$ vim /root/mysql_mon.sh
And add the following lines:
#!/bin/sh
# Simple MySQL health check for Piranha/nanny.
# $1 is the IP address of the real server to check.
USER=monitor
PASS=M0Npass5521
####################################################################
CMD=/usr/bin/mysqladmin

IS_ALIVE=`$CMD -h $1 -u $USER -p$PASS ping | grep -c "alive"`

if [ "$IS_ALIVE" = "1" ]; then
    echo "UP"
else
    echo "DOWN"
fi
12. Change the script permission to executable:
$ chmod 755 /root/mysql_mon.sh
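You can test the script manually against one of the MySQL real servers (192.168.100.33 is Mysql1's private IP in this setup). Note that the check will only succeed once the monitor user has been granted access in the Database Cluster section below:
$ /root/mysql_mon.sh 192.168.100.33
UP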
13. Now copy over the script and Piranha configuration file to load balancer #2:
$ scp /etc/sysconfig/ha/lvs.cf lb2:/etc/sysconfig/ha/lvs.cf
$ scp /root/mysql_mon.sh lb2:/root/
14. Restart Pulse to activate the Piranha configuration in LB#1:
$ service pulse restart
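Once pulse is running, you can verify that LVS has picked up the configuration with ipvsadm; you should see an FWM 80 entry for HTTP/HTTPS and a TCP entry for 192.168.100.30:3306, each listing its real servers with the Route forwarding method:
$ ipvsadm -L -n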
Load Balancer #2
On this server, we just need to enable and restart the pulse service as below:
$ chkconfig pulse on
$ service pulse restart
Database Cluster
1. We need to allow the MySQL monitoring user from nanny (the load balancers) in the MySQL cluster. Log in to the MySQL console on one of the servers and run the following SQL command:
mysql> GRANT USAGE ON *.* TO monitor@'%' IDENTIFIED BY 'M0Npass5521';
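From either load balancer you can confirm the grant works by running the same check nanny will perform, against one of the MySQL real servers (192.168.100.33 in this example):
$ mysqladmin -h 192.168.100.33 -u monitor -pM0Npass5521 ping
mysqld is alive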
2. Add the virtual IP manually using iproute:
$ /sbin/ip addr add 192.168.100.30 dev eth1
3. Add the following entry to /etc/rc.local to make sure the virtual IP is up after boot:
$ echo '/sbin/ip addr add 192.168.100.30 dev eth1' >> /etc/rc.local
Attention: If you restart the interface that holds the virtual IP on this server, you need to execute step #2 again to bring the virtual IP up manually. VIPs cannot be configured to start on boot.
4. Check the IPs on the server. The example below was taken from server Mysql1:
$ ip a | grep inet
inet 130.44.50.127/24 brd 130.44.50.255 scope global eth0
inet 192.168.100.33/24 brd 192.168.100.255 scope global eth1
inet 192.168.100.30/32 scope global eth1
Web Cluster
1. On each and every web server, we need to install a package called arptables_jf from yum. We will use this to manage our ARP table entries and rules:
$ yum install arptables_jf -y
2. Add the following rules for each server respectively:
Web1:
arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.123
Web2:
arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.124
Web3:
arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.125
3. Save the rules, enable arptables_jf to start on boot, and restart the service:
$ service arptables_jf save
$ chkconfig arptables_jf on
$ service arptables_jf restart
4. Add the virtual IP manually into the server using iproute command as below:
$ /sbin/ip addr add 130.44.50.120 dev eth0
5. Add the following entry to /etc/rc.local to make sure the virtual IP is up after boot:
$ echo '/sbin/ip addr add 130.44.50.120 dev eth0' >> /etc/rc.local
Attention: If you restart the interface that holds the virtual IP on this server, you need to execute step #4 again to bring the virtual IP up manually. VIPs cannot be configured to start on boot.
6. Check the IPs on the server. The example below was taken from server Web1:
$ ip a | grep inet
inet 130.44.50.123/28 brd 130.44.50.127 scope global eth0
inet 130.44.50.120/32 scope global eth0
inet 192.168.100.21/24 brd 192.168.100.255 scope global eth1
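At this point you can test the whole setup end to end: from any external client the HTTP virtual IP should respond, and from any web server the MySQL virtual IP should accept connections (the monitor credentials are reused here purely for illustration):
$ curl -I http://130.44.50.120/
$ mysql -h 192.168.100.30 -u monitor -pM0Npass5521 -e 'SELECT 1'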
You now have a complete high availability MySQL and HTTP/HTTPS setup, with automatic failover and load balancing provided by Piranha using the direct routing method.
In this tutorial I am not focusing on HTTPS, because in this test environment I do not have SSL set up correctly and did not have much time to do so. By the way, you may use the following BASH script to monitor HTTPS from Piranha (nanny):
#!/bin/bash

if [ $# -eq 0 ]; then
    echo "host not specified"
    exit 1
fi

# probe the HTTPS port of the host passed as the first argument
curl -s --insecure \
    --cert /etc/crt/hostcert.pem \
    --key /etc/crt/hostkey.pem \
    https://$1/ &>/dev/null

if [ $? -eq 0 ]; then
    echo "UP"
else
    echo "DOWN"
fi
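Assuming you save it as, say, /root/https_mon.sh (the path and name here are just placeholders) and make it executable with chmod 755, it can be attached to the HTTPS virtual server's monitoring script screen in the same way as the MySQL monitor:
Sending Program : /root/https_mon.sh %h
Expect : UP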
I hope this tutorial could be useful for some guys out there!
