Monday, 10 December 2012

About Fail-over & Load balancing & Clustering

What is Fail-over

The goal of fail-over is to allow work that would normally be done by one server to be done by another server should the regular one fail.
For example, Server A responds to all requests unless it has a hardware failure, someone trips over its network cable, or the data center it is located in burns to the ground.  If Server A cannot respond to requests, Server B takes over.
Or if you simply need a service to be highly available, fail-over allows you to perform maintenance on individual servers (nodes) without taking your service off-line.
For fail-over, Server B would ideally be in a separate data center; if that isn't possible, you would at least want to put it on a separate switch and a separate power circuit from Server A. Basically, the more physical separation the better.
BIG FAT NOTE: Failover is not the same as Load Balancing

What is Load balancing

Load balancing lets you spread load over multiple servers. You would want to do this if you were maxing out your CPU or disk IO or network capacity on a particular server.
Alternatives to load balancing include 'scaling' vertically: getting faster or better hardware such as quicker disks, a faster CPU, or a fatter network pipe.

Implementing Fail-over

To implement fail-over you typically need to have your data replicated across multiple machines.  You could do this via rsync+cron for files and directories, and via something like MySQL replication for databases.
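As a minimal sketch of the rsync+cron approach (the hostname, user, and paths below are placeholders, not from the original post), a crontab entry on Server A could push files to Server B every few minutes:

```shell
# Hypothetical crontab entry: replicate /var/www from Server A to
# Server B every 5 minutes. Hostname and paths are placeholders.
*/5 * * * * rsync -az --delete /var/www/ backup@serverb.example.com:/var/www/
```

Note that with a 5-minute interval you can lose up to 5 minutes of file changes on fail-over; database replication (e.g. MySQL's) keeps the lag much smaller.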
One way to trigger the fail-over is to change the IP address your domain points to.  IP address changes can take effect within a few minutes of a DNS server update, though if a client PC has cached the old IP address, it may take a bit longer to notice the change.
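For example, in a hypothetical BIND-style zone fragment (names and IPs are placeholders), keeping a low TTL on the record is what makes the fail-over switch propagate quickly:

```
; Hypothetical zone fragment; hostnames and IPs are placeholders.
; A low TTL (300 seconds) limits how long clients cache the old address.
www.example.com.  300  IN  A  203.0.113.10   ; Server A
; On fail-over, update the record to point at Server B instead:
; www.example.com.  300  IN  A  203.0.113.20
```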

Implementing load balancing
One simple way to implement load balancing is to split services between servers, e.g. running the web server on one machine and the database server on another.
This approach is easy since there are no data replication issues: all necessary files are on the web server, and all necessary database data is on the database server.
Another common load balancing option is to have multiple front-end servers.  To distribute requests across them you could set up multiple IP addresses for a particular domain (round-robin DNS).  Clients then receive all of these addresses and connect to one of them, spreading the load around.
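A round-robin DNS setup is just multiple A records for the same name; a hypothetical zone fragment (placeholder names and IPs) might look like:

```
; Hypothetical zone fragment: two A records for the same name.
; Resolvers return both; clients pick one, spreading the load.
www.example.com.  300  IN  A  203.0.113.10   ; front-end 1
www.example.com.  300  IN  A  203.0.113.20   ; front-end 2
```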
Another way to distribute requests is to have a single virtual IP (VIP) that all clients use, and have the machine holding that virtual IP forward requests to the real servers, e.g. with HAProxy.
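A minimal HAProxy sketch of this setup (back-end addresses are placeholders, and real deployments would add timeouts and logging) could look like:

```
# Hypothetical HAProxy sketch: the VIP machine listens on port 80
# and forwards to two real servers, health-checking each one.
frontend web_vip
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 192.168.1.11:80 check
    server web2 192.168.1.12:80 check
```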
People can also implement load balancing via HTTP balancers like mod_proxy_balancer (introduced in Apache 2.2) and Pound.
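With mod_proxy_balancer, a minimal Apache 2.2 configuration sketch (back-end addresses are placeholders) might look like this:

```
# Hypothetical sketch; requires mod_proxy, mod_proxy_http
# and mod_proxy_balancer to be loaded.
<Proxy balancer://mycluster>
    BalancerMember http://192.168.1.11:8080
    BalancerMember http://192.168.1.12:8080
</Proxy>
ProxyPass / balancer://mycluster/
```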

Clustering vs. Load Balancing – What is the difference?

by Amrinder
These two terms – clustering and load balancing – are used interchangeably by a majority of IT people with relative impunity.
Clustering has a formal meaning. A cluster is a group of resources that are trying to achieve a common objective and are aware of one another. Clustering usually involves setting up the resources (usually servers) to exchange details on a particular channel (port) and keep exchanging their states, so a resource's state is replicated elsewhere as well. It usually also includes load balancing, wherein a request is routed to one of the resources in the cluster as per the load balancing policy.
Load balancing can also happen without clustering, when we have multiple independent servers with the same setup that are otherwise unaware of each other. Then we can use a load balancer to forward requests to one server or another, but one server does not use the other's resources, and no resource shares its state with the others.
Each load balancer basically does the following tasks:
  1. Continuously check which servers are up.
  2. When a new request is received, send it to one of the servers as per the load balancing policy.
  3. When a request is received for a user who already has a session, send the user to the *same* server ('sticky sessions'). This part is important, as otherwise the user would bounce between different servers and not really be able to get any work done. It is not required for serving static pages, since in that case there are no user sessions.
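The three tasks above can be sketched in a few lines. This is an illustrative toy (the class and server names are invented for this sketch, not from any real balancer), with the health checker reduced to a `mark_down` callback:

```python
import itertools

class LoadBalancer:
    """Toy sticky-session round-robin balancer (illustrative only)."""

    def __init__(self, servers):
        self.servers = list(servers)              # servers believed to be up
        self._rr = itertools.cycle(self.servers)  # round-robin iterator
        self.sessions = {}                        # session id -> server

    def mark_down(self, server):
        # Task 1: a health checker would call this when a server fails.
        self.servers.remove(server)
        self._rr = itertools.cycle(self.servers)
        # Without clustering, sessions pinned to this server are lost.
        self.sessions = {sid: srv for sid, srv in self.sessions.items()
                         if srv != server}

    def route(self, session_id=None):
        # Task 3: sticky sessions - reuse the server for a known session.
        if session_id in self.sessions:
            return self.sessions[session_id]
        # Task 2: otherwise pick the next live server round-robin.
        server = next(self._rr)
        if session_id is not None:
            self.sessions[session_id] = server
        return server
```

The `mark_down` method also shows the point made below: when a server dies in a load-balanced-but-not-clustered setup, its sessions simply vanish.
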
What does it mean from a user’s perspective? Which one is better?
Every time someone asks a generic question – which one is better? – the answer is invariably "it depends". This isn't political equivocation, but simply restating the fact that if one were better than the other in all circumstances, then the other wouldn't exist.
Clustering preserves the user's state and is more transparent to the user, but it is harder to set up and is very resource specific. Different application servers have different clustering protocols, and they don't necessarily work out of the box (don't believe any of that marketing fluff). Load balancing is comparatively more painless, and relatively more independent of application servers.
From a user's perspective, it means that if the user is doing something on the application and that server goes down, the user observes different behavior depending on whether the system is clustered or merely load balanced. If the system is clustered, the user may be able to continue the transaction and may not even realize that the server has gone down. If the system is load balanced without clustering, the user's state will likely be lost, and the user will simply be sent to another server to restart the transaction. The user has lost some work.
Hopefully that clarifies it somewhat.
