Re: Expected Failover Time and Configuration Limits.

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Expected Failover Time and Configuration Limits.
From: pb <peterbaitz@xxxxxxxxx>
Date: Tue, 16 Sep 2003 11:03:42 -0700 (PDT)

We use Red Hat Linux 7.3 with kernel 2.4.18 and
Piranha-0.7.0-3, and have four public VIPs which fail
over (plus the private NAT router VIP). Failover time
is 6-10 seconds with the pulse heartbeat, and all
VIPs fail over and fail back correctly. We run two
Piranha clusters, one for LDAP servers and one for
mail servers, with three servers per cluster.

Peter
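
For context, the failover window described above is set by
the pulse timing directives in Piranha's
/etc/sysconfig/ha/lvs.cf. A minimal fragment is sketched
below; the directive names follow the lvs.cf format, but the
addresses and values are illustrative rather than taken from
the cluster described above:

    # /etc/sysconfig/ha/lvs.cf (fragment) -- illustrative values only
    primary = 192.168.10.1          # primary director (example address)
    backup_active = 1
    backup = 192.168.10.2           # backup director (example address)
    heartbeat = 1                   # run the pulse heartbeat
    heartbeat_port = 539
    keepalive = 6                   # seconds between heartbeats
    deadtime = 10                   # seconds of silence before takeover
    network = nat
    nat_router = 192.168.10.254 eth1:1   # private NAT router VIP

deadtime is the main knob here: the backup waits that many
seconds of heartbeat silence before taking over the VIPs.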

--- ntadmin@xxxxxxxxxxxx wrote:
> Hello, a couple of questions...
> 
> We are running a dual LVS-NAT setup (two dual-733
> Dell systems with 1 GB of RAM each).
> Each machine has two active NICs, one on our
> 216.177.xxx.xxx public network and the other on the
> 192.168.xxx.xxx private network.
> 
> Red Hat 9 install with a modified kernel (uname -r >
> 2.4.20-18.7.hidd.ipvs109.cipe154smp).
> ipvsadm --version > ipvsadm v1.21 2002/11/12
> (compiled with popt and IPVS v1.0.9)
> 
> We are running heartbeat+mon for failover between
> directors.
> Our haresources file currently houses 144 IP
> addresses.
> We are also running ospf and zebra on the directors.
> 
> During our early testing and early production our
> failover was virtually unnoticeable.  Since we added
> ospf and zebra, as well as the majority of the
> entries in our haresources file, our failover time
> has climbed to roughly +/- 10 minutes.  Can anybody
> tell me whether that is normal for the amount of
> resources we are failing over, or whether it hints
> at a problem?  We have been able to shorten this
> time a little by clearing the ARP cache on our
> routers, but we do not know whether the problem is
> actually ARP.  Is there an easy way to tell if the
> gratuitous ARP is working?
> 
> We also noticed something that seemed quite strange:
> on the virtual interfaces (eth0:##), after about
> eth0:42 it begins skipping every other interface
> number (eth0:42, eth0:44, eth0:46, ...) and then
> sometime later switches to the odd-numbered
> interfaces and skips the evens.  So with 144 IPs the
> last entry in our eth0 series is eth0:237, and
> sometimes during a failover a few of the interfaces
> are not correctly brought down on the failing LVS
> node.
> 
> In addition, we have found that a few (3 or 4) of
> the virtual interfaces with real-world IPs are
> responding to ARP requests.  The only place on our
> network where these addresses exist is inside the
> load balancers.  It is my understanding that virtual
> interfaces should not respond to ARP requests.  Any
> insight or help you could provide would be greatly
> appreciated.  I can provide any information you
> might need to help us out.
> 
> Thank You
> Billy Olson
> Systems Administrator
> ReachONE Internet, Inc.
> ntadmin@xxxxxxxxxxxx
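
For anyone comparing setups: with heartbeat, every address
that has to move between directors is listed as an IPaddr
resource in /etc/ha.d/haresources, and heartbeat acquires
those resources one after another during a takeover, which
is one likely reason a very large haresources file stretches
the failover time. A minimal sketch follows; the node name,
addresses, and the mon entry are placeholders, not the
poster's actual file:

    # /etc/ha.d/haresources (sketch) -- placeholder node and addresses
    # Format: <preferred node> <resource> <resource> ...
    # Each IPaddr entry becomes one eth0:N (or eth1:N) alias on
    # whichever director currently holds the resources.
    # 'mon' assumes heartbeat also manages the mon init script.
    director1 IPaddr::216.177.0.10/24/eth0 IPaddr::216.177.0.11/24/eth0 IPaddr::192.168.0.254/24/eth1 mon

In a real file the 144 addresses would simply continue on
the same line, or be split across several resource groups
(one group per line).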
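
On the question of whether gratuitous ARP is working:
heartbeat's IPaddr script calls its send_arp helper when an
address is brought up, and those broadcasts can be watched
from any host on the same segment. The commands below are a
sketch; the interface name and address are placeholders for
your own values:

    # On a host (or router) in the same broadcast domain, watch
    # ARP traffic while forcing a failover; each VIP that moves
    # should show up as broadcast ARP packets announcing the new MAC.
    tcpdump -n -e -i eth0 arp

    # On the directors, list the aliases actually configured, to see
    # which addresses were (or were not) brought up or torn down:
    ip addr show dev eth0

    # From another host, check which MAC currently answers for a VIP
    # (substitute one of your real addresses for the placeholder):
    arping -I eth0 216.177.0.10

arping also shows which MAC is answering for a given
address, which helps track down aliases that were not
brought down correctly on the failing node.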

