Re: file-max/inode-max question

To: Joseph Mack <mack.joseph@xxxxxxx>
Subject: Re: file-max/inode-max question
Cc: LVS List <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Derek Glidden <dglidden@xxxxxxxxxxxxxxx>
Date: Tue, 05 Sep 2000 18:22:42 -0400
Joseph Mack wrote:
> 
> It's not easy being out in front :-)

Yeah, but it sure is more exciting.  :)
 
> I don't think many people are using VS-NAT for heavy duty LVS serving.
> Most are using VS-DR. However the real-servers don't really know whether
> they are in a VS-NAT or VS-DR LVS.

Hmm, I'll have to take a look at using DR and see if it's even viable for
our situation.  We needed to get the boxes up "real quick", so I just
picked NAT as the easiest config option to get them running on a very
short time-frame.
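
(For anyone following along: on the director the difference is pretty
much just the forwarding flag on each real-server rule - made-up
addresses below, but roughly:

    # VS-NAT: director masquerades traffic in both directions (-m)
    ipvsadm -a -t <VIP>:80 -r 10.0.0.10:80 -m -w 1
    # VS-DR: director forwards inbound packets only (-g); the real
    # server replies straight to the client, bypassing the director
    ipvsadm -a -t <VIP>:80 -r 10.0.0.10:80 -g -w 1

The catch with DR, as I understand it, is that the real servers then
have to carry the VIP themselves without answering ARP for it, and
doing that on NT boxes is exactly what I'd need to look into.)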
 
> You are changing the settings on the real-servers and not the director?

No, I'm changing the settings on the director boxes, because those are
the ones having the problems.  Maybe a bit more background is needed:

we needed to set up some load-balancer/failover boxes because our clients
use NT exclusively, and those servers keep crashing.  So they now have two
NT servers for each "service" (e.g. web, email, application, etc.) - a
primary and a backup.  For each service, the LVS boxen have an external
alias set up for the external VIP that clients connect to, with weighted
rules set up to redirect all traffic to the internal primary box, unless
it crashes, in which case traffic is redirected to the internal backup box
until the primary comes back up.
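
To make that concrete (the addresses here are made up, and I'm glossing
over whatever flips the weights on failure), each service boils down to
something like:

    # external alias for this service's VIP
    ifconfig eth0:1 192.0.2.10 netmask 255.255.255.255 up

    # weighted rules: everything goes to the primary NT box; the backup
    # sits at weight 0 until the monitoring side notices the primary
    # has died and swaps the weights
    ipvsadm -A -t 192.0.2.10:80 -s wrr
    ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.10:80 -m -w 100
    ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.11:80 -m -w 0

Multiply that by one alias and two rules per service and the table gets
long in a hurry.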

The problem is, as we turn up a large number of aliases/VIPs (for each
service we add, we have to add a new alias/VIP and a new IPVS rule for
each primary and backup server) and add new IPVS rules, we're seeing
file-nr and inode-nr numbers hit the roof and keep going.  Increasing
the values on the LVS boxen keeps them alive for a while longer, but it
seems the numbers keep climbing and will eventually take the box out. 
(At which point the backup LVS takes over until we can bring the primary
back to sanity.  At least THAT works reliably. :)  Increasing them to
really, really big values has kept them alive for a while now, but I'm
not sure this is reasonable, since we've backed the number of IPVS
rules down to just a couple of servers and we're still seeing some of
this behaviour.
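
For the record, the knobs I'm turning are just the /proc ones (the
numbers below are only examples, not what I'd call tuned values):

    # watch the handle/inode counters climb
    cat /proc/sys/fs/file-nr
    cat /proc/sys/fs/inode-nr

    # raise the ceilings (2.2 kernel; the docs suggest keeping
    # inode-max at roughly 3-4x file-max)
    echo 16384 > /proc/sys/fs/file-max
    echo 65536 > /proc/sys/fs/inode-max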

When all is said and done, we have a pretty big number of *distinct*
portfw and ipvs rules, as opposed to just a couple of rules that each
contain a lot of servers to load balance.  I don't know if we're unique
in using the LVS stuff this way or not...

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
With Microsoft products, failure is not           Derek Glidden
an option - it's a standard component.      http://3dlinux.org/
Choose your life.  Choose your            http://www.tbcpc.org/
future.  Choose Linux.              http://www.illusionary.com/

