Re: Re: tuning of LVS director for heavy traffic

To: "'lvs-users@xxxxxxxxxxxxxxxxxxxxxx'" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Re: tuning of LVS director for heavy traffic
From: "khiz nms" <khiznms@xxxxxxxxxxxxxx>
Date: 29 Jan 2002 11:07:25 -0000
Hi,
That was very quick, Julian, thank you.

By default, the route cache max size is 4096 on RH 6.2.
Do the people who run LVS directors actually increase the size of the route
cache? This is RH 6.2 with kernel 2.2.19.
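
For reference, the current values can be read straight out of /proc; these
are the paths I would expect on a 2.2.19 box (worth verifying locally):

    # inspect the routing cache tunables (2.2.x)
    cat /proc/sys/net/ipv4/route/max_size         # hard cap on cached entries
    cat /proc/sys/net/ipv4/route/gc_thresh        # GC kicks in above this
    cat /proc/sys/net/ipv4/route/gc_min_interval  # min seconds between GC runs
    cat /proc/sys/net/ipv4/route/gc_elasticity    # how aggressively GC prunes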

I got the following post from a list, which advises against changing the max
size value:
"Please, try to leave gc_thresh and max_size at their default values,
but to decrease gc_min_interval to 1 and to decrease gc_elasticity to
2,3 etc. It should help a bit. As extremal measure you may set
gc_min_interval to 0 and/or to increase gc_thresh. It is better to leave
max_size intact."
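
If I read that right, applying it on a 2.2 box would look roughly like this
(the values 1 and 2 come straight from the quote; nothing else is touched):

    # per the quoted advice: leave gc_thresh and max_size alone,
    # run garbage collection more often and prune more aggressively
    echo 1 > /proc/sys/net/ipv4/route/gc_min_interval
    echo 2 > /proc/sys/net/ipv4/route/gc_elasticity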


Do we also need to play with
/proc/sys/net/core/netdev_max_backlog?
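
If so, a quick check and an (arbitrary) bump would be something like:

    # packets queued on the device input queue before drops start;
    # 300 is the usual default, 1000 here is only an example value
    cat /proc/sys/net/core/netdev_max_backlog
    echo 1000 > /proc/sys/net/core/netdev_max_backlog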

TIA
Khiz

On Tue, 29 Jan 2002, Julian Anastasov wrote:
> 
>       Hello,
> 
> On 29 Jan 2002, khiz nms wrote:
> 
> > Hi All,
> > Before deploying an LVS director, are there any tuning
> > recommendations for a Linux box, something in terms of the
> > TCP/IP stack or general networking?
> >
> > I run a squid box (physically different), and sometimes I get
> > 'dst cache overflow' messages and the network just freezes, or
> > 'out of buffer space' messages.
> >
> > I want to avoid all such errors for the LVS director,
> > considering that it will be handling fairly heavy traffic.
> 
>       Then you will have the same error message with LVS. You
> need to increase the size of the routing cache hash table and
> its related parameters. No need for tuning the socket memory
> space. Maybe you can specify some CPU affinity for the network
> devices (2.4); maybe that can help.
> 
> > Hoping for some good advice from the gurus out here.
> >
> > TIA
> > Khiz
> 
> Regards
> 
> --
> Julian Anastasov <ja@xxxxxx>
> 
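
Following up on the CPU affinity suggestion for 2.4: as far as I know this is
done per IRQ through /proc/irq/<n>/smp_affinity on SMP machines; a rough
sketch (the IRQ number 18 below is made up, look up the real one in
/proc/interrupts):

    # find the NIC's IRQ, then bind it to CPU 1 (hex bitmask 0x2)
    grep eth0 /proc/interrupts
    echo 2 > /proc/irq/18/smp_affinity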
 


