
Re: [lvs-users] Table Insertion

To: Horms <horms@xxxxxxxxxxxx>
Subject: Re: [lvs-users] Table Insertion
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: Wensong Zhang <wensong@xxxxxxxxxxxx>
Date: Mon, 21 Feb 2000 16:11:49 +0800 (CST)


On Thu, 17 Feb 2000, Horms wrote:

> I assume that the timeout is tunable, though reducing the
> timeout could have implications for prematurely 
> dropping connections. Is there a possibility of implementing
> random SYN drops if too many SYN are received as I believe
> is implemented in the kernel TCP stack.
> 

Yup, I should have implemented random early drop of SYN entries a long
time ago, as Alan Cox suggested. Actually, it would be simple to add this
feature to the existing IPVS code, because the slow timer handler is
already activated every second to collect stale entries. I just need to
add some code to that handler: if over 90% (or 95%) of memory is used,
run drop_random_entry to randomly traverse 10% (or 5%) of the entries and
drop the SYN-RECV entries among them.
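To make the idea concrete, here is a minimal userspace sketch of such a
drop_random_entry pass, not the actual IPVS code: the table layout, the
state constants, and struct conn_entry are all illustrative. It walks a
fraction of the hash buckets and frees every entry still in SYN-RECV
state.

```c
#include <stdio.h>
#include <stdlib.h>

#define TAB_SIZE 16     /* illustrative hash-table size */
#define SYN_RECV 1
#define ESTABLISHED 2

/* Simplified connection entry; a real IPVS entry also holds
 * addresses, ports and timers. */
struct conn_entry {
    int state;
    struct conn_entry *next;
};

static struct conn_entry *table[TAB_SIZE];

/* Traverse 1/denom of the buckets, starting at a random one, and
 * unlink and free every SYN-RECV entry found there.  Returns the
 * number of entries dropped. */
static int drop_random_entry(int denom)
{
    int dropped = 0;
    int start = rand() % TAB_SIZE;

    for (int i = 0; i < TAB_SIZE / denom; i++) {
        int b = (start + i) % TAB_SIZE;
        struct conn_entry **pp = &table[b];

        while (*pp) {
            if ((*pp)->state == SYN_RECV) {
                struct conn_entry *dead = *pp;
                *pp = dead->next;      /* unlink from the chain */
                free(dead);
                dropped++;
            } else {
                pp = &(*pp)->next;     /* keep established entries */
            }
        }
    }
    return dropped;
}
```

The slow timer handler would call this with denom = 10 (or 20) only when
the memory-usage check fires, so under normal load no connection entry is
ever touched.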

> 
> I have a few more questions - isn't it wonderful the ideas
> that get thrown around on site.
> 
> 
> If persistent connections are being used and a client is
> cached but doesn't have any active connections does
> this count as a connection as far as load balancing,
> particularly lc and wlc is concerned. I am thinking
> no. This being the case, is the memory requirement for each
> client that is cached but has no connections 128bytes as
> per the memory required for a connection.
> 

The reason that the existing code uses one template and creates separate
entries for different connections from the same client is to manage the
state of each of those connections individually, and it was easy to add
seamlessly into the existing IP Masquerading code. If only one template
were used for all the connections from the same client, then when the box
received a RST packet it would be impossible to identify which connection
the packet belongs to.
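A rough sketch of the distinction, with hypothetical struct layouts and a
made-up match_rst helper (the real IP Masquerading entries carry much
more state): the persistence template only remembers the client-to-real-
server assignment, while each per-connection entry is additionally keyed
by the client port, which is what lets a RST be matched to exactly one
connection.

```c
#include <stddef.h>
#include <stdint.h>

/* Persistence template: remembers which real server a client address
 * was assigned to, but carries no per-connection TCP state. */
struct persist_template {
    uint32_t client_addr;
    uint32_t vserver_addr;
    uint32_t rserver_addr;
};

/* Per-connection entry: one per TCP connection, keyed by the client
 * port as well, so its state machine can be driven independently. */
struct masq_conn {
    uint32_t client_addr;
    uint16_t client_port;   /* distinguishes connections from one client */
    uint32_t rserver_addr;
    int      tcp_state;
};

/* Match an incoming RST to the one connection it belongs to.  With a
 * single shared template there would be no port to compare, and two
 * connections from the same client address could not be told apart. */
static struct masq_conn *
match_rst(struct masq_conn *conns, int n, uint32_t caddr, uint16_t cport)
{
    for (int i = 0; i < n; i++)
        if (conns[i].client_addr == caddr && conns[i].client_port == cport)
            return &conns[i];
    return NULL;
}
```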

> 
> Is the connection scheduling, rr, wlc... done on a per virtual
> or global basis. In either case would it be possible to
> specify scheduling pools such that a set of virtuals
> are scheduled together, independent of another set of virtuals.
> It would seem that for this to work the virtuals that
> are grouped together would need to have the 
> same set of real servers.
> 
> 
> Currently each virtual is defined to have a set of real servers.
> All server specifications, both real and virtual are done
> individually. The client I have been working with this
> week has a _very_ large number of virtual servers
> and a relatively small number of real servers. The virtual
> servers are set up so they share a common "pool" of
> real servers. So we have n virtual servers sharing the
> same m real servers where n >> m. It is possible to
> have different pools but a virtual will never use
> real servers from different pools. Would it be possible
> to aggregate server specifications - both real and virtual -
> to limit the number of values that need to be added using
> ipvsadm.
> 
> For instance something like:
> 
> Virtual server(s): 10.1.0.0/23
> Real server(s): 10.0.0.0/29
> 
> I guess what I am trying to say is can host entries be
> expanded out to include network entries. This would
> greatly simplify configuration on sites with a large
> number of virtuals and indeed a large number of real servers
> if the virtual servers have the same real servers.
> I believe that - in particular with direct routing this
> should be quite straight forward - as it should be
> a matter of making a bit masked match instead of
> an exact match and in the case where there are a large
> number of entries this should drastically reduce the number
> of comparisons that need to be made.
> 

Since a hash table of virtual services is used, lookup doesn't do a lot
of comparisons. :)
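The point can be illustrated with a small sketch (field names and the
hash function are mine, not the IPVS source): because the service is
found by hashing the (protocol, address, port) triple into a bucket,
only the entries chained in that one bucket are ever compared, so the
cost stays roughly constant no matter how many virtual services are
configured.

```c
#include <stddef.h>
#include <stdint.h>

#define SVC_TAB_BITS 8
#define SVC_TAB_SIZE (1 << SVC_TAB_BITS)

struct virtual_service {
    uint16_t protocol;
    uint32_t addr;                  /* IPv4 address of the virtual service */
    uint16_t port;
    struct virtual_service *next;   /* chain for bucket collisions */
};

static struct virtual_service *svc_table[SVC_TAB_SIZE];

/* Fold the (protocol, addr, port) triple into a bucket index. */
static unsigned svc_hash(uint16_t proto, uint32_t addr, uint16_t port)
{
    return (proto ^ addr ^ (addr >> 16) ^ port) & (SVC_TAB_SIZE - 1);
}

/* Exact-match lookup: only one bucket's chain is scanned. */
static struct virtual_service *
svc_lookup(uint16_t proto, uint32_t addr, uint16_t port)
{
    struct virtual_service *s = svc_table[svc_hash(proto, addr, port)];

    for (; s; s = s->next)
        if (s->protocol == proto && s->addr == addr && s->port == port)
            return s;
    return NULL;
}
```

Note this is exact-match hashing; the network-prefix matching Horms
proposes above would need either a masked comparison in the walk or one
probe per candidate prefix length.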

> I would be more than happy to work on patching the code
> to do this, but I would like some indication of whether
> or not it is possible/plausible.
> 

Well, I prefer the configuration to be per virtual service; a large
number of virtual services sharing the same real servers is probably an
exceptional case. :) Anyway, if your patch keeps the interface and code
neat, there is no problem adding it.

Regards,

Wensong



