Re: [lvs-users] Table Insertion

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: [lvs-users] Table Insertion
From: Horms <horms@xxxxxxxxxxxx>
Date: Thu, 17 Feb 2000 11:31:51 -0500
On Thu, Feb 17, 2000 at 05:29:27PM +0800, Wensong Zhang wrote:
> There is state management for the connection entries in the IPVS
> table. A connection in a given state has a corresponding timeout
> value; for example, the timeout of the SYN_RECV state is 1 minute,
> and the timeout of the ESTABLISHED state is 15 minutes (the default).
> Each connection entry occupies 128 bytes of effective memory.
> Supposing that there is 128 Mbytes of free memory, the box can hold
> 1 million connection entries. A SYN flood at a rate over 16,667
> packets/second can make the box run out of memory, and the
> syn-flooding attacker probably needs a T3 link or more to perform
> the attack. It is difficult to syn-flood an IPVS box. It would be
> much more difficult to attack a box with more memory.

Thanks, that makes a lot of sense, though of course a healthy
SYN flood could take up resources that would otherwise
be used for valid connections on a loaded machine. Having
said that, the machines in question have plenty of memory,
so I think we should be OK.
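
Just to sanity-check that arithmetic for myself, here is the
back-of-the-envelope version as a quick Python sketch of my own,
using the 128 byte entry size and 60 second SYN_RECV timeout quoted
above:

    # Rough check of the figures above: 128 MB of free memory,
    # 128 bytes per connection entry, 60 second SYN_RECV timeout.
    free_mem = 128 * 2**20          # bytes of free memory
    entry_size = 128                # bytes per connection entry
    syn_recv_timeout = 60           # seconds before a SYN_RECV entry expires

    max_entries = free_mem // entry_size        # 1048576, roughly 1 million
    flood_rate = 1000000 / syn_recv_timeout     # ~16,667 SYNs/second to keep
                                                # a million-entry table full

    print(max_entries, flood_rate)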

I assume that the timeout is tunable, though reducing the
timeout could have implications for prematurely
dropping connections. Is there a possibility of implementing
random SYN drops if too many SYNs are received, as I believe
is done in the kernel TCP stack?
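
To make that concrete, the sort of thing I have in mind is a
probabilistic drop of new entries once the table gets close to full.
A rough sketch in Python, not anything resembling the IPVS code, and
the threshold numbers are made up:

    import random

    TABLE_LIMIT = 1000000      # assumed maximum number of connection entries
    DROP_THRESHOLD = 0.9       # start dropping once the table is 90% full

    def accept_new_syn(current_entries):
        """Decide whether to create an entry for an incoming SYN.

        Below the threshold every SYN is accepted; above it, new SYNs
        are dropped with a probability that rises to 100% as the table
        fills. Legitimate clients that lose a SYN will retransmit it,
        so the cost to them is a delay rather than a lost connection.
        """
        load = current_entries / TABLE_LIMIT
        if load < DROP_THRESHOLD:
            return True
        drop_prob = (load - DROP_THRESHOLD) / (1.0 - DROP_THRESHOLD)
        return random.random() >= drop_prob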

> > A second, related question: if a packet is forwarded to
> > a server, and this server has failed and is subsequently
> > removed from the available pool using something like
> > ldirectord, is there a window where the packet
> > can be retransmitted to a second server? This would
> > only really work if the packet was a new connection.
> 
> Yes, it is true. If the primary load balancer fails over, all the
> established connections will be lost after the backup takes over. We
> probably need to investigate how to exchange the state (connection
> entries) periodically between the primary and the backup without too
> much performance degradation.

That would be nice, as from the end user's point of view it
would avoid a potential reload.


I have a few more questions - isn't it wonderful, the ideas
that get thrown around on site.


If persistent connections are being used and a client is
cached but doesn't have any active connections, does
this count as a connection as far as load balancing,
particularly lc and wlc, is concerned? I am thinking
no. This being the case, is the memory requirement for each
client that is cached but has no connections 128 bytes, as
per the memory required for a connection?
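
For what it's worth, my mental model of wlc is something like the
sketch below - only real connections feed into the load calculation
and persistence templates are deliberately excluded, which is exactly
the assumption I would like confirmed. The field names here are mine,
not IPVS's:

    def wlc_pick(servers):
        """Pick the real server with the lowest connections/weight ratio.

        Each server here has .active_conns, .weight and .templates;
        the point of the sketch is that .templates does not appear in
        the cost calculation at all.
        """
        best, best_cost = None, None
        for s in servers:
            if s.weight <= 0:
                continue            # zero-weight servers take no new work
            cost = s.active_conns / s.weight
            if best is None or cost < best_cost:
                best, best_cost = s, cost
        return best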


Is the connection scheduling - rr, wlc, ... - done on a per-virtual
or global basis? In either case, would it be possible to
specify scheduling pools such that a set of virtuals
is scheduled together, independent of another set of virtuals?
It would seem that for this to work the virtuals that
are grouped together would need to have the
same set of real servers.
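
Something like the following is what I mean by a scheduling pool: one
shared piece of scheduler state for a whole group of virtuals, so that
(for round robin, say) the rotation moves across the pool as a whole
rather than per virtual. Purely illustrative, and all the names are
mine:

    class RoundRobinPool:
        """One shared round-robin position for a group of virtual servers."""

        def __init__(self, real_servers):
            self.real_servers = real_servers   # common to every virtual in the pool
            self.position = 0

        def schedule(self, virtual, packet):
            # Which virtual the packet arrived on does not matter; the
            # rotation position is shared by the whole pool.
            server = self.real_servers[self.position]
            self.position = (self.position + 1) % len(self.real_servers)
            return server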


Currently each virtual is defined to have a set of real servers.
All server specifications, both real and virtual, are done
individually. The client I have been working with this
week has a _very_ large number of virtual servers
and a relatively small number of real servers. The virtual
servers are set up so they share a common "pool" of
real servers, so we have n virtual servers sharing the
same m real servers, where n >> m. It is possible to
have different pools, but a virtual will never use
real servers from different pools. Would it be possible
to aggregate server specifications - both real and virtual -
to limit the number of values that need to be added using
ipvsadm?

For instance something like:

Virtual server(s): 10.1.0.0/23
Real server(s): 10.0.0.0/29
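
To give an idea of the scale, expanding those two (made up) networks
into individual entries - which is effectively what has to be fed to
ipvsadm one at a time today - looks something like this:

    import ipaddress

    # Example networks from above: ~510 usable virtual addresses and
    # 6 usable real server addresses.
    virtuals = ipaddress.ip_network("10.1.0.0/23").hosts()
    reals = list(ipaddress.ip_network("10.0.0.0/29").hosts())

    for vip in virtuals:
        for rip in reals:
            # One entry per (virtual, real) pair today; an aggregated
            # specification would collapse all of these into one rule.
            print("virtual", vip, "-> real", rip)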

I guess what I am trying to say is: can host entries be
expanded out to include network entries? This would
greatly simplify configuration on sites with a large
number of virtuals, and indeed a large number of real servers,
if the virtual servers share the same real servers.
I believe that - in particular with direct routing - this
should be quite straightforward, as it should be
a matter of making a bit-masked match instead of
an exact match, and in the case where there are a large
number of entries this should drastically reduce the number
of comparisons that need to be made.
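
The lookup change I have in mind is roughly the following: a masked
comparison against a short list of network entries rather than an
exact-match entry per individual virtual address. Again, just a
sketch of the idea, not the IPVS table code:

    import ipaddress

    # Hypothetical aggregated virtual service table: one entry per network.
    VIRTUAL_NETWORKS = [
        (ipaddress.ip_network("10.1.0.0/23"), "pool-a"),
        (ipaddress.ip_network("10.2.0.0/24"), "pool-b"),
    ]

    def lookup_virtual(dest_ip):
        """Masked match: is dest_ip covered by any configured network?

        With exact-match entries there is one table entry per virtual
        address; with network entries there is one comparison per
        configured network, which is what cuts the number of
        comparisons when n is large.
        """
        addr = ipaddress.ip_address(dest_ip)
        for net, pool in VIRTUAL_NETWORKS:
            if addr in net:        # (addr & netmask) == network address
                return pool
        return None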

I would be more than happy to work on patching the code
to do this, but I would like some indication of whether
or not it is possible/plausible.


Lucky last...
On the topic of inserting a large number of values using ipvsadm:
what is the limit on the number of entries that can be entered,
and can this be increased? I seem to run into a problem somewhere
between 16k and 27k entries. I am assuming that the entries
entered are stored in different data structures from the current
connections, but a colleague of mine is having trouble finding the
data structures in the source. Of course this wouldn't
be an issue if we could aggregate entries.


-- 
Horms

----------------------------------------------------------------------
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
To unsubscribe, e-mail: lvs-users-unsubscribe@xxxxxxxxxxxxxxxxxxxxxx
For additional commands, e-mail: lvs-users-help@xxxxxxxxxxxxxxxxxxxxxx
