Re: splitting up a packet stream

To: Joseph Mack <mack@xxxxxxxxxxx>
Subject: Re: splitting up a packet stream
Cc: Chris Anderson <chris.anderson@xxxxxxxxxx>, LVS <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Julian Anastasov <uli@xxxxxxxxxxxxxxxxxxxxxx>
Date: Sat, 29 Jul 2000 11:40:25 +0300 (EEST)
        Hello,

On Fri, 28 Jul 2000, Joseph Mack wrote:

> > > >         For UDP the picture is different. We can remove the implicit
>
> ah, we can (in principle, if we wanted to) remove
> 
> 
> >     Yes, this is the current handling. My thought was to implement
> > a new feature: schedule each UDP packet to a new real server. But it
> > is not possible for TCP. I.e. something like timeout=~0 for UDP
> > as a service flag.
> 
> why was it done the way it is now? To save time making and deleting
> hash table entries?

        For masquerading purposes: to remember the internal and
external host, i.e. to handle both the request and the reply. For
VS/DR and VS/TUN the default UDP timeout looks useless; the desired
effect can be achieved with the persistent flag instead. The UDP
timeout can be changed using ipchains, but that change hurts all UDP
services.
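
        A rough sketch of that bookkeeping, with invented names (this
is not the real masquerading code), just to show why VS/NAT has to
remember the chosen real server while VS/DR and VS/TUN do not:

/* Toy model of the bookkeeping described above: VS/NAT keeps one
 * entry per UDP CIP:CPORT so the reply from the chosen real server
 * can be rewritten back to VIP:VPORT.  All names are invented for
 * illustration; this is not the real masquerading code. */
#include <stdio.h>
#include <time.h>

#define UDP_TIMEOUT 300               /* seconds, a typical default */

struct udp_entry {
    const char *cip, *vip, *rip;      /* client, virtual, real address */
    unsigned    cport, vport, rport;
    time_t      expires;              /* refreshed by every packet     */
};

int main(void)
{
    /* Request CIP:CPORT -> VIP:VPORT arrives, the scheduler picks a
     * real server once, and the entry is created. */
    struct udp_entry e = {
        "10.0.0.1", "192.168.0.1", "192.168.1.2",
        5000, 53, 53,
        time(NULL) + UDP_TIMEOUT
    };

    /* Reply direction: a packet from RIP:RPORT to CIP:CPORT matches
     * the entry and its source is rewritten back to VIP:VPORT.
     * Without the entry the director could not de-masquerade the
     * reply, which is why VS/NAT needs this state. */
    printf("reply %s:%u, rewrite source to %s:%u for client %s:%u\n",
           e.rip, e.rport, e.vip, e.vport, e.cip, e.cport);

    /* For VS/DR and VS/TUN the reply bypasses the director, so the
     * same entry only pins the client to one real server until the
     * timeout expires. */
    return 0;
}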

        So, currently, the missing feature in LVS is a way to mark a
UDP service so that each packet is scheduled to a different real
server. It may be difficult for VS/NAT and may be dangerous: we can
expect problems when the application retransmits requests. And I'm
wondering whether there are applications that require such a feature.
Is DNS in this group? It looks like this feature is useful only in
situations where one UDP CIP:CPORT sends requests at a high rate and
the application protocol allows such scheduling, for example when the
client is on the same LAN as the cluster hosts. And when this is our
only UDP client!!! In this case LVS is unusable.
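
        Something like this, as a toy model only (the flag name and
the structures are invented here and are not part of the current LVS
code): a per-service flag that sends every UDP packet through the
scheduler and never creates an entry, usable for VS/DR and VS/TUN.

/* Minimal sketch of the proposed feature: a hypothetical flag
 * IP_VS_SVC_F_ONEPACKET that makes the director pick a real server
 * for every UDP packet instead of creating a connection entry.
 * Names and structures are invented for illustration. */
#include <stdio.h>

#define IP_VS_SVC_F_ONEPACKET 0x0004   /* hypothetical flag value */

struct real_server { const char *rip; int weight; };

struct udp_service {
    unsigned flags;
    struct real_server *dests;
    int ndests;
    int rr_cursor;                     /* the only state RR needs */
};

/* Per-packet path for VS/DR or VS/TUN: route the packet and forget it. */
static struct real_server *schedule_packet(struct udp_service *svc)
{
    struct real_server *dest = &svc->dests[svc->rr_cursor];

    svc->rr_cursor = (svc->rr_cursor + 1) % svc->ndests;
    return dest;          /* no hash-table entry, no timeout to expire */
}

int main(void)
{
    struct real_server rs[] = { { "192.168.1.1", 1 }, { "192.168.1.2", 1 } };
    struct udp_service svc = { IP_VS_SVC_F_ONEPACKET, rs, 2, 0 };

    if (!(svc.flags & IP_VS_SVC_F_ONEPACKET))
        return 0;     /* normal path would create a connection entry */

    /* One busy CIP:CPORT now spreads over all real servers ...      */
    for (int i = 0; i < 4; i++)
        printf("packet %d -> %s\n", i, schedule_packet(&svc)->rip);

    /* ... but a retransmitted request may land on a different real
     * server than the original, which is the danger noted above.    */
    return 0;
}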

        We know that the local UDP port is not autoselected by
cycling through the local port range as it is for TCP. It is possible
for one application to allocate the same UDP port just released by a
previous application. And sometimes real servers are excluded from
the service. This is very bad when a long UDP timeout is used: the
client will be blocked for the specified period.

        So, the question is: is it really useful to allow one UDP
socket to be served by all real servers when the application allows
this, i.e. one request followed by one reply, with each request
packet independent from the following requests? But this looks ugly,
especially for LVS/NAT. Maybe such a virtual service can't create
many CIP:CPORT->VIP:VPORT entries, i.e. one for each different real
server. We can just forward the packets in VS/DR and VS/TUN without
creating entries; the problem is for LVS/NAT. And maybe only RR and
WRR can work with such per-packet scheduling without creating
entries. Any ideas?
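
        To illustrate why only the stateless schedulers would
qualify, here is a toy comparison (invented names, not the real
scheduler code): RR needs only a cursor in the service, while a
least-connection style pick depends on per-destination connection
counters that are only maintained when entries are created and
expired, so without entries it degenerates.

/* Sketch: round-robin needs only a cursor, while an LC style pick
 * relies on tracked connection counts.  All names are invented. */
#include <stdio.h>

struct dest { const char *rip; int weight; int conns; };

/* RR: no per-connection state required at all. */
static int rr_pick(int *cursor, int ndests)
{
    int i = *cursor;
    *cursor = (*cursor + 1) % ndests;
    return i;
}

/* LC: picks the destination with the fewest tracked connections.
 * Without connection entries d->conns never moves, so every packet
 * goes to the first destination: the scheduler degenerates. */
static int lc_pick(struct dest *d, int ndests)
{
    int best = 0;
    for (int i = 1; i < ndests; i++)
        if (d[i].conns < d[best].conns)
            best = i;
    return best;
}

int main(void)
{
    struct dest d[] = { { "192.168.1.1", 1, 0 }, { "192.168.1.2", 1, 0 } };
    int cursor = 0;

    for (int i = 0; i < 4; i++)
        printf("rr -> %s, lc -> %s\n",
               d[rr_pick(&cursor, 2)].rip, d[lc_pick(d, 2)].rip);
    return 0;
}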

> 
> Joe
> 
> --
> Joseph Mack mack@xxxxxxxxxxx
> 


Regards

--
Julian Anastasov <uli@xxxxxxxxxxxxxxxxxxxxxx>
