Re: How can i run win 2000 as real server in Direct Routing?

To: Wensong Zhang <wensong@xxxxxxxxxxxx>
Subject: Re: How can i run win 2000 as real server in Direct Routing?
Cc: <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Julian Anastasov <ja@xxxxxx>
Date: Fri, 5 Jan 2001 00:44:43 +0000 (GMT)
        Hello,

On Thu, 4 Jan 2001, Wensong Zhang wrote:

> Hi,
>
> From the implementation viewpoint, Windows Load Balancing Service
> implements a local filter between the NIC driver and the TCP/IP stack.
> There is a filtering function which can map incoming packets to a
> cluster node based on the source IP address and port number, and only
> one node passes each packet to the upper layer. If some node fails or
> new nodes are added, all the cluster nodes need to negotiate a new
> filtering function. I guess that each node may keep state for its
> established connections, so that even under a new filtering function
> the packets destined for the local node can still be passed to the
> upper layer; the new filtering function only applies to new
> connections (SYN packets). However, for persistent services, the new
> filtering function will break all the persistence, no matter whether
> the connections belong to the alive nodes or the failed nodes. It
> affects all the nodes. I see that's the big shortcoming, but the
> distributed filter architecture can avoid the failure of a dispatcher.
>
> It is simple to write a filter, but it must be complicated to write
> the convergence stuff (negotiating a new function).
>
> I see that maybe we can learn something from it and investigate some
> mechanism to implement active-active load balancers.
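
To make the filtering idea above concrete, here is a minimal sketch of
that kind of stateless local filter. It is only an illustration, not the
actual WLBS code; the hash, the bucket count and the bucket-to-node map
are assumptions. Every node sees every packet for the cluster address,
computes the same hash over the client address and port, and passes the
packet up only if it owns the resulting bucket. Renegotiating the
bucket_owner map is the "convergence" step described above.

    #include <stdint.h>

    #define NLB_BUCKETS 60          /* assumed bucket count */

    /* bucket -> owning node; every node must hold the same map and
     * renegotiate it when nodes join or fail */
    static uint8_t bucket_owner[NLB_BUCKETS];

    /* any hash works, as long as every node uses the same one */
    static unsigned int flow_bucket(uint32_t saddr, uint16_t sport)
    {
            return (saddr ^ (saddr >> 16) ^ sport) % NLB_BUCKETS;
    }

    /* run on every node, for every packet sent to the cluster address:
     * pass it to the local TCP/IP stack only if this node owns the flow */
    static int accept_locally(uint32_t saddr, uint16_t sport, uint8_t my_node)
    {
            return bucket_owner[flow_bucket(saddr, sport)] == my_node;
    }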

        Hm, interesting reading. I'm thinking about the load
percentages and whether collisions or other factors can lead to
situations where two real servers reply to the same request.

        I'm looking at my stats for the banner servers that serve only
static images, LVS/DR. Wow, I have the stats in packets/sec and not in
bytes/sec. Can you believe it, the input packets are 90% of the output
packets. We are not even talking about bytes. So, if a web server
receives 90 packets and sends 100 packets in LVS/DR, and there are 10
real servers, each web server in WIN2K/NLB mode will receive 90*10
packets and will send 100 packets, 9:1, a picture very different from
the assumption of short web requests and long answers. I'm not sure
whether the packet size matters for the handling. IMO, in LVS/DR we
don't even waste CPU cycles on checksumming when forwarding the
packets. On the real servers, maybe cards with hardware checksumming
can help the WIN2K/NLB mode not to waste CPU cycles checksumming the
(N-1)/N of the incoming packets, i.e. the packets that will not be
accepted locally.
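
        To put rough numbers on that (taking the 90:100 packet ratio
above, 10 real servers and an even load, purely for illustration):

    per real server, LVS/DR:  in = 90,            out = 100   (~0.9:1)
    per real server, NLB:     in = 10*90 = 900,   out = 100   (9:1)

i.e. in NLB mode every node has to look at, and discard most of, the
whole cluster's input.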

        OK, now I'm looking at my other web servers, the ones that can
connect to the databases. Can you believe it, the input packets are 98%
of the output. Again, all hosts are in an LVS/DR setup, but this is not
only LVS/DR traffic to/from the clients.

        So, it seems all my real servers have roughly equal numbers of
incoming and outgoing packets. If I have 32 real servers, then for
every 32 packets a node receives it would send only one output packet
in WIN2K/NLB mode. Oh yes, there are full-duplex links too.
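
        In general (a rough estimate, assuming an even load over N
nodes), the per-node in:out packet ratio in NLB mode is about

    N * (cluster input packets / cluster output packets)

so with input roughly equal to output and N = 32 it is about 32:1,
while in LVS/DR it stays near 1:1.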

        Guys, what do your stats show for the incoming and outgoing
packets on your real servers? And only for LVS/DR traffic, i.e. static
web for example, or traffic that includes packets to and from the
client only. Are my assumptions correct? Maybe for FTP the picture
will be different, i.e. a small request followed by long data. But
there are ACKs too, not every packet contains data. Still, the
incoming packets must be few compared to the outgoing packets in long
FTP downloads, you know, delayed ACKs, etc.
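
        If you want to check, the per-interface RX/TX packet counters
are in /proc/net/dev on Linux. A small throwaway reader (just a sketch;
the default interface name "eth0" and the 16-counter layout of the
2.2+ kernels are assumptions):

    #include <stdio.h>
    #include <string.h>

    /* print the RX/TX packet counters of one interface from /proc/net/dev */
    int main(int argc, char **argv)
    {
            const char *ifname = argc > 1 ? argv[1] : "eth0";
            char line[512];
            unsigned long v[16];
            FILE *fp = fopen("/proc/net/dev", "r");

            if (!fp)
                    return 1;
            while (fgets(line, sizeof(line), fp)) {
                    char *colon = strchr(line, ':');

                    if (!colon)
                            continue;       /* the two header lines */
                    *colon = '\0';
                    if (!strstr(line, ifname))
                            continue;
                    /* 8 receive counters, then 8 transmit counters */
                    if (sscanf(colon + 1, "%lu %lu %lu %lu %lu %lu %lu %lu"
                               " %lu %lu %lu %lu %lu %lu %lu %lu",
                               &v[0], &v[1], &v[2], &v[3], &v[4], &v[5],
                               &v[6], &v[7], &v[8], &v[9], &v[10], &v[11],
                               &v[12], &v[13], &v[14], &v[15]) == 16)
                            printf("%s: RX %lu packets, TX %lu packets\n",
                                   ifname, v[1], v[9]);
            }
            fclose(fp);
            return 0;
    }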

> Cheers,
>
> Wensong


Regards

--
Julian Anastasov <ja@xxxxxx>


