[lvs-users] Re: IPVS Benchmarking

To: Joseph Mack <mack@xxxxxxxxxxx>
Subject: [lvs-users] Re: IPVS Benchmarking
Cc: Horms <horms@xxxxxxxxxxxx>, lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: Julian Anastasov <uli@xxxxxxxxxxxxxxxxxxxxxx>
Date: Sun, 23 Jan 2000 09:20:25 +0200 (EET)
        Hi Joe,

On Sat, 22 Jan 2000, Joseph Mack wrote:

> On Wed, 12 Jan 2000, Horms wrote:
> 
> > >   Why do you think that NAT is so slow? Any benchmarks?
> > 
> > Not yet, I am hoping to be able to get some numbers next week.
> 
...
> Switch/Network:
> 
> The realservers (with EEPRO100 NIC) connected through the 
> switch to the director and client (both with FA310TX NICs).
> The maximum throughput of client-realserver would be 50Mbs. 
> The connection through the switch had a measured latency 
> of 0.3msec with throughput of 50Mbs, indicating that the 
> switch was not a rate limiting step in the connection. 
> Supposedly this 8-port switch can handle 4x100Mbps 
> connections and would not be running near to its capacity.
> 
> Test setup:
> 
> The tests used 1 client, a director with 1 NIC and 1..6
> realservers. The throughput was tested 3 ways by
> connecting from the client to the realserver(s)
> 
> 1. directly
> 2. by VS-NAT 
> 3. by VS-DR
> 

        The problem in your setup is that in VS/NAT mode the Director is
used with only 1 NIC. Note that the requests from the client and the
answers from the real servers each use the same link twice:

Request:
client -> director
director -> real (in demasq direction)

Answer:
real -> director (its default gateway)
director -> client (in masq direction)

> Conclusion:
>
> With VS-NAT, the director saturates on a 50Mbps link 

        In this test, even if you use 100Mbps NICs, the max throughput
measured from the applications can only be about 50Mbps: the masq and the
demasq direction share the single link.
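
        To put numbers on it (my accounting, assuming the NIC and its
switch port effectively carry about 100Mbps for both directions together,
and ignoring protocol overhead): if the applications move R Mbps of
requests and A Mbps of answers, the single Director link carries

(R + A) in  +  (R + A) out  =  2 x (R + A)

so R + A can be at most about 50Mbps, whatever the real servers can do.
With 2 NICs the same traffic is split over two links and the limit
roughly doubles.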

        Please redo your VS/NAT test using 2 NICs: 1 link to the client
and 1 link to the real servers. Make sure that the client and the real
servers reach the Director through different NICs:

For example:

Client:
eth0: 192.168.0.1

Director:
eth0: 192.168.0.2/24 (client net)
eth0:0 192.168.0.3/32 VIP
eth1: 192.168.1.2/24 (server net)
masq 192.168.1.0/24 (masq the servers)
default gw 192.168.0.100 through eth0
VS configuration: VS/NAT 192.168.0.3 -> 192.168.1.*

Real server(s):
eth0: 192.168.1.4/24 ...
default gateway: 192.168.1.2

        Is this setup correct? It is the default MASQ setup for max
throughput. The path for the packets is the same as the path for VS/DR and
VS/TUN: each packet (in any direction) uses one link only once.
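
        For completeness, a sketch of how that Director setup could be
entered on a 2.2 kernel with ipvsadm and ipchains (my guess at the
concrete commands; the :80 port and the rr scheduler are only examples,
and the last line is repeated for every real server):

ifconfig eth0 192.168.0.2 netmask 255.255.255.0 up
ifconfig eth0:0 192.168.0.3 netmask 255.255.255.255 up
ifconfig eth1 192.168.1.2 netmask 255.255.255.0 up
route add default gw 192.168.0.100
echo 1 > /proc/sys/net/ipv4/ip_forward
# masq the server net (the "masq 192.168.1.0/24" line above)
ipchains -A forward -s 192.168.1.0/24 -j MASQ
# the virtual service on the VIP, real server(s) added in masq (NAT) mode
ipvsadm -A -t 192.168.0.3:80 -s rr
ipvsadm -a -t 192.168.0.3:80 -r 192.168.1.4:80 -m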

        The above setup is meant to test very high input traffic (large
requests). What about testing requests whose size is only 10-20% of the
size of the answers (httpd traffic): what is the difference between VS/NAT
and VS/DR even when the Director has only one NIC? The two NICs only
increase the request throughput for VS/NAT. So maybe the throughput
depends on the size of the request and the size of the answer.
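
        Rough numbers from my side, only to illustrate: take answers of
A Mbps and requests of 0.15 x A (15%, inside the 10-20% httpd range).
With one NIC in the Director:

VS/NAT: the Director link carries 2 x (A + 0.15 x A) = 2.3 x A
VS/DR:  the Director link carries only the request twice = 0.3 x A
        (the answers go from the real servers straight to the client)

so for httpd-like traffic the load on the Director link differs by a
factor of 7-8 between VS/NAT and VS/DR, even with one NIC.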

        What does a throughput of 120 Mbps for the real servers in the
table mean? Is the limit 100Mbps for the NICs, i.e. input+output in full
duplex? There is something in this table I don't understand. You report
that the throughput from the client to the real server (directly?) is
50Mbps. If the size of the request equals the size of the answer, the max
throughput can be 50Mbps (as reported on the real server, half-duplex).

Regards,

Julian Anastasov


