Re: testlvs - 2nd question

To: Thomas Proell <Thomas.Proell@xxxxxxxxxx>
Subject: Re: testlvs - 2nd question
Cc: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: Julian Anastasov <ja@xxxxxx>
Date: Thu, 9 Nov 2000 08:52:32 +0000 (GMT)
        Hello,

On Wed, 8 Nov 2000, Thomas Proell wrote:

> Hi!
>
> Testlvs is up and running now, bringing an average of 2600 packets
> per second on Pentium machines under TCP.
> At 40 bytes per packet, that makes only about 100 kBps on a 10 Mbps
> line (yes, ten Mega :)

        The numbers are close to what I would expect.
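
        A quick back-of-the-envelope check (assuming plain 10 Mbps
Ethernet):

    2600 packets/s x 40 bytes = ~104 kB/s = ~0.8 Mbps of payload

and a 40-byte packet still occupies the 64-byte minimum Ethernet
frame plus preamble and inter-frame gap (~84 bytes on the wire):

    2600 packets/s x 84 bytes x 8 = ~1.7 Mbps

so the 10 Mbps line is nowhere near saturated; the limit is the
per-packet processing cost, not the bandwidth.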

> Adding load on any of the machines doesn't change the performance.
>
> Where's the bottleneck? The NICs? The collisions?

        You can test with:

1. ISA/PCI NICs
2. switched/shared hubs
3. 10/100Mbps network
4. many hosts running testlvs
5. different/faster CPUs

        If you vary these components one at a time, you can see where
the bottleneck is.

> How does testlvs work? How can I influence the number of active
> connections? I can influence the number of simulated clients,

        You can't. There are no established TCP connections with
testlvs. testlvs loads the LVS only with inactive connections, but
"inactive" does not mean you receive no packets for them. Maybe the
name "inactive" is misleading; it is hard to pick a better name for
these counters, because you cannot say in which TCP state a
connection will receive the most packets. LVS treats the established
connections as active because that state is assumed to be the one
that loads the real service. But testlvs does not load the real
service, and you cannot easily make LVS enter the established TCP
state: you would have to talk real TCP, which is not possible with a
packet generator. Maybe you are worried about how the packets will
be scheduled considering the number of "active" connections. In the
testlvs case the "inactive" connections can be very active, believe
me :)

> but this is a different question.
>
>
> Thomas


Regards

--
Julian Anastasov <ja@xxxxxx>


