RE: Testprogram

To: " users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: RE: Testprogram
From: "Jernberg, Bernt" <Bernt.Jernberg@xxxxxxxxx>
Date: Mon, 24 Feb 2003 10:05:52 +0100
> I'm guessing that you wish to test the output of the 
> whole cluster 
> here, not just the director...

Yes, correct.

> How do you mean that the load 'does not balance evenly' - are 
> the load values 
> on the servers drastically different? or are you looking at 
> the ActiveConns 
> data in ipvsadm -L ?

Looking at the output of ipvsadm -L.

The distribution between the three was roughly 3000, 1500 and 1400 active 
connections. One realserver seemed to hold a constant number of active 
connections with very few or zero inactive ones, which puzzles me. 

> If it's the former, could you describe 
> the cluster? 
> (filesystem replication?, identical machines?, etc)

The three realservers are configured identically. I can't reveal anything about 
the configuration, other than that they're running some kind of *x :).

Before testing, I ran ipvsadm --set 300 300 300 on both LVS nodes. The 
persistence timeout specified in keepalived.conf is 30s. Was that the wrong 
thing to do? 
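For reference, the two settings look roughly like this (addresses and names are 
placeholders, not our real configuration; this is a sketch from memory):

```
# ipvsadm --set <tcp> <tcpfin> <udp> adjusts the connection-table
# timeouts, NOT the persistence timeout:
ipvsadm --set 300 300 300

# keepalived.conf fragment -- persistence comes from here:
virtual_server fwmark 1 {
    delay_loop 6
    lb_algo wlc
    lb_kind NAT
    persistence_timeout 30
    protocol TCP

    real_server 192.168.1.11 0 {
        weight 1
    }
}
```

So as I understand it, the 300s values govern how long established/finished 
TCP and UDP entries stay in the table, while the 30s persistence_timeout 
decides how long a client IP keeps being sent to the same realserver.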

> The actual allocation of connections will depend on the 
> scheduler - which one 
> are you using? (rr/wrr/lc/wlc/etc?).

I have tested with both wrr and wlc. We're using fwmark with FTP so, as far as 
I know, testing from one IP address within the persistence timeout will send 
all new connections to the same realserver? We performed one of the tests by 
looping wget, downloading small and large files at random.
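The wget loop was along these lines (the VIP and file names are placeholders 
for illustration; the loop only runs if a VIP is passed as an argument):

```shell
#!/bin/bash
# Sketch of the test loop. small/medium/large.bin are placeholder names.
FILES=(small.bin medium.bin large.bin)

# pick one of the test files at random
pick_file() {
    echo "${FILES[RANDOM % ${#FILES[@]}]}"
}

# hammer the cluster VIP in a loop, if a VIP was given on the command line
if [ -n "$1" ]; then
    for i in $(seq 1 1000); do
        wget -q -O /dev/null "http://$1/$(pick_file)"
    done
fi
```

Of course, with the 30s persistence all requests from one source address land 
on the same realserver, so we ran copies of this from several client machines.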

I'd like to simulate >10000 simultaneous users, both from slow modem 
connections and fast LAN ones.

> httperf will do both - it is similar to apachebench, but 
> allows finer control 
> over the request load and gives lots of detail about the response 
> characteristics. you can get httperf from 
> O'Rourke and Keefe have written a paper on benchmarking LVS - 
> . It 
> provides a good 
> amount of detail about how they performed their tests.

Thanks, I will check that out.
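A first httperf attempt might look like this (the VIP and URI are placeholder 
assumptions on my part; as far as I can tell httperf doesn't emulate 
modem-speed links by itself, so the slow clients would need separate 
throttling):

```shell
#!/bin/bash
# Sketch of an httperf run against the cluster (placeholder VIP and URI).
VIP=${VIP:-10.0.0.1}

if command -v httperf >/dev/null 2>&1; then
    # 10000 connections in total, opened at 200 new connections per second
    httperf --server "$VIP" --port 80 --uri /small.bin \
            --num-conns 10000 --rate 200 --timeout 5
else
    echo "httperf not installed; the command above is only a sketch"
fi
```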

Regards /Bernt