Re: [lvs-users] Unbalanced Real Servers

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] Unbalanced Real Servers
From: Götz Rieger <goetz.rieger@xxxxxx>
Date: Wed, 29 Apr 2009 18:26:07 +0200
Hi Graeme,

Graeme Fowler wrote:
> On Wed, 2009-04-29 at 15:05 +0200, "Götz Rieger" wrote:
>> I'm doing a load test now using The Grinder with four load test clients. The 
>> problem I'm facing is an extremely uneven distribution of ActiveConn on the 
>> real servers:
> 
> OK...
> 
>> [root@lvst1 ~]# ipvsadm -L -n
>> IP Virtual Server version 1.2.1 (size=4096)
>> Prot LocalAddress:Port Scheduler Flags
>>   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
>> TCP  194.xxx.xxx.xxx:80 rr
>>   -> 192.168.xxx.xxx:80            Masq    1      137        421
>>   -> 192.168.xxx.xxx:80            Masq    1      203        509
>>   -> 192.168.xxx.xxx:80            Masq    1      0          630
> 
> A larger number of ActiveConns indicates a machine which is slower to
> respond (that is, these connections have not reached TIME_WAIT or
> FIN_WAIT), or a machine which has KeepAlive switched on in Apache.

Sure, but the servers and the OS are configured exactly the same. And 
Apache _is_ the same on all nodes because it resides on the NFS share...

KeepAlive is actually switched on, but again on all web servers.
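
To rule KeepAlive out as a factor, I may do one run with it switched off 
on all nodes. These are the standard Apache directives; the timeout value 
below is just an example, not what we currently run:

  KeepAlive Off

or alternatively keep it on but shorten the window, so connections get 
reclaimed faster:

  KeepAlive On
  KeepAliveTimeout 2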

So I really don't understand this behaviour right now. If the uneven 
distribution always followed the same pattern, that would make sense. But 
the servers getting high/low load keep swapping around.

> Of more significance to you during testing would be:
> 
> ipvsadm -L -n --stats
> ipvsadm -L -n --rate

I had a look at the --stats output as well. It looks like the throughput 
is nearly the same for the real servers.
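
For the record, I have been sampling roughly like this during the runs 
(the interval is arbitrary):

  watch -n 1 'ipvsadm -L -n --stats'
  watch -n 1 'ipvsadm -L -n --rate'

The per-server InBytes/OutBytes counters (and InBPS/OutBPS in the --rate 
view) are what looked nearly identical across the three real servers.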

What concerns me is that the web server(s) with the high ActiveConns are 
actually heavily loaded while the other(s) are idle... and that makes the 
whole thing a problem.

It might be some intricacy of TCP/IP I don't know about, or the use of 
NFS... but I don't get it.
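
One thing I still want to try is to compare the director's ActiveConn 
with what the real servers themselves see, e.g. on each web server (plain 
netstat; the awk just counts TCP states on port 80):

  netstat -ant | awk '$4 ~ /:80$/ {print $6}' | sort | uniq -c

If the node the director thinks is busy really has hundreds of 
ESTABLISHED sockets, at least I'd know the counters aren't lying.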

> BTW I presume you meant "lc" scheduler rather than "ll"?

Err, yes. :-)

Those load test runs were actually OK, with the ActiveConns staying the 
same all the time... so should I go with lc?
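
If so, I assume I can switch the live service over without tearing it 
down, something like (VIP masked as above):

  ipvsadm -E -t 194.xxx.xxx.xxx:80 -s lc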


Thanks for replying.

Goetz

> 
> Graeme


_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/

LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users
