Re: Overloaded connection limit

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Overloaded connection limit
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Tue, 03 Aug 2004 10:17:40 +0200
Hi,

> A number of the schedulers seem to use an is_overloaded() function that limits the number of connections to twice the server's weight.

For the sake of discussion I'll be referring to the 2.4.x kernel. We would be talking about this piece of jewelry:

static inline int is_overloaded(struct ip_vs_dest *dest)
{
        if (atomic_read(&dest->activeconns) > atomic_read(&dest->weight)*2) {
                return 1;
        }
        return 0;
}

And I'm a bit unsure about the semantics of this is_overloaded() regarding its mathematical background. Wensong, what was the reason to arbitrarily use twice the amount of activeconns as the overload criterion?

1. dest->activeconns has such a short life span that it hardly
   represents or reflects the current RS load in any way I can imagine.
2. 2.4.x and 2.6.x differ in what they consider a destination to be
   overloaded. IP_VS_DEST_F_OVERLOAD is set when ip_vs_dest_totalconns
   exceeds the upper threshold limit and totalconns means currently
   active + inactive connections which is also kind of unfortunate. And
   yes, there is some more code I haven't mentioned yet.

> I'm using the dh scheduler to balance across 3 machines - once the connections exceed twice the weight, it refuses new connections from IP addresses that aren't currently persistent.

Which kernel are you talking about? I'm afraid I've been a bit out of the loop development-wise, so bear with me. Both 2.4.x and 2.6.x contain similar (although not sync'd ... sigh) code regarding this feature:

ratz@webphish:/usr/src/linux-2.6.8-rc2> grep -r is_overloaded *
net/ipv4/ipvs/ip_vs_sh.c:static inline int is_overloaded(struct ip_vs_dest *dest)
net/ipv4/ipvs/ip_vs_sh.c:           || is_overloaded(dest)) {
net/ipv4/ipvs/ip_vs_lblcr.c:is_overloaded(struct ip_vs_dest *dest, struct ip_vs_service *svc)
net/ipv4/ipvs/ip_vs_lblcr.c:        if (!dest || is_overloaded(dest, svc)) {
net/ipv4/ipvs/ip_vs_dh.c:static inline int is_overloaded(struct ip_vs_dest *dest)
net/ipv4/ipvs/ip_vs_dh.c:           || is_overloaded(dest)) {
net/ipv4/ipvs/ip_vs_lblc.c:is_overloaded(struct ip_vs_dest *dest, struct ip_vs_service *svc)
net/ipv4/ipvs/ip_vs_lblc.c:                 || is_overloaded(dest, svc)) {
ratz@webphish:/usr/src/linux-2.6.8-rc2>
ratz@webphish:/usr/src/linux-2.4.27-rc4> grep -r is_overloaded *
net/ipv4/ipvs/ip_vs_sh.c:static inline int is_overloaded(struct ip_vs_dest *dest)
net/ipv4/ipvs/ip_vs_sh.c:           || is_overloaded(dest)) {
net/ipv4/ipvs/ip_vs_lblcr.c:is_overloaded(struct ip_vs_dest *dest, struct ip_vs_service *svc)
net/ipv4/ipvs/ip_vs_lblcr.c:        if (!dest || is_overloaded(dest, svc)) {
net/ipv4/ipvs/ip_vs_lblc.c:is_overloaded(struct ip_vs_dest *dest, struct ip_vs_service *svc)
net/ipv4/ipvs/ip_vs_lblc.c:                 || is_overloaded(dest, svc)) {
net/ipv4/ipvs/ip_vs_dh.c:static inline int is_overloaded(struct ip_vs_dest *dest)
net/ipv4/ipvs/ip_vs_dh.c:           || is_overloaded(dest)) {
ratz@webphish:/usr/src/linux-2.4.27-rc4>

Asymmetric coding :)

> This in itself isn't really a problem, but I can't find this behaviour actually documented anywhere - all the documentation refers to the weights as being "relative to the other hosts" which means there should be no difference between me setting the weights on all hosts to 5 or setting them all to 500.

This is correct. I'm a bit unsure as to what your exact problem is; a kernel version would already help, although I believe you're using a 2.4.x kernel. The is_overloaded() function was originally designed to be used by the threshold limitation feature only, which in 2.4.x is only present as a shaky backport from 2.6.x. I don't quite understand the is_overloaded() function in the ip_vs_dh scheduler; OTOH, I haven't really been using it so far.

Expect some followup on this ;)

Take care,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc