
Re: Backup and local nodes

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: Backup and local nodes
From: Sébastien Bonnet <Sebastien.Bonnet@xxxxxxxxxxx>
Date: Thu, 17 Jan 2002 10:10:37 +0100
> What is the kernel version?

[root@cluster root]# uname -r
2.4.7-10

> Show us the 'ipvsadm -Ln' output please.

[root@cluster root]# ipvsadm -Ln
IP Virtual Server version 0.8.1 (size=65536)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port             Forward Weight ActiveConn InActConn
TCP  172.22.48.212:80 lc
  -> 172.16.0.1:80                  Local   80     0          0
  -> 172.16.0.102:80                Masq    1      0          0
  -> 172.16.0.101:80                Masq    2      0          0

Again, VIP=172.22.48.212, NAT LAN=172.16.0.0/24, 172.16.0.1=internal
side of master director.
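
For reference, the table above could be reproduced by hand with
something like the following (only a sketch, since the real setup is
generated by piranha; the kernel marks 172.16.0.1 as Local on its own
because that address belongs to the director):

# virtual service on the VIP, least-connection scheduler
ipvsadm -A -t 172.22.48.212:80 -s lc
# local node (the director's own httpd) plus the two NATed realservers
ipvsadm -a -t 172.22.48.212:80 -r 172.16.0.1:80 -m -w 80
ipvsadm -a -t 172.22.48.212:80 -r 172.16.0.102:80 -m -w 1
ipvsadm -a -t 172.22.48.212:80 -r 172.16.0.101:80 -m -w 2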

> Can you provide me with an output of 'ipvsadm -Ln' after 100 http
> requests? So I can see whether you use persistence, which scheduler
> you use and whether there really is a bug in the LVS scheduler (which
> I doubt).

I posted an update to my initial message yesterday. The problem
actually wasn't in LVS/piranha, but in the Apache configuration on the
master director, which wasn't using keepalive HTTP connections,
whereas the two other realservers were.

Problem solved!
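
For anyone hitting the same thing: the relevant httpd.conf directives
were of this kind on the two realservers and effectively off on the
director (the values below are only illustrative, not our exact
configuration):

# allow persistent HTTP connections, as on the other realservers
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15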

>  > My other problem is still using the main director when the backup
>  > takes over. Supposing the backup redirects a connection to the
>  > director's web server (using the NATed network), how will this box
>  > be able to answer when it does not have the correct gateway (its
>  > default gateway points to the external network, not the NATed one)?
> 
> You use LVS_NAT for your RS and LVS_DR for your director maybe?

No, no: see the ipvsadm output above.
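
Just to restate the constraint: with LVS-NAT the replies must go back
through whichever director currently holds the VIP, so that it can
rewrite the source address. On the master's web server that would mean
something like the line below (172.16.0.254 is a made-up address
standing for the backup's interface on the NAT LAN), which of course
conflicts with the external default gateway the box needs as a
director:

# default route via the active (backup) director on the NAT LAN
route add default gw 172.16.0.254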

> The piranha mailing list is best suited for that.

The full story about that, with outputs and drawings, was posted there
yesterday.

-- 
Sébastien Bonnet
  Operations Engineer
  Contact Centre - Experian France

