connecting through the lvs twice - failover fails

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: connecting through the lvs twice - failover fails
From: "Sebastian Vieira" <sebvieira@xxxxxxxxx>
Date: Thu, 27 Jul 2006 14:45:57 +0200
Hi,

I've set up LVS-NAT for telnet sessions and LVS-DR for RDP (MS Terminal
Server). Now I have these connections:

[client] -- [lvs] -- [rdp server] -- [lvs] -- [telnet server]

Note that both LVS configurations are on the same physical box, using
heartbeat+ldirectord.  This is my ldirectord.cf:

-- <ldirectord.cf> --
# Global
logfile="local0"
callback="/etc/ha.d/scpsync"
autoreload = yes

# telnet
virtual = 192.168.50.106:23
       protocol=tcp
       scheduler=wlc
       checkport=23
       checktype=connect
       real=192.168.14.20:23 masq 1
       real=192.168.14.21:23 masq 1
       real=192.168.14.22:23 masq 1
       real=192.168.14.23:23 masq 1
       real=192.168.14.24:23 masq 1
       real=192.168.14.25:23 masq 1
       real=192.168.14.26:23 masq 1
       real=192.168.14.27:23 masq 1
       real=192.168.14.13:23 masq 1
       real=192.168.14.29:23 masq 1
       real=192.168.14.31:23 masq 1
       real=192.168.14.33:23 masq 1
       real=192.168.14.35:23 masq 1

# RDP Terminal Servers
virtual = 192.168.50.104:3389
       protocol=tcp
       scheduler=wlc
       checkport=3389
       checktype=connect
       real=192.168.50.14:3389 gate 1
       real=192.168.50.18:3389 gate 1
       persistent=43200

# Citrix Terminal Servers
virtual = 192.168.50.105:1494
       protocol=tcp
       scheduler=wlc
       checkport=1494
       checktype=connect
       real=192.168.50.121:1494 gate 1
       real=192.168.50.122:1494 gate 1
       persistent=43200

# Local SSH
virtual = 192.168.50.103:22
       protocol = tcp
       scheduler = wlc
       real=127.0.0.1:22 gate 1
-- </ldirectord.cf> --
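
For reference, the two virtuals that matter here should end up as roughly the
following ipvsadm rules (a sketch only, trimmed to two real servers per service;
the addresses, weights and persistence value are the ones from the config above):

# telnet virtual (LVS-NAT): wlc scheduler, masqueraded real servers
ipvsadm -A -t 192.168.50.106:23 -s wlc
ipvsadm -a -t 192.168.50.106:23 -r 192.168.14.20:23 -m -w 1
ipvsadm -a -t 192.168.50.106:23 -r 192.168.14.21:23 -m -w 1

# RDP virtual (LVS-DR): wlc scheduler, 43200s persistence, direct routing
ipvsadm -A -t 192.168.50.104:3389 -s wlc -p 43200
ipvsadm -a -t 192.168.50.104:3389 -r 192.168.50.14:3389 -g -w 1
ipvsadm -a -t 192.168.50.104:3389 -r 192.168.50.18:3389 -g -w 1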

This is my ha.cf:

-- <ha.cf> --
udpport         696
logfacility     local0
keepalive       50ms
deadtime        4
warntime        1
initdead        120
bcast   eth2
auto_failback   off
node    rpzlvs03 rpzlvs04
-- </ha.cf> --
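
I haven't pasted haresources; for this setup it looks roughly like the following
(the interface and netmask are placeholders, the VIPs and node name are the ones
from the configs above):

-- <haresources (sketch)> --
rpzlvs03 \
        IPaddr2::192.168.50.103/24/eth0 \
        IPaddr2::192.168.50.104/24/eth0 \
        IPaddr2::192.168.50.105/24/eth0 \
        IPaddr2::192.168.50.106/24/eth0 \
        ldirectord::ldirectord.cf
-- </haresources> --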

Now I do a forced failover (stopping the heartbeat process) on the active node,
and then things get 'weird' (at least for me). Usually the telnet connection
stays up, but the RDP connection drops. The problem is that when a telnet
session has been started from within the RDP session, the user gets back into
the RDP session, but the stale telnet login has to be kicked off the telnet
server manually.
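
For what it's worth, the connection entries on the backup director can be inspected
with something like this, to check whether the RDP connection actually makes it
across before the failover:

ipvsadm -L -c -n               # full connection table on the backup
ipvsadm -L -c -n | grep 3389   # just the RDP entries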

My question: is this a workable setup, or is it doomed to fail?  We're on a
tight budget, so I can't really afford two more boxes for a separate
telnet LVS and RDP LVS, but if I have no other choice ....   What other
measures could I take to keep things running as smoothly as possible?

By the way, the sync daemons are running on both nodes, and both the master and
backup daemons get started: the master on node 1 gets syncid 50, the backup on
node 1 gets 51, and on the backup node it's the other way around. The sync
daemons broadcast over the same NIC that heartbeat sends its heartbeat signals on.
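
Roughly, that means the daemons are started along these lines (a sketch of the
equivalent ipvsadm invocations; eth2 is the heartbeat interface from ha.cf above):

# on node 1 (currently active): master with syncid 50, backup with syncid 51
ipvsadm --start-daemon master --mcast-interface eth2 --syncid 50
ipvsadm --start-daemon backup --mcast-interface eth2 --syncid 51
# on node 2 the syncids are swapped:
# ipvsadm --start-daemon master --mcast-interface eth2 --syncid 51
# ipvsadm --start-daemon backup --mcast-interface eth2 --syncid 50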

Thanks in advance,


Sebastian
