Re: Persistence and source port of connections

To: " users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: Persistence and source port of connections
From: "Karl Kopper" <karl@xxxxxxxxxxxxxxx>
Date: Thu, 22 Jan 2004 16:59:19 -0800
> My problem is somewhat similar, but weird:
> I use ldirectord master/backup on the real server with wlc and direct
> routing:
> IP Virtual Server version 1.0.6 (size=4096)
> Prot LocalAddress:Port Scheduler Flags
>   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
> UDP wlc persistent 300
> TCP wlc persistent 300
>   ->             Local   1      0          0
>   ->             Route   1      0          0
> TCP wlc persistent 300
>   ->             Local   1      0          0
>   ->             Route   1      0          0
> My problem is that the application that listens on the talarian-tcp port
> (5101) also registers the destination socket. It always expects to
> talk to the same socket.
> What happens is that after the TCP connection times out, when the client
> accesses the cluster port, if the connection was on the backup
> ldirectord server (which is also a real server), then LVS will create a
> new connection, meaning a new socket (the port changes). That causes the
> application to hang, because it knows about the expired connection's
> port and not the new one.

Are you talking about the expired connection that was originally created on
the primary Director?

> This would have seemed like normal behaviour to me,
> except for the fact that on the master ldirectord node the behaviour is
> different: the connection stays alive although the TCP timeouts have passed
> and the cluster no longer sees the connection. When the client sends
> data over the 'dead' connection, it comes alive on LVS with the same
> port as before (after all, the server still keeps the connection and LVS
> is using that).
> My question is: is there a way to make the backup ldirectord/real server
> act the same as the master ldirectord/real server, so that the
> application will not hang due to socket changes after the timeout?
> Increasing the TCP timeout to a large value (i.e. ipvsadm --set 43200 0 0)
> could be a problematic solution, as I would need very long timeouts.

At failover time the open sockets on the backup Director may survive when
the backup Director acquires the VIP (of course, the localnode connections to
the primary Director are dropped anyway), but that's not going to happen
automatically at failback time. You may be able to rig something up with
ipvsadm using the --start-daemon master/backup options, but that is not
supported "out-of-the-box" with Heartbeat+ldirectord. (I think this might be
easier on the 2.6 kernel, by the way.) Perhaps what you want to achieve is
only possible with dedicated Directors not using LocalNode mode.
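For reference, the connection synchronization daemon mentioned above can be
started by hand roughly like this. This is only a sketch: the interface name
eth0 and the syncid value are assumptions, it needs root and kernel sync
support, and on older kernels --syncid may not be available.

```shell
# On the primary Director: multicast IPVS connection state to any backups.
# eth0 and the syncid value 0 are example assumptions.
ipvsadm --start-daemon master --mcast-interface eth0 --syncid 0

# On the backup Director: receive the connection state, so established
# connections are already in its table if it takes over the VIP.
ipvsadm --start-daemon backup --mcast-interface eth0 --syncid 0

# Check that the daemons are running:
ipvsadm --list --daemon
```

Note this only replicates the Director's connection table; it does not help
with the localnode sockets themselves, which is the harder part of the
problem described above.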
