Greetings,
together with Novell the solution was finally found. I had the ldirectord
resource in the cluster as a standard (primitive) resource. It needs to be
configured as a clone resource instead, so that ldirectord runs on both nodes.
This solves number 1 of my problem.
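
A minimal sketch of what such a clone resource can look like in the crm
shell (the resource and clone names and the config file path here are only
examples, adjust them to your setup):

primitive p-ldirectord ocf:heartbeat:ldirectord \
    params configfile="/etc/ha.d/ldirectord.cf" \
    op monitor interval="20s" timeout="15s"
clone cln-ldirectord p-ldirectord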
It seems that there is no solution for number 2.
kr Patrik
----------------------------------------
> From: patrik.r@xxxxxxxxxxx
> To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
> Date: Wed, 19 Dec 2012 13:28:18 +0100
> Subject: [lvs-users] session table is ignored in active/passive cluster setup
> after failover?
>
> Greetings,
>
> I am using SLES 11 SP2 with ipvsadm-1.26-5.7.14 and ldirectord-3.9.2-0.25.5.
> My requirements for the load balancing are the following:
>
> * Clients should have a persistent connection to the same real server for
>   24 hours.
> * Scheduling should be done via round robin.
> * If the active node fails, the passive node should take over and keep the
>   persistence session records. When the failed node recovers, it should
>   get all the persistence session records back as well.
> * A dead server should be removed from the real server list and be
>   re-added after recovery.
>
> I have set up a test environment which looks like this:
>
> A high-availability two-node active/passive cluster.
> Both nodes start the ipvs sync daemons in the following way, where bond0 is
> a bonding interface that connects the two servers directly with each other:
>
> ipvsadm --start-daemon master --mcast-interface bond0
> ipvsadm --start-daemon backup --mcast-interface bond0
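>
> (To verify that both sync daemons are actually running, they can be
> listed with:)
>
> ipvsadm --list --daemon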
>
> For testing purposes I use HTTP, but in the future it will be a
> self-written service. The ldirectord.cf looks like this:
>
> #GLOBAL CONFIGURATION#
> autoreload = yes
> checkinterval = 30
> checktimeout = 5
> quiescent = no
> logfile = "/var/log/ldirectord.log"
>
> virtual = 172.20.150.34:80
>     protocol = tcp
>     real = 172.20.150.33:80 gate 1
>     real = 172.20.150.16:80 gate 1
>     real = 172.20.150.50:80 gate 1
>     scheduler = rr
>     persistent = 86400
>     checktype = connect
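>
> (Once ldirectord is running, the resulting virtual service and its real
> server table can be checked on the director with:)
>
> ipvsadm -L -n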
>
> The real servers are configured like this:
> ifconfig lo:0 172.20.150.34 netmask 255.255.255.255 broadcast 172.20.150.34
> route add -host 172.20.150.34 dev lo:0
> echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
> echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
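>
> (The kernel uses the maximum of the conf/all and conf/eth0 values for
> arp_ignore and arp_announce, so it may be safer to set them on "all" as
> well:)
>
> echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
> echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce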
>
> I could figure out the following things while testing:
>
> * At first glance, the synchronisation works fine; both nodes have the same
> session table. After I reboot the active node, the behaviour that is
> problematic for me starts:
>
> 1. The existing records in the session table for the 24h persistence are
> ignored, and clients are balanced via round robin again.
> This leads to strange records in the session table, where the real server
> destination port is 65535 instead of 80. See:
>
> TCP 1322:38 NONE 172.20.150.25:0 172.20.150.34:http   172.20.150.16:http
> TCP 1313:54 NONE 172.20.150.25:0 172.20.150.34:65535  172.20.150.60:65535
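>
> (For reference, this connection/session table can be listed on the
> director with:)
>
> ipvsadm -L -c -n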
>
> 2. Only new sessions are synchronized to the recovered server (which is now
> the passive server in the active/passive setup).
>
> Could this be a configuration problem? What can I do to solve it?
>
> Many thanks in advance for your help.
>
> kr Patrik
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users