Ok, I'll give you what I have. I'm working on a script to do this all
automatically; it's not done, but it should be complete enough for you to
understand what I have going on. Here's the script I'm using to configure my
cluster; I run it on each unit. vars/cluster is the per-unit configuration
file for the cluster, and rc.cluster is the script that brings the cluster up.
I hope Outlook doesn't mung the script up too much.
The technique: use an fwmark instead of supplying an IP address/port pair to
direct packets into LVS. That way we can make the backups ignore incoming
connections until they are promoted to master.
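To make that concrete, here is roughly the manual equivalent of what the
script below ends up doing on the elected master (a sketch using Unit 1's
addresses, not lifted from the script; keepalived builds the equivalent LVS
table itself from the generated keepalived.conf):

# Tag traffic to the VIP with fwmark 1 (only the master carries this rule)
iptables -t mangle -I PREROUTING -i eth1 -p tcp -d 10.0.0.3 --dport 8080 \
  -j MARK --set-mark 1

# Balance on the mark rather than on an ip:port pair
ipvsadm -A -f 1 -s wlc -p 600
ipvsadm -a -f 1 -r 10.0.0.1 -g -w 1
ipvsadm -a -f 1 -r 10.1.2.99 -g -w 2

On a backup the mangle rule is absent, so packets forwarded to it by the
master never re-enter LVS and are simply answered by the local service on
port 8080.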
-- BEGIN -- vars/cluster on Unit 1
export CLUSTERING="true"
export CLUSTERVIP="10.0.0.3"
export CLUSTERDEV="eth1"
export CLUSTERMYIP="10.0.0.1"
# CLUSTERRS is the ip of the other box in the cluster
# I'll be loading it from a different config file
# eventually, with all the other members
export CLUSTERRS="10.1.2.99"
-- END --
-- BEGIN -- vars/cluster on Unit 2
export CLUSTERING="true"
export CLUSTERVIP="10.0.0.3"
export CLUSTERDEV="eth1"
export CLUSTERMYIP="10.1.2.99"
export CLUSTERRS="10.0.0.1"
-- END --
-- BEGIN -- rc.cluster
#!/bin/bash
. /hda1/etc/init.d/vars/cluster
if [ "$CLUSTERING" != "true" ]; then
exit
fi
# TODO: Automate this stuff:
SERVERS=2
#CLUSTERRS
iptables -P INPUT ACCEPT
# End of Todo
# Every unit has the VIP on lo, even the Director
ip addr add $CLUSTERVIP/32 dev lo
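# arp_ignore=1: only answer ARP requests for addresses configured on the
# interface the request arrived on, so nobody answers ARP for the VIP on lo
# arp_announce=2: always use the best local source address in ARP requests,
# so the VIP held on lo doesn't leak into outgoing ARP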
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
# Use the director as a realserver too, but give it a proportionally
# smaller share of the load as the number of cluster members grows
MASTERWEIGHT=1
let BACKUPWEIGHT=1*$SERVERS
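# e.g. with SERVERS=2 the director keeps weight 1 and the other box gets 2,
# so wlc steers roughly two thirds of new connections away from the director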
HOST=`hostname`
cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
test@xxxxxxxxxxxxxx
}
notification_email_from cluster@${HOST}
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state BACKUP
interface $CLUSTERDEV
virtual_router_id 51
priority 100
mcast_src_ip $CLUSTERMYIP
advert_int 1
nopreempt
notify_master "/sbin/iptables -t mangle -I PREROUTING -i $CLUSTERDEV -p tcp -d $CLUSTERVIP --dport 8080 -j MARK --set-mark 1"
notify_backup "/sbin/iptables -t mangle -D PREROUTING -i $CLUSTERDEV -p tcp -d $CLUSTERVIP --dport 8080 -j MARK --set-mark 1"
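# Only the elected master carries the MARK rule above, so only it feeds
# packets into LVS; on the backups the same packets are answered locally
# like on any ordinary realserver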
# notify_fault fires when the vrrp instance itself goes into FAULT state
# (e.g. the interface drops), not when a realserver drops out
#notify_fault
authentication {
auth_type PASS
auth_pass cipafilter
}
virtual_ipaddress {
$CLUSTERVIP
}
}
# I trigger on fwmark so that I can get the backups to ignore incoming connections
virtual_server fwmark 1 {
delay_loop 6
lb_algo wlc
lb_kind DR
persistence_timeout 600
protocol TCP
ha_suspend
real_server $CLUSTERMYIP 8080 {
weight $MASTERWEIGHT
HTTP_GET {
url {
path /contentfilter-test.html
digest bb66b49b5145af76d3d7b71d61370d54
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server $CLUSTERRS 8080 {
weight $BACKUPWEIGHT
HTTP_GET {
url {
path /contentfilter-test.html
digest bb66b49b5145af76d3d7b71d61370d54
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
EOF
keepalived -D
-- END --
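Once keepalived has elected a master you can sanity-check things from a shell
on each box with stock tools, something like:

# The VIP should be on lo everywhere, and on eth1 only on the current master
ip addr show

# The MARK rule should exist in the mangle table only on the master
iptables -t mangle -L PREROUTING -n

# The fwmark 1 service with both realservers and their weights
ipvsadm -L -n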
-----Original Message-----
From: Joseph Mack NA3T [mailto:jmack@xxxxxxxx]
Sent: Wed 10/24/2007 8:35 AM
To: David Hinkle
Cc: Graeme Fowler
Subject: off-list Re: [lvs-users] FW: Self Configuring High Availability LVS
with Keepalived
On Tue, 23 Oct 2007, David Hinkle wrote:
> Eventually I did get this working today.
Hi David,
I've been hoping that someone would do this. However
I don't understand what you've done from this posting. Can
you write up something so I can put your stuff in the HOWTO,
eg the commands you run on each machine and a bit of an
explanation of what you're doing. Just enough that I
understand it, so that I can write it up. It doesn't have to
be any longer than that.
>I keep the VIP on every lo in the cluster. Every machine
>in the cluster starts as a backup director. One of them
>will get elected as Master director by keepalived and all
>the rest will receive synchronization broadcasts from the
>master director.
OK
I don't know how you change the ipvsadm table to exclude the
new director and the new backup director from the list of
realservers.
> The trick was, I used netfilter Marks to determine what
> traffic to load balance,
can you give me the commands you run.
> and I configured keepalived to
> add the netfilter rules to iptables when it becomes a
> master and remove them when it's a backup.
I don't have iptables rules on my lvs's. Why do you need
them and how do you set them up?
> This way, LVS on the backups does not try to load balance
> incoming connections a second time, and they can just act
> as real servers.
presumably I'll understand this when I get more info.
> This gives me a cluster where all machines are peers.
> One of them just happens to be the director at any given
> time.
ditto
> I'm going to use this to provide 1 click clustering for
> our CIPAFilter products. They already have the ability to
> detect each other via broadcasts, so after a little more
> glue tomorrow, a customer with load problems will be able
> to slap a few more cipafilters into their rack, click one
> checkbox, and have a fully load balanced high availability
> cluster.
I found you with google.
> Great work on this stuff guys. Honestly, I had no idea it
> was so powerful or I would have implemented it years ago.
glad you like it
Joe
--
Joseph Mack NA3T EME(B,D), FM05lw North Carolina
jmack (at) wm7d (dot) net - azimuthal equidistant map
generator at http://www.wm7d.net/azproj.shtml
Homepage http://www.austintek.com/ It's GNU/Linux!