Here is how I use bonding (a rough config sketch follows below):
$node = IPVS server
$real = real server
$fip = front-end IP
$fvip = front-end VIP
$bip = back-end IP
$bvip = back-end VIP
1. $node has eth1 and eth3 bonded together on $fip as bond0
2. $fvip sits on bond0:0 and accepts all incoming requests for load balancing
3. $node has eth0 and eth2 bonded together on $bip as bond1
4. $bvip sits on bond1:0 and talks to all real servers, where each $real has $bvip as its gateway
FYI: $node is also in failover cluster #1 and all real servers are in load-balanced cluster #2
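Roughly, the setup above translates into something like this (assuming RHEL-style ifcfg files; the bonding mode, miimon value and addresses are only example values, not a statement of exactly what I run):

# /etc/modprobe.conf -- two bonding devices (example options)
alias bond0 bonding
alias bond1 bonding
options bonding mode=1 miimon=100 max_bonds=2

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- carries $fip
DEVICE=bond0
IPADDR=10.0.1.10        # example $fip
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- eth3 is identical apart from DEVICE
DEVICE=eth1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

# bond1 ($bip) follows the same pattern with eth0/eth2; the cluster
# software then brings up the VIPs as aliases, e.g.
ifconfig bond0:0 10.0.1.100 netmask 255.255.255.0 up   # example $fvip
ifconfig bond1:0 10.0.2.100 netmask 255.255.255.0 up   # example $bvip ($real gateway)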
Brad Hudson
-----Original Message-----
From: lvs-users-bounces@xxxxxxxxxxxxxxxxxxxxxx
[mailto:lvs-users-bounces@xxxxxxxxxxxxxxxxxxxxxx] On Behalf Of Craig Ward
Sent: Saturday, November 05, 2005 5:21 AM
To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Bonded network cards and LVS
Hi,
Are there any known issues with bonding NICs and LVS?
I've got a setup with 4 boxes: all 4 are web servers and 2 of them are also directors.
The VIP is being brought up on bond0:0 fine, but I can't ping it
from any machine and it's not showing in the ARP table on my Windows
client. Strangely, if I manually bring up an IP on bond0:0 that is NOT
the VIP, I can ping it fine.
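One way I can think of to narrow it down is to watch ARP on the director while pinging the VIP from the client, roughly like this (interface name and addresses are placeholders for my setup):

# on the active director: are ARP requests for the VIP arriving, and are they answered?
tcpdump -n -i bond0 arp
# from another Linux box on the same segment: force an ARP request for the VIP
arping -I eth0 <VIP>
# and confirm the alias really is configured
ifconfig bond0:0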
I'm wondering if any of the noarp rules on each director (used for when
they are slave directors) are somehow "stuck", so it's not ARPing for
the VIP no matter which interface it's on?
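By "noarp rules" I mean whichever mechanism hides the VIP while a box is only acting as a backup/real server. Roughly what I'd check on each director is below (which of these applies depends on whether the setup uses the arp sysctls, arptables, or the hidden patch):

# sysctls that can stop the kernel replying to ARP for the VIP
sysctl net.ipv4.conf.all.arp_ignore
sysctl net.ipv4.conf.bond0.arp_ignore
# the "hidden" patch, if applied -- 1 means addresses on bond0 are hidden from ARP
cat /proc/sys/net/ipv4/conf/bond0/hidden
# any arptables rules dropping ARP for the VIP?
arptables -L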
Any ideas welcome.
Many Thanks,
Craig.