We recently moved to a new data center and both of our load balancers
are now exhibiting some strange behavior. Each one starts out fine for
about 30 minutes, balancing connections across the real servers with
the wlc scheduler, and everything is good. But if I watch the Active
and Inactive connection counts, I can see both numbers slowly drop
until they reach zero. The director is still forwarding connections to
the web servers at that point, but it no longer seems to be tracking
them. And even though the load on our two web servers is roughly equal,
from then on all new connections go to just one of the servers, while
the other sits essentially idle. Any ideas about what is happening
would be greatly appreciated. I've included my lvs.cf below:
serial_no = 282
primary = 208.66.46.160
primary_private = 10.200.200.1
service = lvs
backup_active = 1
backup = 208.66.46.161
backup_private = 10.200.200.2
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
debug_level = NONE
rsh_command = ssh
monitor_links = 1
virtual www.example.com {
     active = 1
     address = 208.66.46.130 eth1:0
     vip_nmask = 255.255.255.128
     port = 80
     persistent = 72000
     expect = "Server is UP [ok]"
     use_regex = 1
     send_program = "/usr/local/bin/local_webcheck %h"
     load_monitor = uptime
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server web1 {
         address = 10.200.200.10
         active = 1
         weight = 5
     }
     server web2 {
         address = 10.200.200.11
         active = 1
         weight = 5
     }
}
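For what it's worth, here is how I've been watching the counters on the
director, in case anyone wants to suggest other things to look at. This
assumes ipvsadm is installed and run as root; the exact columns vary a
bit between versions.

```shell
# Virtual services with ActiveConn / InActConn per real server --
# these are the counters I see dropping to zero:
ipvsadm -L -n

# Individual connection entries, including the persistence templates
# (the virtual service above has persistent = 72000, i.e. 20 hours):
ipvsadm -L -c -n

# Packet/byte counters, to confirm traffic is still being forwarded
# even after ActiveConn/InActConn hit zero:
ipvsadm -L -n --stats
```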
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users