> How LVS participates in this ping? This looks like a network
> problem.
>
You were quite correct in this.
It seems the problem is in the network init scripts. What APPEARS to be
happening is that, on reboot, not enough time passes between when eth0's
config and the default route are brought up and when the additional
10.10.10.0 route is configured. I had the same problem with apache for a bit
until we added a 'sleep 3;' to the restart section of the
/etc/rc.d/init.d/httpd script, and I'm thinking the same is needed here. If I
do NOT specify the extra route for the 10.10.10.0 network in the init scripts
and instead place the full 'route' configuration statement in
/etc/rc.d/rc.local, it works perfectly every time. (This takes place on
the router.)
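For reference, the workaround line in /etc/rc.d/rc.local looks roughly like
this; the exact device (or gateway) for the 10.10.10.0 route depends on how
the router's second interface is set up, so take it as a sketch rather than
something to copy verbatim:

    # /etc/rc.d/rc.local runs after the init scripts, so eth0 and the
    # default route are already up before this route gets added.
    # eth1 is assumed here to be the interface facing the 10.10.10.0 net.
    /sbin/route add -net 10.10.10.0 netmask 255.255.255.0 dev eth1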
I _do_, however, have a question regarding routing from the web servers that
may or may not affect the packets managed under the auspices of the lvs
daemon.
OK, right now the web servers have to send and receive everything through the
backend VIP on the front end node(s). To lessen the amount of traffic passing
through the 192.168.1.254 backend VIP I would like to place route entries in
the webfarm members' routing tables. In other words, I want to send all the
10.10.10.0 traffic that the webservers generate while talking to the backend
request processors (on the 10.10.10.0 network) through the 192.168.1.250
interface on the Linux router box.
Normally this would be as simple as adding a 'route add -net 10.10.10.0
netmask 255.255.255.0 gw 192.168.1.250' command to the web servers' startup
scripts.
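On each web farm member that would amount to something along these lines
(interface details are whatever the web servers actually use, so again just
a sketch):

    # On each web server: reach the 10.10.10.0/24 backend network via the
    # router's 192.168.1.250 address instead of the 192.168.1.254 backend VIP.
    /sbin/route add -net 10.10.10.0 netmask 255.255.255.0 gw 192.168.1.250

The same route can be backed out with 'route del' if it turns out to cause
trouble.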
I have not added this command yet, as these are production servers that I
cannot have down for even a momentary goof, in case my belief that this
should NOT affect things turns out to be wrong.
Will adding the extra routing statement to the web farm members cause
routing issues with the LVS?
If so, what issues, and is there a way, convoluted or not, to still allow this?
Basically what's happening is that everything needed to generate the
information returned to the requestor coming in on the front-end VIP is
generated on the 10.10.10.0 network, fed to the webfarm member requesting
the info and then repackaged for delivery to the outside requestor. Right
now all this traffic passes through the 192.168.1.254 backend VIP. We don't
want that.
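To picture it, the path today versus the one I'm after is roughly:

    today:    web server -> 192.168.1.254 (backend VIP) -> 10.10.10.0 processors
    desired:  web server -> 192.168.1.250 (router)      -> 10.10.10.0 processors

with replies coming back over the same path.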
Any help would be appreciated.
---
DAVID D.W. DOWNEY, SENIOR LINUX SYSTEMS ADMINISTRATOR
QIXO, Inc.               Home Page = http://QIXO.com
2192 Fortune Drive       Company Email = ddowney@xxxxxxxx
San Jose, CA 95131       Company Phone = +1 (408) 514-6441
USA                      Company FAX = +1 (408) 516-9090
---