Hello,
On Thu, 16 Mar 2000, Joseph Mack wrote:
>
>
> It works (tested on VS-DR)!
Thanks for the results. We need to ask a net guru about
the consequences, but we can still test it for some period of time.
>
> I'm impressed you could write working code without being able to test it.
>
> setup
>
> ____________
> | |
> | client |
> |____________|
> |
> | 192.168.2.0/24
> _____|______
> | |
> | director | VS-DR director has 2 NICs
> |____________|
> | eth0 192.168.1.9
> | eth0:12 192.168.1.1
> |
> | 192.168.1.0/24
> ______|____________________
> |
> |
> _____|_______
> | |
> |realserver(s)| default gw=192.168.1.1
> |_____________|
>
> 192.168.1.1 is the normal router. For the test it was put on the
> director as an alias. The director has 2 NICs, with forwarding=on
> (client and realservers can ping each other).
>
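For anyone reproducing this, that part of the director setup is roughly
the following (a minimal sketch; the interface names and addresses are
taken from your diagram, the rest is standard 2.2 usage):

    # enable forwarding between the two NICs on the director
    echo 1 > /proc/sys/net/ipv4/ip_forward
    # add the normal router address as an alias on the inside NIC
    ifconfig eth0:12 192.168.1.1 netmask 255.255.255.0 up
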
> Director runs linux-0.9.8-2.2.15pre9 unpatched or with Julian's
> patch. Note: I didn't have Julian's e-mail with me so I
> didn't know which flags to set in /proc. They are in whatever
> state they are after the kernel compile.
For the test you don't need to touch anything. But we opened
a security hole, so everyone must be careful in production.
>
> LVS is setup using the configure script in the HOWTO,
> redirecting telnet, with rr scheduling to 3 realservers.
> The realservers were running 2.0.36 (1) or 2.2.14 (2). The arp problem
> was handled for the 2.2.14 realservers by permanently
> installing in the client's arp table, the MAC address
> of the NIC on the outside of the director, using
> the command `arp -f /etc/ethers`
>
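For anyone repeating the test without the configure script, the service
above would look roughly like this with ipvsadm; the VIP 192.168.2.110 and
the three RIPs are only placeholders, they are not given in your mail:

    # on the director: telnet on the VIP, round robin, direct routing
    ipvsadm -A -t 192.168.2.110:23 -s rr
    ipvsadm -a -t 192.168.2.110:23 -r 192.168.1.2 -g
    ipvsadm -a -t 192.168.2.110:23 -r 192.168.1.3 -g
    ipvsadm -a -t 192.168.2.110:23 -r 192.168.1.4 -g

and the client-side arp entry you describe would be something like this
(the MAC address is of course a placeholder for the director's outside NIC):

    # /etc/ethers on the client
    00:A0:CC:12:34:56   192.168.2.110

    # load it into the client's arp table
    arp -f /etc/ethers
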
> The director was booted 4 times, into unpatched, patched, unpatched and
> patched. After each reboot the lvs scripts were run on the director and
> the realservers, then the functioning of the LVS tested by telnet'ing
> multiple times from the client to the VIP.
>
> For the unpatched kernel, the client connection hung and inactive connections
> accumulated for each realserver. For the patched kernel, the client
> telnet'ed to the VIP connecting with each realserver in turn.
>
> Interestingly, the setup scripts had to be re-run on the 2.2.14 realservers
> (even though they had not been rebooted) after the director was rebooted
What do you set up on the real servers that is no longer running? And
how do the scripts on the real servers affect the Director?
> before these realservers could be accessed as part of the LVS. I expected
> that they would continue to function through a director reboot.
The LVS entries are lost, so all established connections
just freeze. But new connections should still go through.
> The 2.0.36 realserver continued to function as a realserver after the
> director was rebooted, without having to re-run any scripts. I haven't
> checked the cause of this problem.
Very interesting. But I don't know what you set up on
the real servers. You will have to check it again.
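For context, my guess is that the real server part of the configure output
does something like this on a 2.2.x box (the VIP is again a placeholder):

    # accept packets for the VIP locally; ARP for it is handled on the client
    ifconfig lo:0 192.168.2.110 netmask 255.255.255.255 up
    route add -host 192.168.2.110 dev lo:0
    # send replies back through the normal router / director alias
    route add default gw 192.168.1.1

All of that is local state on the real server, so it should survive a
director reboot, which is why the need to re-run the scripts is surprising.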
I'm still interested in whether, while the cluster is being
flooded (through the VIP), the client can also successfully flood the Director,
i.e. by flooding the DIP. This can be achieved by using the Director as
a real server too. I'm still not sure about all the consequences for the routing
cache in the Director. More clients would be better too.
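If your ipvsadm/kernel supports the local node feature, adding one of the
Director's own addresses as a real server should be enough for that test,
e.g. (VIP again a placeholder):

    # "local node": some connections are served by the director itself
    ipvsadm -a -t 192.168.2.110:23 -r 192.168.1.9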
Regards
--
Julian Anastasov <uli@xxxxxxxxxxxxxxxxxxxxxx>