Finally, I got HP/UX-11.00 working with LVS-DR!!!

To: "lvs-users@xxxxxxxxxxxxxxxxxxxxxx" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Finally, I got HP/UX-11.00 working with LVS-DR!!!
Cc: Joseph Mack <mack@xxxxxxxxxxx>, Wensong Zhang <wensong@xxxxxxxxxxxx>, Michael Sparks <michael.sparks@xxxxxxxxx>
From: Ratz <ratz@xxxxxx>
Date: Wed, 03 Nov 1999 22:22:22 +0100

Hi guys,

I apologize to all for not having written back earlier, but I caught a
terrible flu which kept me in bed. But I'm back, and with some news.

First of all: I managed to load balance HP/UX 11.00 with LVS-DR (0.9.2).
The solution is not to try configuring a loopback alias but to define
an alias on a NIC. So, let's say you have a NIC named lan0; set up:

ifconfig lan0:1 VIP up -arp      ## just leave default netmask

This is different from Solaris, Linux and FreeBSD, because on HP/UX you
can set the arp flag for each virtual device separately, so logical and
virtual interfaces don't interfere. I'm not sure exactly why the
solution with the virtual loopback didn't work, because I didn't have
tcpdump (only nettl :(( ), but as far as I could see, a connection was
established (SYN-ACK), but as soon as the HP machine had to deliver
data, it hung (SYN storm). I also noticed that the masquerading timeout
for persistence was 15 minutes (normally 6 minutes), and that there were
several entries with a huge timeout (some years). I didn't have the time
to make additional tests, but we decided to replace the Alteon load
balancer switch with the LVS-DR; as I promised, I won't give up.
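
For context, the director side of such an LVS-DR service is typically
set up along these lines (just a sketch, not part of the HP/UX fix; VIP,
RIP and port 80 are placeholders, and the option letters may differ
slightly between ipvsadm versions):

    ipvsadm -A -t VIP:80 -s wlc        # add the virtual service on the VIP
    ipvsadm -a -t VIP:80 -r RIP -g     # add the real server, direct routing (-g)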

About the ICMP problem (concerns Joseph):
1. With the new setup I don't get any icmp_redirects anymore, and sadly
I cannot reproduce the problem either. I think it was due to a
configuration error: first, the http-proxy on the firewall wasn't
transparent, so all clients came from one IP; second, we did some
redirection on our firewall, so the whole environment was not really
clean.

2. Up to now I cannot give you an explanation of the cause.

3. The only cure I know is to disable icmp_redirect in
/proc/sys/net/ipv4/conf/ with the good old: echo 0 >
/proc/sys/net/ipv4/conf/default/send_redirects. Perhaps you have to
disable all icmp_redirects in /proc/sys/net/ipv4/conf/ (see the loop
sketched after point 4). It depends, but AFAIK disabling
../conf/default/send_redirects doesn't change the behaviour of the
NIC-specific icmp_redirects in ../conf/ethX/. They are ON by default
after a system boot.

4. The problem occurred with LVS-DR 0.9.0 on a 2.2.x kernel.
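
To cover the per-interface entries as well, something along these lines
should do it (untested one-liner, assuming the usual 2.2.x layout of
/proc/sys/net/ipv4/conf/<interface>/send_redirects):

    for f in /proc/sys/net/ipv4/conf/*/send_redirects; do
        echo 0 > $f                # disable ICMP redirects everywhere
    done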

Could you please check point 6.2 of your LVS-HOWTO? It looks to me as
if there are some typos in it...

Something about FreeBSD (concerning Michael):
I don't know exactly what your problem is, but the following setup
worked perfectly for me:

ifconfig lo0 alias VIP netmask 0xffffffff up -arp

Beware: this also puts the primary (logical) loopback into noarp mode!
But it worked for me with LVS-DR v0.9.2 and FreeBSD 3.2 and 3.3 (I made
a mistake earlier; I didn't actually test FreeBSD 3.1).
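
If you want to check or undo the noarp state, the standard ifconfig
invocations should do (just a hint, not part of my setup; VIP is a
placeholder):

    ifconfig lo0                   # the flags line should now show NOARP
    ifconfig lo0 inet VIP -alias   # remove the alias again if needed
    ifconfig lo0 arp               # re-enable ARP on lo0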

Now I would like to make a proposal (mostly concerning Wensong).
At work I wrote a script which does the whole setup from one config file
and then performs health checks (ping, html so far). It automatically
takes out a server which is down, but ... If you use persistent bindings
you have this static timeout value of 360 seconds, which can be changed
at setup time but logically has no effect on a masq entry that is
already in use. For normal operation 360 seconds is great, but if a
server goes down, I suddenly want to zero that timer. Perhaps I've
overlooked some new functionality, but I would like to have a command to
flush the masq timer entries (ipchains -L -M -vnx) of a dead server.
Like ipvsadm -F IP! I can't do it with ipchains -M -S # # #! I need
some advice.
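
To illustrate what I mean at that point of the script (only a rough
sketch with placeholder addresses, not the actual script):

    VIP=10.0.0.1; RIP=192.168.1.2; PORT=80

    if ! ping -c 1 $RIP >/dev/null 2>&1; then
        # real server is dead: take it out of the virtual service
        ipvsadm -d -t $VIP:$PORT -r $RIP
        # ...but the persistent masq entries keep ticking down their
        # 360 second timeout, and that is what I'd like to flush here
    fi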

So, I hope I didn't forget to mention anything.

Yours sincerely


Roberto Nibali (ratz)

----------------------------------------------------------------------
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
To unsubscribe, e-mail: lvs-users-unsubscribe@xxxxxxxxxxxxxxxxxxxxxx
For additional commands, e-mail: lvs-users-help@xxxxxxxxxxxxxxxxxxxxxx
