Hello Francisco:
Squid is not supposed to work in transparent mode (i.e. each browser
should be configured with the VIP assigned to the proxy).
And yes: the port is not really important (we use 8080).
Just talking about "the ARP issue": do you use any ARP filter like
arptables (arptables_jf)?
Are there additional warnings that should be considered that (for any
reason) are not in the howtos?
I think that, perhaps, my problem has to do with this topic (ARP).
So I tried:
* net.ipv4.conf.(eth*).arp_ignore = 1
* net.ipv4.conf.(eth*).arp_announce = 2
(and then sysctl -p)
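For reference, the equivalent /etc/sysctl.conf fragment so the settings
survive a reboot (eth0 here is just an example interface name):

  # hide the VIP from ARP on the real servers (LVS-DR)
  net.ipv4.conf.all.arp_ignore = 1
  net.ipv4.conf.eth0.arp_ignore = 1
  net.ipv4.conf.all.arp_announce = 2
  net.ipv4.conf.eth0.arp_announce = 2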
I didn't use arptables (as mentioned on UltraMonkey's site), and then
configured /etc/ha.d/ha.cf and /etc/ha.d/haresources (and authkeys too).
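In case it helps, a rough sketch of what those two files look like in
this kind of setup (node names as in the output below; the VIP
192.168.0.100 and interface eth0 are just placeholders):

  # /etc/ha.d/ha.cf
  logfacility local0
  bcast eth0
  keepalive 2
  deadtime 10
  node prx01
  node prx02

  # /etc/ha.d/haresources (a single line)
  prx01 ldirectord::ldirectord.cf LVSSyncDaemonSwap::master \
        IPaddr2::192.168.0.100/24/eth0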
After starting heartbeat, you can see:
# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  prx:webcache dh persistent 300
  -> prx01:webcache               Local   100    0          0
  -> prx02:webcache               Route   100    0          0
#
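The matching /etc/ha.d/ldirectord.cf would look roughly like this (a
sketch with placeholder IPs; the scheduler and persistence match the
table above, and checktype=connect is an assumption, since Squid has no
obvious page to request):

  checktimeout=10
  checkinterval=2
  autoreload=yes
  # quiescent=yes sets a failed real server's weight to 0
  # instead of removing it from the table
  quiescent=yes
  virtual=192.168.0.100:8080
          real=192.168.0.10:8080 gate
          real=192.168.0.11:8080 gate
          scheduler=dh
          persistent=300
          protocol=tcp
          checktype=connect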
You can ping "prx's" IP address, and no MAC address entry is
displayed when issuing "arp -a" for host "prx", but:
- you can see prx01's MAC address issuing "arp -a" at prx02, and
- you can see prx02's MAC address issuing "arp -a" at prx01.
(!) I think that's fine (isn't it?).
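One related check: in LVS-DR the real servers usually carry the VIP on
the loopback interface, so they accept the forwarded packets without
answering ARP for the VIP (192.168.0.100 again as a placeholder):

  # on each real server
  ip addr show lo
  # besides 127.0.0.1, it should show something like:
  #   inet 192.168.0.100/32 scope global lo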
After that, I can see connections (active or inactive) only to one of
the nodes (mostly prx02), and when I make prx02 "fail", connections
are not established to prx01 (and this is my problem...)
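In case it's useful, this is an easy way to watch where connections
actually land while testing (standard ipvsadm options):

  # on the active director, while a client browses through the VIP
  watch -n1 'ipvsadm -L -n --stats'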
Thanks again
Regards
Ignacio
> -----Original Message-----
> From: lvs-users-bounces@xxxxxxxxxxxxxxxxxxxxxx
> [mailto:lvs-users-bounces@xxxxxxxxxxxxxxxxxxxxxx] On behalf
> of Francisco Gimeno
> Sent: Thursday, June 22, 2006 15:52
> To: LinuxVirtualServer.org users mailing list.
> Subject: Re: A running configuration for a Squid LVS
>
>
> Hello,
>
> are you capturing the traffic? ( i.e.: transparent mode ) or do
> you have to configure the IP in the browsers?
>
> For the first option, you have to use the fwmark thing, which I
> couldn't get to work to balance a service living on the same box.
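> ( roughly, the fwmark approach marks the traffic with iptables and
> then balances on the mark; an untested sketch, with 192.168.0.100
> as a placeholder VIP and 192.168.0.10 as a placeholder real server:
>
>   # mark web traffic destined to the VIP
>   iptables -t mangle -A PREROUTING -d 192.168.0.100 -p tcp \
>            --dport 80 -j MARK --set-mark 1
>   # create an LVS virtual service keyed on the firewall mark
>   ipvsadm -A -f 1 -s rr
>   ipvsadm -a -f 1 -r 192.168.0.10 -g
> )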
>
> For the second option, I have several working implementations
> ( maybe not for port 3128, but 80... really, no difference ),
> using ldirectord + heartbeat. It was quite easy. In the RedHat
> kernels, Fedora, or whatever, I did have some caveats with the
> ARP thing.
>
> BR,
> Francisco Gimeno
> -----------------
> > Good evening:
> > I decided to write to this list in the hope that somebody
> > could help me determine where my (of course several)
> > mistakes are.
> >
> > For a couple of weeks I've been trying to get a 2-node LVS-DR
> > "cluster" working under an HA scheme (i.e. an "active/active"
> > configuration). This cluster serves a Squid web cache "resource"
> > (which works fine on each node).
> >
> > I've used heartbeat + ldirectord software (2.0.5 versions) under
> > Fedora Core 4, didn't touch the provided kernel (it's as
> > installed), and besides that I tried to imitate the configuration
> > steps as stated in
> > http://www.ultramonkey.org/3/topologies/sl-ha-lb-eg.html
> >
> > What I can see is that failover messages are exchanged between the
> > 2 nodes (for example when I stop heartbeat on one node). After
> > that, with ipvsadm I can see that the node supposed to be down has
> > weight=0 (that's ok). But when I try to use Squid through the VIP,
> > I cannot access any site on the web (connections are dropped?).
> > And once I resume heartbeat on the "failed" node, everything works
> > fine again.
> >
> > Has anyone else configured an LVS for use with Squid? Can you
> > share it?
> >
> > Perhaps I'm not using the right software for this... can
> you suggest a
> > better software "combo"? (I mean other than ldirectord)
> >
> > TIA
> >
> > Best regards
> >
> > Ignacio