
Re: [lvs-users] Glusterfs native nfs port for LVS

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] Glusterfs native nfs port for LVS
From: Filipe Cifali <cifali.filipe@xxxxxxxxx>
Date: Sat, 8 Mar 2014 14:21:05 -0300
I don't see a reason to put GlusterFS behind LVS, since the client already
load-balances across the servers.

How did you mount GlusterFS on the client side?
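For illustration, a native-client mount looks roughly like the following; the
host names, volume name, and mount point are placeholders, and the exact
backup-volfile option varies by GlusterFS version:

  mount -t glusterfs -o backup-volfile-servers=gluster2:gluster3 \
      gluster1:/gv0 /mnt/gluster

With a mount like this the client fetches the volume layout itself and talks
to all the bricks directly, which is why the extra LVS layer may not be needed.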


On Sat, Mar 8, 2014 at 10:36 AM, David Coulson <david@xxxxxxxxxxxxxxxx> wrote:

>
> On 3/8/14, 8:31 AM, yang feng电话 wrote:
> > Tomasz,
> > Thanks for your help.
> >
> > I have three gluster nodes, and it seems rpc.statd uses a random port....
> > What I want is to let LVS load-balance among those three nodes.
> So why not make it simple: ignore specific tcp/udp ports and forward
> any connection to your VIP to the gluster servers on the backend.
> Add persistence so that connections to different ports from the same
> client IP end up on the same gluster server.
>
> With ipvsadm you can use a firewall mark (fwmark) to manage the rules,
> then use iptables to set the mark in the mangle table. Not sure how your
> systems are configured since you referenced an LVS configuration file,
> but it is doable with the base tools.
>
>
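A minimal sketch of the fwmark approach David describes, with placeholder
addresses (VIP 192.0.2.100, gluster nodes 192.0.2.11-13) and an arbitrary
300-second persistence timeout:

  # mark everything addressed to the VIP in the mangle table
  iptables -t mangle -A PREROUTING -d 192.0.2.100 -j MARK --set-mark 1

  # one fwmark-based virtual service, persistent per client IP,
  # forwarding to the three gluster nodes via direct routing
  ipvsadm -A -f 1 -s rr -p 300
  ipvsadm -a -f 1 -r 192.0.2.11 -g
  ipvsadm -a -f 1 -r 192.0.2.12 -g
  ipvsadm -a -f 1 -r 192.0.2.13 -g

Adjust the forwarding method (-g or -m) and the persistence timeout to suit
your setup.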



-- 
[ ]'s

Filipe Cifali Stangler
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/

LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users