Re: LVS can run on 2 server only?

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: LVS can run on 2 server only?
From: "Eric Chan" <eric.chan@xxxxxxxxxxxxxxxxx>
Date: Wed, 8 Dec 2004 15:33:15 +0800
Hi,

Thanks to Francois JEANMOUGIN for the setup link.

http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.localnode.html#two_box_lvs

I have followed the link, but I have not been able to get LVS running on only 2 servers.


My configuration is as follows:

####server1 IP (eprhut1)#######
192.168.0.29

####server 2 IP (eprhut2)#######
192.168.0.30

####ifcfg-lo:0 in /etc/sysconfig/network-scripts on server 1###### (Do I need to create this file on both servers?)
DEVICE=lo:0
IPADDR=192.168.0.31
NETMASK=255.255.255.255
NETWORK=192.168.0.0
BROADCAST=192.168.0.255
ONBOOT=YES
NAME=loopback
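One detail worth checking with a VIP on lo in an LVS-DR/localnode setup: the box that is not currently the director must not answer ARP requests for 192.168.0.31, or clients will reach the wrong machine and the VIP can appear dead. On 2.6 kernels this is commonly done with the arp_ignore/arp_announce sysctls (on 2.4 kernels the "hidden" patch serves the same purpose). A minimal sketch, assuming eth0 carries the real IP:

```shell
# Suppress ARP replies for addresses configured on lo (the VIP), so
# only the node currently holding the director role answers ARP for
# 192.168.0.31. Standard 2.6-kernel sysctls; on 2.4 kernels the
# "hidden" interface flag from the hidden patch is used instead.
sysctl -w net.ipv4.conf.all.arp_ignore=1     # reply only if target IP is on the receiving interface
sysctl -w net.ipv4.conf.eth0.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2   # use the best local source IP in ARP requests
sysctl -w net.ipv4.conf.eth0.arp_announce=2
```

To persist these across reboots, the same keys can go in /etc/sysctl.conf.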


####ldirectord.cf##############
#Global Directives
checktimeout=10
checkinterval=2
#fallback=127.0.0.1:80
autoreload=no
logfile="/var/log/ldirectord.log"
quiescent=yes

# Virtual Server for HTTP
virtual=192.168.0.31:9083
        #fallback=127.0.0.1:80
        real=192.168.0.29:9083 gate
        real=192.168.0.30:9083 gate
        service=http
        request="/mms/AliveServlet"
        receive="abc"
        scheduler=rr
        #persistent=600
        protocol=tcp
        checktype=negotiate
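Since checktype=negotiate, ldirectord only keeps a real server active if fetching the request URL returns the receive string; with quiescent=yes a failing server stays in the table at weight 0. It may be worth reproducing the check by hand from each box. A rough sketch of the equivalent test, assuming curl is installed:

```shell
# Manually reproduce ldirectord's negotiate check: fetch the request
# URL from each real server and look for the receive string "abc".
for rs in 192.168.0.29 192.168.0.30; do
    if curl -s "http://${rs}:9083/mms/AliveServlet" | grep -q "abc"; then
        echo "${rs}: check passed"
    else
        echo "${rs}: check FAILED (ldirectord would set weight 0)"
    fi
done
```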


####haresources file on both servers#####

eprhut1 IPaddr::192.168.0.31 ldirectord::ldirectord.cf


####ha.cf on both servers###################################
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility     local0
keepalive 2
deadtime 10
warntime 10
initdead 10
nice_failback on
mcast eth0 225.0.0.7 694 1 1
node    eprhut1
node    eprhut2
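With multicast heartbeats on eth0, you can confirm the two nodes actually see each other's packets before debugging anything higher up. A quick check, assuming tcpdump is available:

```shell
# Watch for heartbeat multicast traffic on the configured group/port.
# Packets should appear from both eprhut1 and eprhut2 roughly every
# keepalive interval (2 seconds in the ha.cf above).
tcpdump -i eth0 -n host 225.0.0.7 and udp port 694
```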


####/var/log/ha-log on server1 after heartbeat startup###############

heartbeat: 2004/11/10_14:57:42 info: Configuration validated. Starting heartbeat 1.0.4
heartbeat: 2004/11/10_14:57:42 info: nice_failback is in effect.
heartbeat: 2004/11/10_14:57:42 info: heartbeat: version 1.0.4
heartbeat: 2004/11/10_14:57:42 info: Heartbeat generation: 14
heartbeat: 2004/11/10_14:57:42 info: UDP multicast heartbeat started for group 225.0.0.7 port 694 interface eth0 (ttl=1 loop=1)
heartbeat: 2004/11/10_14:57:43 info: pid 2692 locked in memory.
heartbeat: 2004/11/10_14:57:43 info: pid 2693 locked in memory.
heartbeat: 2004/11/10_14:57:43 info: pid 2694 locked in memory.
heartbeat: 2004/11/10_14:57:43 info: Local status now set to: 'up'
heartbeat: 2004/11/10_14:57:43 ERROR: FIFO open failed.: Interrupted system call



Pinging 192.168.0.31 from my client gets no response.
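For reference, a few generic commands that help narrow down where a VIP stops answering; this is a rough checklist, not specific to this setup:

```shell
# 1. Is the VIP actually configured anywhere? heartbeat's IPaddr
#    resource should have added 192.168.0.31 on the active node.
ip addr show | grep 192.168.0.31

# 2. Did ldirectord populate the IPVS table? With quiescent=yes,
#    failed real servers appear with weight 0 rather than vanishing.
ipvsadm -L -n

# 3. From the client: does anyone answer ARP for the VIP?
arping -I eth0 192.168.0.31
```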

Any suggestions? Thanks a lot.

Best regards,
Eric


----- Original Message ----- 
From: "Francois JEANMOUGIN" <Francois.JEANMOUGIN@xxxxxxxxxxxxxxxxx>
To: "LinuxVirtualServer.org users mailing list."
<lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Sent: Tuesday, December 07, 2004 4:37 PM
Subject: RE: LVS can run on 2 server only?




> We have only 2 Linux servers; however, we want to use the LVS load
> balancing feature. Is it possible for one server to play both the roles
> of director and real server? (P.S. Each Linux server has 2 Ethernet
> ports.) If it is possible, how should we configure the servers?

http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.localnode.html#two_box_lvs

François.

Disclaimer

This message and any attachments (the "message") are confidential and
intended solely for the addressees. Any unauthorised use or dissemination is
prohibited. E-mails are susceptible to alteration. Therefore neither
123Multimédia nor any of its subsidiaries or affiliates shall be liable for
the message if altered, changed or falsified.



