Hi Stephane,
When you start one LVS keepalived process, you have one server sending
VRRP advertisements to the multicast group 224.0.0.18.
Those packets must be received by the second LVS server (regardless of
whether keepalived is running there), since they are multicast packets.
If connectivity is not working (a switch filtering multicast traffic? some
other filtering device between the two servers?), keepalived will not work.
That has nothing to do with keepalived, but with filtering that happens
after the packet has been sent.
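If a switch or some other device is filtering multicast, it helps to know
exactly what frames to look for on the wire. VRRP advertisements go to
224.0.0.18 (IP protocol 112), and an IPv4 multicast address maps to a fixed
Ethernet destination MAC per RFC 1112. A quick sketch of that mapping (the
helper function is mine, not part of keepalived):

```python
def multicast_mac(ip: str) -> str:
    """Map an IPv4 multicast address to its Ethernet MAC (RFC 1112):
    the low 23 bits of the IP go into the 01:00:5e OUI prefix."""
    o = [int(x) for x in ip.split(".")]
    return "01:00:5e:%02x:%02x:%02x" % (o[1] & 0x7F, o[2], o[3])

# VRRP advertisements are addressed to 224.0.0.18:
print(multicast_mac("224.0.0.18"))  # 01:00:5e:00:00:12
```

If tcpdump on the second server never shows frames to that destination MAC,
the drop is happening somewhere between the two hosts.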
About the ROUTER_ID:
The man page of keepalived.conf says (lines omitted):

    Global definitions
        global_defs                  # Block id
        {
            router_id my_hostname    # string identifying the machine,
                                     # (doesn't have to be hostname).
        }
I would say that router_id must be unique if it's used to identify
something. It won't hurt you to make it unique.
It's unique in my production environment.
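For example, giving each node its own router_id would look like this
(using the node hostnames from your ping output):

```
# keepalived.conf on the first node
global_defs {
    router_id lvs1
}

# keepalived.conf on the second node
global_defs {
    router_id lvs2
}
```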
With kind regards,
Tom van Leeuwen
On 07/08/2011 10:58 AM, Stephane RIOS wrote:
> Hi Tom
>
> Thank you for your reply !
>
> On 08/07/11 at 07:14, Tom van Leeuwen wrote:
>> Hi Stephane,
>>
>> Probably stupid, but: have you also checked that VRRP packets are
>> received correctly?
> Yep. Packets are sent but not received, this is the point.
>> If you start 1 keepalived, does the other see the vrrp? Also: have you
>> specified different (global def section) router_id's?
> No. Should I do this ?
>> Also, can you verify that you can ping LVS node 2 from LVS node 1 and
>> vice versa. Also verify that it really goes in/out of interface eth0.
> Everything is fine:
>
> lvs1:~# ping -I eth0 31.222.176.201
> PING 31.222.176.201 (31.222.176.201) from 31.222.176.203 eth0: 56(84)
> bytes of data.
> 64 bytes from 31.222.176.201: icmp_seq=1 ttl=64 time=0.264 ms
> 64 bytes from 31.222.176.201: icmp_seq=2 ttl=64 time=0.261 ms
>
> and
>
> lvs2:~# ping -I eth0 31.222.176.203
> PING 31.222.176.203 (31.222.176.203) from 31.222.176.201 eth0: 56(84)
> bytes of data.
> 64 bytes from 31.222.176.203: icmp_seq=1 ttl=64 time=19.1 ms
> 64 bytes from 31.222.176.203: icmp_seq=2 ttl=64 time=0.324 ms
>
> Oh, and I forgot to give an extract of my /etc/sysctl.conf:
>
> # Allow binding to VIP
> net.ipv4.ip_nonlocal_bind = 1
>
>
> I'm waiting for a Rackspace support answer about the subnets. I asked
> them to change my VIP to a 31.222.176.xxx one. I'll give you some news
> as soon as I get their answer.
>> With kind regards,
>> Tom van Leeuwen
>>
>> On 07/08/2011 02:22 AM, Stephane RIOS wrote:
>>> Hi all
>>>
>>> This is slightly off-topic because it concerns keepalived specifically
>>> rather than LVS, but I have read a lot of posts on this list about
>>> keepalived, so ...
>>> I've made a very simple setup of 2 LVS nodes (Debian 5.0) on Rackspace,
>>> and this setup does not work.
>>> Here's the MASTER keepalived conf :
>>>
>>> global_defs {
>>>     router_id my_router
>>> }
>>>
>>> vrrp_instance app_master {
>>>     state MASTER
>>>     interface eth0
>>>     virtual_router_id 36
>>>     priority 150
>>>     advert_int 1
>>>     nopreempt
>>>     garp_master_delay 5
>>>     virtual_ipaddress {
>>>         31.222.179.6
>>>     }
>>> }
>>>
>>> The BACKUP conf is almost the same, except for the "state" and
>>> "priority" directives of course.
>>>
>>> Simple, isn't it?
>>>
>>> The real IPs of the 2 nodes are on the 31.222.176.xxx subnet and the
>>> virtual IP that Rackspace gave me is 31.222.179.6.
>>> It seems that everything is working fine except that VRRP packets are
>>> sent but not received by either node. So immediately after the keepalived
>>> daemon starts, the BACKUP node enters the MASTER state and I have two
>>> nodes in MASTER state (huh!).
>>>
>>> What I've done to check my config:
>>>
>>> - tcpdump -qn net 224.0.0.0/8 : check that the VRRP packets are sent
>>> (OK)
>>> - ip addr list : check whether the two nodes got the virtual IP (OK,
>>> but I don't want this!)
>>> - netstat -g : check that the vrrp.mcast.net (224.0.0.18) multicast
>>> group is registered on the interface (OK)
>>> - iptables -F : flush all rules that could possibly block the packets
>>>
>>> I tried to replicate this config with 2 VMs on my own machine, and
>>> everything worked fine. But there I had assigned the real IPs and the
>>> virtual IP in the same subnet.
>>> I also tried adding a /22 to the virtual_ipaddress entry in
>>> keepalived.conf, but it didn't solve my problem.
>>>
>>> So my question is: does keepalived need the virtual IP and the real
>>> IPs to be in the same subnet in order to work?
>>> Do I need to add a routing directive somewhere?
>>>
>>> Thank you for your help
>>>
>> _______________________________________________
>> Please read the documentation before posting - it's available at:
>> http://www.linuxvirtualserver.org/
>>
>> LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
>> Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
>> or go to http://lists.graemef.net/mailman/listinfo/lvs-users