Was that your full tcpdump output?
Is your traffic coming in through eth2 on proxy01 and also going out through
eth2, or through a different interface?
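As far as I know, tcpdump on an alias like eth2:0 just captures on the
underlying eth2, so the alias name doesn't narrow anything down. To see both
directions for the VIP, try something like this on the parent interface
(address and port taken from your output):

    tcpdump -i eth2 -n -e host 10.0.200.60 and port 25

The -e flag prints the MAC addresses, which matters for DR debugging: the
director leaves the IP header alone and only rewrites the destination MAC,
so the SYN should arrive addressed to the proxy's 08:00:27:01:B4:79.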
On Mon, May 19, 2014 at 1:20 PM, Sal Munguia <lvs@xxxxxxxxxxxxx> wrote:
> Set up load balancers using Keepalived:
> load01 and load02
> ===========================================
> Load01 config:
> [root@vml-load01 ~]# ip addr show
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> inet6 ::1/128 scope host
> valid_lft forever preferred_lft forever
> 2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
> link/ether 08:00:27:6b:b8:c1 brd ff:ff:ff:ff:ff:ff
> inet 10.0.200.50/24 brd 10.0.200.255 scope global eth1
> inet 10.0.200.60/32 brd 10.0.200.60 scope global eth1:0
> inet6 fe80::a00:27ff:fe6b:b8c1/64 scope link
> valid_lft forever preferred_lft forever
>
> ===========================================
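That /32 on eth1:0 is the VRRP VIP keepalived adds. For reference, a minimal
vrrp_instance that would produce it looks like the below; the router id and
priority are placeholders, since your config isn't shown, but VI_1, eth1 and
the VIP all match the keepalived log further down:

    vrrp_instance VI_1 {
        state MASTER
        interface eth1
        virtual_router_id 51
        priority 100
        advert_int 1
        virtual_ipaddress {
            10.0.200.60/32 dev eth1
        }
    }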
>
> Manually forced the ipvsadm config:
> IP Virtual Server version 1.2.1 (size=4096)
> Prot LocalAddress:Port Scheduler Flags
> -> RemoteAddress:Port Forward Weight ActiveConn InActConn
> TCP 10.0.200.60:smtp wlc
> -> 10.0.200.52:smtp Route 1 0 0
> TCP 10.0.200.60:http rr persistent 50
> -> 10.0.200.52:http Route 1 0 0
>
> =========================================================
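For reference, the equivalent ipvsadm commands for that table would be
roughly the following; -g is the direct-routing forwarder shown as "Route"
in your listing:

    ipvsadm -A -t 10.0.200.60:25 -s wlc
    ipvsadm -a -t 10.0.200.60:25 -r 10.0.200.52:25 -g -w 1
    ipvsadm -A -t 10.0.200.60:80 -s rr -p 50
    ipvsadm -a -t 10.0.200.60:80 -r 10.0.200.52:80 -g -w 1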
>
> Traffic path:
>
> load01 -> proxy01 -> interface eth2:0
> =============================================================
> Network config on proxy01:
> eth2 Link encap:Ethernet HWaddr 08:00:27:01:B4:79
> inet addr:10.0.200.52 Bcast:10.0.200.255 Mask:255.255.255.0
> inet6 addr: fe80::a00:27ff:fe01:b479/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:221795 errors:0 dropped:0 overruns:0 frame:0
> TX packets:213292 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:19749301 (18.8 MiB) TX bytes:20172223 (19.2 MiB)
>
> eth2:0 Link encap:Ethernet HWaddr 08:00:27:01:B4:79
> inet addr:10.0.200.60 Bcast:10.0.200.60 Mask:255.255.255.255
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>
>
> ===============================================================
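With the VIP configured as a regular alias on eth2:0, the proxy will answer
ARP for 10.0.200.60 unless something suppresses it. From another box on the
10.0.200.0/24 segment (adjust -I to that box's interface) you can check who
actually replies:

    arping -c 3 -I eth1 10.0.200.60

Only the director's MAC (08:00:27:6b:b8:c1 on load01) should come back; if
the proxy's 08:00:27:01:B4:79 shows up, the real server is still answering
ARP for the VIP.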
> Dump while trying to connect to port 25:
>
> [root@vml-proxy-1 ~]# tcpdump -i eth2:0 port 25
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on eth2:0, link-type EN10MB (Ethernet), capture size 65535 bytes
>
>
> 15:52:26.179428 IP 10.0.200.53.41345 > 10.0.200.60.smtp: Flags [S], seq
> 868460265, win 14600, options [mss 1460,sackOK,TS val 270492287 ecr
> 0,nop,wscale 5], length 0
>
> ===============================================================
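So the SYN reaches the proxy for the VIP, but no SYN-ACK ever goes back.
That usually means no listener matched the VIP, or the reverse-path filter
discarded the packet. On the proxy I would confirm the MTA is bound to
0.0.0.0 (or to the VIP itself) and check rp_filter:

    netstat -ltn | grep ':25'
    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth2.rp_filter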
> Set up arptables to prevent ARP replies for the VIP from going out:
> # Generated by arptables-save v0.0.8 on Mon May 19 15:54:49 2014
> *filter
> :IN ACCEPT [29741:832748]
> :OUT ACCEPT [27508:770224]
> :FORWARD ACCEPT [0:0]
> -A IN -d 10.0.200.60 -j DROP
> -A OUT -s 10.0.200.60 -j mangle --mangle-ip-s 10.0.200.52
> COMMIT
> # Completed on Mon May 19 15:54:49 2014
>
> ===============================================================
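That mangle rule rewrites the ARP source address, which can work, but the
more common fix for the DR ARP problem is to drop the eth2:0 alias, put the
VIP on loopback, and let the kernel sysctls keep it quiet, roughly:

    ip addr add 10.0.200.60/32 dev lo
    sysctl -w net.ipv4.conf.all.arp_ignore=1
    sysctl -w net.ipv4.conf.all.arp_announce=2

arp_ignore=1 stops replies for addresses not configured on the receiving
interface, and arp_announce=2 keeps the VIP out of ARP source fields.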
>
>
> Logs for keepalived:
> May 19 16:14:23 vml-load01 Keepalived_healthcheckers[8254]: Opening file
> '/etc/keepalived/keepalived.conf'.
> May 19 16:14:23 vml-load01 Keepalived_healthcheckers[8254]: Configuration
> is using : 16262 Bytes
> May 19 16:14:23 vml-load01 Keepalived_vrrp[8256]: Using LinkWatch kernel
> netlink reflector...
> May 19 16:14:23 vml-load01 Keepalived_vrrp[8256]: VRRP sockpool:
> [ifindex(2), proto(112), fd(10,11)]
> May 19 16:14:23 vml-load01 Keepalived_healthcheckers[8254]: Using
> LinkWatch kernel netlink reflector...
> May 19 16:14:23 vml-load01 Keepalived_healthcheckers[8254]: Activating
> healthchecker for service [10.0.200.52]:80
> May 19 16:14:23 vml-load01 Keepalived_healthcheckers[8254]: Activating
> healthchecker for service [10.0.200.53]:80
> May 19 16:14:24 vml-load01 Keepalived_vrrp[8256]: VRRP_Instance(VI_1)
> Transition to MASTER STATE
> May 19 16:14:24 vml-load01 Keepalived_healthcheckers[8254]: Error
> connecting server [10.0.200.52]:80.
> May 19 16:14:24 vml-load01 Keepalived_healthcheckers[8254]: Removing
> service [10.0.200.52]:80 from VS [10.0.200.60]:80
> May 19 16:14:24 vml-load01 Keepalived_healthcheckers[8254]: Remote SMTP
> server [0.0.0.0]:25 connected.
> May 19 16:14:24 vml-load01 Keepalived_healthcheckers[8254]: Error
> connecting server [10.0.200.53]:80.
> May 19 16:14:24 vml-load01 Keepalived_healthcheckers[8254]: Removing
> service [10.0.200.53]:80 from VS [10.0.200.60]:80
> May 19 16:14:24 vml-load01 Keepalived_healthcheckers[8254]: Lost quorum
> 1-0=1 > 0 for VS [10.0.200.60]:80
> May 19 16:14:24 vml-load01 Keepalived_healthcheckers[8254]: Remote SMTP
> server [0.0.0.0]:25 connected.
> May 19 16:14:25 vml-load01 Keepalived_healthcheckers[8254]: Error
> processing HELO cmd on SMTP server [0.0.0.0]:25. SMTP status code = 501
> May 19 16:14:25 vml-load01 Keepalived_healthcheckers[8254]: Can not read
> data from remote SMTP server [0.0.0.0]:25.
> May 19 16:14:25 vml-load01 Keepalived_healthcheckers[8254]: Error
> processing HELO cmd on SMTP server [0.0.0.0]:25. SMTP status code = 501
> May 19 16:14:25 vml-load01 Keepalived_healthcheckers[8254]: Can not read
> data from remote SMTP server [0.0.0.0]:25.
> May 19 16:14:25 vml-load01 Keepalived_vrrp[8256]: VRRP_Instance(VI_1)
> Entering MASTER STATE
> May 19 16:14:25 vml-load01 Keepalived_vrrp[8256]: VRRP_Instance(VI_1)
> setting protocol VIPs.
> May 19 16:14:25 vml-load01 Keepalived_healthcheckers[8254]: Netlink
> reflector reports IP 10.0.200.60 added
> May 19 16:14:25 vml-load01 Keepalived_vrrp[8256]: VRRP_Instance(VI_1)
> Sending gratuitous ARPs on eth1 for 10.0.200.60
> May 19 16:14:30 vml-load01 Keepalived_vrrp[8256]: VRRP_Instance(VI_1)
> Sending gratuitous ARPs on eth1 for 10.0.200.60
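One thing those logs do show: the SMTP healthcheck is connecting to
[0.0.0.0]:25, which suggests SMTP_CHECK has no explicit host configured, and
the HELO is being rejected with a 501. A virtual_server block consistent
with those checks, assuming keepalived 1.2.x syntax (the path, timeouts and
helo name below are guesses on my part), would look something like:

    virtual_server 10.0.200.60 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        persistence_timeout 50
        protocol TCP

        real_server 10.0.200.52 80 {
            weight 1
            HTTP_GET {
                url {
                    path /
                    status_code 200
                }
                connect_timeout 3
            }
        }
        # real_server 10.0.200.53 80 { ... } analogous
    }

    virtual_server 10.0.200.60 25 {
        lb_algo wlc
        lb_kind DR
        protocol TCP

        real_server 10.0.200.52 25 {
            weight 1
            SMTP_CHECK {
                connect_timeout 10
                helo_name lvs-check.example.com
                host {
                    connect_ip 10.0.200.52
                    connect_port 25
                }
            }
        }
    }

With connect_ip set, the check should stop targeting [0.0.0.0]:25, and a
syntactically valid helo_name may clear the 501.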
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users