Re: [lvs-users] One second connection delay in masquerading mode

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] One second connection delay in masquerading mode
From: Sergey Urbanovich <surbanovich@xxxxxxxxxxxxx>
Date: Wed, 24 Jan 2018 14:53:35 -0800
Hi Andrew,

Thank you for your response.

I initially found this issue on another, more complex network configuration, which
showed the same one-second delay. I hope it is the same issue as with the 127.0.0.1
network from the initial post.

* The HTTP server ran on an overlay network (9.0.3.130:80) on host 10.10.0.21.
* HTTP clients ran on host 10.10.0.49 against the virtual service 11.171.172.80:80.
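
The service was set up along these lines (reconstructed from the log, not the exact
commands used: fwd:M means masquerading, and ip_vs_wlc_schedule means the wlc
scheduler):

    # add the virtual service with the wlc scheduler
    ipvsadm -A -t 11.171.172.80:80 -s wlc
    # add the real server in masquerading (NAT) mode
    ipvsadm -a -t 11.171.172.80:80 -r 9.0.3.130:80 -m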

Please see the log excerpt below and the attached dmesg log.

[ 1091.367563] IPVS: Enter: ip_vs_out, net/netfilter/ipvs/ip_vs_core.c line 1132
[ 1091.367571] IPVS: lookup/out TCP 11.171.172.80:56750->11.171.172.80:80 not hit
[ 1091.367578] IPVS: lookup/in TCP 11.171.172.80:56750->11.171.172.80:80 hit
[ 1091.368097] IPVS: ip_vs_conn_drop_conntrack: dropping conntrack with tuple=11.171.172.80:56750->11.171.172.80:80/6 for conn 11.171.172.80:56750->11.171.172.80:80->9.0.3.130:80/6:5
[ 1091.368102] IPVS: ip_vs_conn_drop_conntrack: no conntrack for tuple=11.171.172.80:56750->11.171.172.80:80/6
[ 1091.368106] IPVS: ip_vs_conn_drop_conntrack: dropping conntrack with tuple=11.171.172.80:56750->11.171.172.80:80/6 for conn 11.171.172.80:56750->11.171.172.80:80->9.0.3.130:80/6:5
[ 1091.368109] IPVS: ip_vs_conn_drop_conntrack: no conntrack for tuple=11.171.172.80:56750->11.171.172.80:80/6
[ 1091.368114] IPVS: Unbind-dest TCP c:11.171.172.80:56750 v:11.171.172.80:80 d:9.0.3.130:80 fwd:M s:5 conn->flags:10100 conn->refcnt:0 dest->refcnt:13880
[ 1092.368128] IPVS: Enter: ip_vs_out, net/netfilter/ipvs/ip_vs_core.c line 1132
[ 1092.368134] IPVS: lookup/out TCP 11.171.172.80:56750->11.171.172.80:80 not hit
[ 1092.368138] IPVS: lookup/in TCP 11.171.172.80:56750->11.171.172.80:80 not hit
[ 1092.368141] IPVS: lookup service: fwm 0 TCP 11.171.172.80:80 hit
[ 1092.368143] IPVS: lookup/in TCP 11.171.172.80:80->11.171.172.80:56750 not hit
[ 1092.368144] IPVS: ip_vs_wlc_schedule(): Scheduling...
[ 1092.368146] IPVS: WLC: server 9.0.3.130:80 activeconns 0 refcnt 13879 weight 1 overhead 13878
[ 1092.368154] IPVS: Bind-dest TCP c:11.171.172.80:56750 v:11.171.172.80:80 d:9.0.3.130:80 fwd:M s:5 conn->flags:100 conn->refcnt:1 dest->refcnt:13880
[ 1092.368158] IPVS: Schedule fwd:M c:11.171.172.80:56750 v:11.171.172.80:80 d:9.0.3.130:80 conn->flags:10140 conn->refcnt:2
[ 1092.368163] IPVS: TCP input  [S...] 9.0.3.130:80->11.171.172.80:56750 state: NONE->SYN_RECV conn->refcnt:2
[ 1092.368165] IPVS: Enter: ip_vs_nat_xmit, net/netfilter/ipvs/ip_vs_xmit.c line 625
[ 1092.368172] IPVS: ip_vs_update_conntrack: Updating conntrack ct=ffff8801e67ab020, status=0x100, ctinfo=2, old reply=11.171.172.80:80->11.171.172.80:56750/6, new reply=9.0.3.130:80->11.171.172.80:56750/6, cp=11.171.172.80:56750->11.171.172.80:80->9.0.3.130:80/6:3
[ 1092.368183] IPVS: Enter: ip_vs_out, net/netfilter/ipvs/ip_vs_core.c line 1132
[ 1092.368187] IPVS: lookup/out TCP 9.0.3.130:80->11.171.172.80:56750 hit
[ 1092.368220] IPVS: Enter: ip_vs_out, net/netfilter/ipvs/ip_vs_core.c line 1132
[ 1092.368224] IPVS: lookup/out UDP 10.10.0.49:64000->10.10.0.21:38047 not hit
[ 1092.368244] IPVS: Leave: ip_vs_nat_xmit, net/netfilter/ipvs/ip_vs_xmit.c line 698
[ 1092.368538] IPVS: Enter: ip_vs_out, net/netfilter/ipvs/ip_vs_core.c line 1132
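
Note the one-second gap between the 1091.368 and 1092.368 entries: it looks like
the first SYN hits a stale connection entry that gets unbound (and its conntrack
dropped), and only the client's retransmitted SYN is scheduled to the real server.
One second is exactly Linux's initial TCP retransmission timeout, which would
explain the delay. The retransmission can be confirmed on the client side with
something like (interface and port here are illustrative):

    # print every SYN seen on any interface for port 80
    tcpdump -ni any 'tcp[tcpflags] & tcp-syn != 0 and port 80'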

Attachment: ipvs.log
Description: Binary data


--
Yours,
Sergey Urbanovich

On Jan 24, 2018, at 2:11 PM, Andrew Smalley <asmalley@xxxxxxxxxxxxxxxx> wrote:

Hello Sergey

I had a quick look at your configuration, and the first thing that comes to
mind is: can you do this on a single host?

The first thing I see is that the connection is in SYN_RECV, as shown below.
This means you have not resolved the ARP issue, which brings me back to the
question: can you do this on the same host, the way you are now?

NONE->SYN_RECV conn->refcnt:2
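
A quick way to watch these entries and their states on the director is the
standard listing command (not from your logs, just for checking):

    # list the IPVS connection table, numeric output
    ipvsadm -Lnc

An entry stuck in SYN_RECV usually means the real server's reply never made it
back through the director.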

I don't want to say a flat no. However, try moving your docker nginx guest
somewhere else and not using 127.0.0.1, as that is where the VIP/32 sits on the
real server. Since LVS in DR mode cannot talk to itself, I suspect the problem
is that you cannot host services internally on the director, or connect from
the same host to the director's VIP.
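
For completeness, the usual way to deal with the ARP issue in DR mode is, on
each real server, roughly this (a sketch using your VIP; adjust to your setup):

    # keep the VIP on loopback and stop the real server answering ARP for it
    ip addr add 11.171.172.80/32 dev lo
    sysctl -w net.ipv4.conf.all.arp_ignore=1
    sysctl -w net.ipv4.conf.all.arp_announce=2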

Anyone wish to correct me? It makes sense when the director is a separate VM
(KVM etc.): the docker guests can then share a host with each other, and we no
longer use 127.0.0.x addresses for communication.

Andruw Smalley

Loadbalancer.org Ltd.

www.loadbalancer.org
+1 888 867 9504 / +44 (0)330 380 1064
asmalley@xxxxxxxxxxxxxxxx

Leave a Review | Deployment Guides | Blog
_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/

LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://lists.graemef.net/mailman/listinfo/lvs-users