Hi, all,
We run IPVS on the host and inside a container on that same host at the
same time, and IPVS on the host forwards network traffic to IPVS in the
container. We have run into a problem with this setup; the details are
as follows:
 ________________              _____________
|             ___|___         |container1   |
| host bridge|_vethA_|--------|192.168.1.232|
|192.168.1.193/26|            |_____________|
|             ___|___          _____________
|            |_vethB_|--------|container2   |
|________________|            |192.168.1.233|
                              |_____________|
container1 and container2 are connected to each other through a bridge
on the host, which is the gateway for both containers. IPVS runs on the
host with VIP 172.17.169.208.
host:
$ ipvsadm -l -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.169.208:80 rr
  -> 192.168.1.233:80             Masq    1      0          0
container2:
$ ipvsadm -l -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.233:80 rr
  -> 14.17.xx.yyy:80              Masq    0      0          0
Telnet from container1 to container2:
Access through IPVS on the host fails:
$ telnet 172.17.169.208 80
Trying 172.17.169.208...
telnet: connect to address 172.17.169.208: Connection refused
Direct access to IPVS in the container succeeds:
$ telnet 192.168.1.233 80
Trying 192.168.1.233...
Connected to 192.168.1.233.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
I think it is the ipvs_property flag that makes IPVS in the container
return NF_ACCEPT for this traffic, so the traffic goes straight up to
the TCP layer; please see ip_vs_in() (a simplified excerpt follows).
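For reference, here is roughly what the check at the top of ip_vs_in()
in net/netfilter/ipvs/ip_vs_core.c looks like (a paraphrased sketch,
not an exact quote of the upstream source):

/* Paraphrased sketch of the start of ip_vs_in(): an skb that the
 * host's IPVS has already processed carries skb->ipvs_property, so
 * the container's IPVS hook accepts it unmodified and it is handed
 * to the local TCP stack instead of being scheduled to a real server.
 */
static unsigned int
ip_vs_in(struct netns_ipvs *ipvs, unsigned int hooknum,
         struct sk_buff *skb, int af)
{
        /* Already marked as IPVS request or reply? */
        if (skb->ipvs_property)
                return NF_ACCEPT;

        /* ... normal connection lookup and scheduling follow ... */
}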
We should clear this flag when the SKB's netns has changed; a rough
sketch of what I have in mind is below. Any ideas?
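The sketch assumes skb_scrub_packet() in net/core/skbuff.c is a
reasonable place to clear the flag when a packet crosses namespaces;
the ipvs_reset() helper name and its placement are only suggestions,
not existing kernel API:

#include <linux/skbuff.h>

/* Sketch: helper that forgets the IPVS ownership mark; it compiles
 * away to nothing when IPVS is not configured.
 */
#if IS_ENABLED(CONFIG_IP_VS)
static inline void ipvs_reset(struct sk_buff *skb)
{
        skb->ipvs_property = false;
}
#else
static inline void ipvs_reset(struct sk_buff *skb)
{
}
#endif

/* Sketch of the change in skb_scrub_packet(): when the skb is about
 * to cross into another network namespace (xnet == true), clear
 * ipvs_property together with the other per-namespace state that is
 * already scrubbed there.
 */
void skb_scrub_packet(struct sk_buff *skb, bool xnet)
{
        /* ... existing resets (dst, conntrack, skb_iif, ...) ... */

        if (!xnet)
                return;

        ipvs_reset(skb);        /* new: host IPVS no longer owns this skb */
        skb->mark = 0;
}

With something like that in place, the container's ip_vs_in() would see
a clean skb and schedule it normally.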
Thanks,
Ye