Hello,
On Mon, 13 Nov 2000, Anush Elangovan wrote:
> Hi,
>
> > > I am testing an ipvs-0.0.5 VS-DR setup. I have 2 real servers, both
> > > running 2.4.0-test10 (one on RH7.0 and the other RH6.2). I have all
> > > three (2 real and director) machines on the same hub on the 10.x.x.x
> > > network. Now when I launch testlvs from a machine on the 192.168.0.x
> > > network, my director and the real server give me warnings about the
> > > ip_conntrack module.
> >
> > LVS can't work together with ip_conntrack and iptable_nat. There is
> > double connection tracking. You can still use ipchains.o and the old
> > ipchains binary.
>
> Does this mean it will not work? (I have LVS and ip_conntrack working
> without knowing the above fact.)
>
> So, can I ignore the ip_conntrack errors and continue using LVS (I built
> conntrack into the kernel)?
You can't use both!
> Since ip_conntrack is a part of the 2.4 netfilter module, can we use that to
> do the connection monitoring rather than implementing it separately in LVS?
It is not easy. There are some LVS features that can't be
implemented in the current 2.4 model:
- direct routing (the input routing decision is strictly based on the
  IP header fields, so LVS can't stay in the PRE_ROUTING chain and
  call the input routing)
- accepting packets using advanced routing (this requires LVS to work
  with the packets after the routing decision)
- there are no rules that allow two or more connection tracking modules
  to coexist
> > But ip_conntrack is working with LVS's connections => big problems.
>
> Any previous experiences that anybody has had with them?
LVS/DR and LVS/TUN can work (slowly) with ip_conntrack; LVS/NAT
can't work with iptable_nat.
> > > 2) Does anyone have ideas on why the RH6.2 machine doesn't give me
> > > the same error.
> >
> > More RAM?
>
> yes.
The initial value of ip_conntrack_max depends on the RAM, from
net/ipv4/netfilter/ip_conntrack_core.c:
/* Idea from tcp.c: use 1/16384 of memory. On i386: 32MB
* machine has 256 buckets. 1GB machine has 8192 buckets. */
ip_conntrack_htable_size
= (((num_physpages << PAGE_SHIFT) / 16384)
/ sizeof(struct list_head));
ip_conntrack_max = 8 * ip_conntrack_htable_size;
printk("ip_conntrack (%u buckets, %d max)\n",
ip_conntrack_htable_size, ip_conntrack_max);
IMO, this is a very bad estimation: the amount of RAM is used, but not
the size of the connection structure. You can tune the max value; the
hash table size remains small, but that is not fatal.
> > > 3) Not related to the above: How can I calculate the total number of
> > > TCP/UDP
> > > connections that my cluster can support in VS-DR, and also the number of
> > > connections that my realservers can support.
> >
> > There is an "ip_vs" entry in /proc/slabinfo; it shows the actual
> > size of one connection entry (also displayed on boot). Now divide the
> > amount of free RAM by this value. The result is a very optimistic value
> > for the max number of entries you can create in LVS. But you are actually
> > asking about the ip_conntrack_max limit.
>
> I'm sorry, but I looked for help on how to read the slabinfo output and
> could not find any. How do I find the free memory, and what do the values
> in slabinfo mean? Any pointers on where this is described? Actually, I did
> want to know the maximum number of connections that my LVS can support.
Divide the free RAM by the object size; see below!
Columns (from mm/slab.c:proc_getdata):
1. name
2. active objects ***
3. number of objects
4. object size ***
5. active slabs
6. number of slabs
7. gfporder (order of pgs per slab)
ip_vs 0 0 128 0 0 1
Currently LVS uses 128-byte objects (connection table entries).
ip_conntrack 0 0 352 0 0 1
352/128 = 2.75
1GB RAM => 1073741824/352 = 3050402 entries
3050402 entries in 8192 buckets => 3050402/8192 = 372 entries per bucket
65536 buckets can speed up the lookups 10 times
OK, maybe these calculations are not very accurate, but they are good
enough for a 128MB host. You can safely increase the max value, but this
is not related to LVS itself.
> Thanks
> Anush
Regards
--
Julian Anastasov <ja@xxxxxx>