To: Julian Anastasov <ja@xxxxxx>
Subject: Re: [v2 PATCH 0/4] IPVS: Backup Adding Ipv6 and Persistence support
Cc: Hans Schillstrom <hans.schillstrom@xxxxxxxxxxxx>, LVS-Devel <lvs-devel@xxxxxxxxxxxxxxx>, "wensong@xxxxxxxxxxxx" <wensong@xxxxxxxxxxxx>, "daniel.lezcano@xxxxxxx" <daniel.lezcano@xxxxxxx>
From: Simon Horman <horms@xxxxxxxxxxxx>
Date: Mon, 8 Nov 2010 15:21:23 +0900
On Sat, Nov 06, 2010 at 08:57:11PM +0200, Julian Anastasov wrote:
> 
>       Hello,
> 
> On Sat, 6 Nov 2010, Simon Horman wrote:
> 
> >>    Maybe it is better for the svc to hold a module refcnt for
> >>the PE, as currently implemented. If, in the backup, the svc and dest
> >>are not found when creating a connection with PE data, then just
> >>ignore the connection.
> >
> >As far as I understand, the existence of svc and dest hasn't
> >really been a requirement for synchronisation, except in corner cases.
> >Personally I think that's a good thing. But making it a requirement
> >would certainly simplify things.
> 
>       I mean a requirement only for connections with PE data.
> But let's leave it this way, so that we can support deferred
> binding to dest.
> 
> >>The PE name must match the PE attached
> >>to the svc (ip_vs_find_dest). This check must exist. The benefit
> >>comes from the fact that, since an svc is freed only after all its
> >>connections are freed, cp->dest->svc is always valid. Then there is
> >>no need for cp->pe. ip_vs_conn_hashkey_conn() has checks for
> >>cp->dest, so there is no point in trying to create synced
> >>connections in the backup with PE data but without cp->dest.
> >
> >But dest could be created as part of failover and thus
> >exist by the time any packets need to be forwarded, right?
> >
> >There are cases, such as where the backup is also a real-server,
> >where it is rather inconvenient for the svc and dest to have to
> >exist while synchronisation information is being received.
> 
>       OK, then we should not reach request_module;
> a new arg to ip_vs_pe_get() can specify that we are calling it
> from interrupt context, so the PE must already be loaded as a module.
> Then cp->pe can hold the reference to the PE until
> we bind the template to svc and dest, where svc->pe
> should be compared to ct->pe. ct->pe is needed only
> for this purpose, because later it can be determined
> from svc.

Do you have a preference for this approach
over making ip_vs_pe_sip non-modular?
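
For concreteness, here is a minimal sketch of how I read your
suggestion. The may_load argument and the __ip_vs_pe_get() lookup
helper are names I am assuming for illustration, not a committed API:

	/* Sketch only: ip_vs_pe_get() grows a flag so that callers in
	 * softirq context (the backup sync path) never reach
	 * request_module(), which may sleep. */
	struct ip_vs_pe *ip_vs_pe_get(const char *name, bool may_load)
	{
		struct ip_vs_pe *pe = __ip_vs_pe_get(name);

		/* Only process-context callers may trigger module load. */
		if (!pe && may_load) {
			request_module("ip_vs_pe_%s", name);
			pe = __ip_vs_pe_get(name);
		}
		return pe;
	}

The backup sync path would then pass may_load = false and simply
ignore the connection if the PE is not already loaded.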

> But I see another problem, which is not backup
> specific: how does ip_vs_sip_ct_match know that ct->pe_data
> was created by ip_vs_sip_fill_param and not by another PE?
> We need to compare p->pe with cp->pe in ip_vs_ct_in_get
> before calling ct_match.

Yes, I agree that is a problem.

In practice this won't affect anyone at this time,
as there is only one PE.

How about this, which applies on top of
"IPVS: Add persistence engine to connection entry".

From: Simon Horman <horms@xxxxxxxxxxxx>
Subject: IPVS: Only match pe_data created by the same pe

Only match persistence engine data if it was
created by the same persistence engine.

Reported-by: Julian Anastasov <ja@xxxxxx>
Signed-off-by: Simon Horman <horms@xxxxxxxxxxxx>

Index: lvs-test-2.6/net/netfilter/ipvs/ip_vs_conn.c
===================================================================
--- lvs-test-2.6.orig/net/netfilter/ipvs/ip_vs_conn.c   2010-11-08 15:18:57.000000000 +0900
+++ lvs-test-2.6/net/netfilter/ipvs/ip_vs_conn.c        2010-11-08 15:19:02.000000000 +0900
@@ -354,7 +354,7 @@ struct ip_vs_conn *ip_vs_ct_in_get(const
 
        list_for_each_entry(cp, &ip_vs_conn_tab[hash], c_list) {
                if (p->pe_data && p->pe->ct_match) {
-                       if (p->pe->ct_match(p, cp))
+                       if (p->pe == cp->pe && p->pe->ct_match(p, cp))
                                goto out;
                        continue;
                }
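
Comparing the PE pointers before invoking ct_match() guarantees that
a PE's ct_match() only ever sees pe_data that was written by the
same PE, so it is free to interpret the blob in its own format.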
--