>
> Hello,
>
> Why did you switch to a private session? There are proxy server
> gurus here who can help you. Now they can't see my reply :) Where
> are the secrets? :) I'd prefer this thread to stay public.
My fault, the message is back on the mailing list :)
I'm putting your whole reply in, in case someone wants to follow...
> > > You want the client to reconnect after the director is
> > > informed of the redirect?
> >
> > Yep, I can't be sure the client will even reconnect, but if it does,
> > the next connections would be redirected to the node that owns the
> > "write locks".
>
> Yes, the template has a timeout; we can't wait forever.
>
> > > This will break the rule that each persistent
> > > connection has the same destination (real server) as the connection
> > > template - the first connection still remains linked to the first real
> > > server even after the redirect.
> >
> > It's this linking that the API would update, so the next time the
> > director looked up the linking in its hash tables (using client
> > IP/server port, if I read correctly) the returned mapping would have
> > an updated real server IP.
>
> The template's real server is used for the next connections from
> this client.
Correct.
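Just so we're picturing the same thing, here's a toy user-space sketch of
the linking and the update I have in mind. All names are made up; this is
not the real ip_vs code, and I'm assuming the template is keyed on the
(masked) client IP plus virtual port:

#include <stdint.h>
#include <stddef.h>

/* Toy model of the director's connection templates: each maps a
 * (masked) client IP + virtual port to a real server. */
struct template_entry {
    uint32_t client_net;   /* client IP & persist netmask */
    uint16_t vport;        /* virtual service port */
    uint32_t rs_addr;      /* current real server address */
};

#define NTEMPLATES 256
static struct template_entry templates[NTEMPLATES];

/* New connections from a client follow its template while it lives. */
static struct template_entry *
template_lookup(uint32_t client_ip, uint32_t mask, uint16_t vport)
{
    uint32_t key = client_ip & mask;
    for (int i = 0; i < NTEMPLATES; i++)
        if (templates[i].client_net == key && templates[i].vport == vport)
            return &templates[i];
    return NULL;
}

/* The proposed API call: repoint the client's template at another
 * real server.  Established connections keep their old destination;
 * only the template (and hence future connections) changes. */
static int
template_redirect(uint32_t client_ip, uint32_t mask, uint16_t vport,
                  uint32_t new_rs)
{
    struct template_entry *t = template_lookup(client_ip, mask, vport);
    if (t == NULL)
        return -1;         /* template expired or never existed */
    t->rs_addr = new_rs;   /* next connections go to new_rs */
    return 0;
}

Note the mask in the lookup key: with the default /32 each client gets its
own template, but with a wider persistence mask a whole network shares one,
which is exactly the proxy problem discussed further down.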
>
> > > Currently, maybe this is not a problem because after the
> > > connection creation (the scheduling) the template is not referenced
> > > from the connection, but I'm not sure about the future.
> >
> > Are you saying the template isn't looked up each time, only the
> > first time the TCP connection is established? If the template is only
> > looked up once per connection then we shouldn't have the problem of
> > the director redirecting the client to the incorrect real server mid
> > TCP session. Not sure if this relates to what you said...
>
> The schedulers are used only when the connections are created.
Kewl, then there shouldn't be as many hassles.
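i.e. (rough sketch again, with hypothetical stand-ins for the director's
internals) the scheduler only runs when there's no template to follow, so
a repointed template changes future connections only:

#include <stdint.h>
#include <stddef.h>

struct template_entry { uint32_t client_net; uint16_t vport; uint32_t rs_addr; };

/* Hypothetical stand-ins for the director's internals: */
struct template_entry *template_lookup(uint32_t ip, uint32_t mask, uint16_t vport);
uint32_t run_scheduler(uint16_t vport);          /* rr/wrr/... pick */
void template_create(uint32_t net, uint16_t vport, uint32_t rs);

/* Decision on a brand-new connection (e.g. a TCP SYN). */
uint32_t schedule_new_conn(uint32_t client_ip, uint32_t mask, uint16_t vport)
{
    struct template_entry *t = template_lookup(client_ip, mask, vport);
    if (t != NULL)
        return t->rs_addr;              /* persistence: follow template */
    uint32_t rs = run_scheduler(vport); /* scheduler runs only here */
    template_create(client_ip & mask, vport, rs);
    return rs;
}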
>
> No, existing TCP sessions can't be redirected, but the 2nd and
> next connections can be scheduled to the new place.
>
> client -> real server1 SYN
> real server1 -> client SYN+ACK
> client -> real server1 ACK
> client <-> real server1 DATA (the real server selects
> new real server)
> real server1 <-> director inform the director's daemon to
> update the client's template to
> the new RS2
> real server1 <-> client We don't care what happens with
> the first session, i.e. can remain
> established. We can't move this
> session!
> client -> real server2 SYN
What would be REALLY kewl is if the TCP layer could transfer its session
state mid-session to another real server in co-ordination with the
director. But that's only in fairy land :)
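In the meantime, if the update API existed, the message from real server 1
to the director's daemon in the exchange above could be tiny. A completely
hypothetical wire format, just to show how little state is involved:

#include <stdint.h>

/* Hypothetical datagram a real server could send to the director's
 * daemon to request the template update shown above.  Nothing like
 * this exists today; it's only to illustrate the idea. */
struct redirect_request {
    uint32_t client_ip;   /* whose template to repoint */
    uint16_t vport;       /* virtual service port */
    uint32_t new_rs_ip;   /* node that owns the write locks (RS2) */
} __attribute__((packed));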
>
> Before the timeout for the 1st connection expires, the 1st
> connection uses RS1, and the next connections from this client (and
> the template) use RS2.
>
> > > The connections don't time out immediately, i.e. the transports
> > > have some requirements in this direction, the TCP TIME_WAIT for
> > > example.
> >
> > > But I agree that such a feature is useful - the ability to create
> > > this kind of affinity. But there are implementation issues we must
> > > resolve for this to work.
> > >
> > > Currently, the services have a persistent network mask, so this
> > > redirect will move all clients from the same network (oh, yes, maybe
> > > you are using the default /32 mask) if we are going to play with the
> > > connection template.
> >
> > I thought of that "sticky persistence"/proxy problem. Basically my
> > server will have to ignore certain IP ranges and just put up with the
> > performance hit. Otherwise, if heaps of people hit the server through
> > the one proxy, the real servers would be constantly telling the
> > director to swap that client, because it is constantly using different
> > segments of the cache (one client always uses the same cache segment,
> > but because multiple clients are using the one IP, the cache segments
> > getting used by that IP could be controlled by different real
> > servers).
>
> Maybe it's better to use the default /32 mask. I'm not sure, but
> there are many proxy server gurus on this mailing list. I don't know if
> your setup is very different. In fact, the thread you renamed deals with
> consistent hashing - for now the only way to stop these redirects: in
> the ideal case, one piece of content can be found in only one proxy
> server. Is your case with a transparent proxy server or not?
My server isn't a proxy server, it's a server in its own right.
Here's a brief outline of how the data is organised:
Server 1->n-> Site 1->n-> Client
Basically, one server holds many "sites", and each "site" can be updated
by multiple clients. For each site there is a node (real server) which
has authority over the write locks (mutexes and such).
When a client connects to a real server that doesn't have authority for
the site the client belongs to, that real server has to talk to the real
server that is the authority to get the write locks.
If the client is directly connected to the real server that has authority
over the site, all the locks are in-process: much faster/lower overhead.
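In code terms the choice each real server faces looks roughly like this
(made-up names, simplified from my actual protocol):

#include <stdint.h>

struct site {
    char     name[32];
    uint32_t authority_ip;   /* the one real server owning this site's locks */
};

static uint32_t my_ip;       /* this real server's address */

/* Hypothetical lock helpers: */
int lock_locally(const struct site *s);                    /* in-process */
int lock_via_peer(uint32_t peer_ip, const struct site *s); /* cross-node */

/* Locks are in-process when we're the authority; otherwise we pay a
 * network round trip to the owner, which is why redirecting the
 * client to the authority node is such a win. */
int acquire_write_lock(const struct site *s)
{
    if (s->authority_ip == my_ip)
        return lock_locally(s);
    return lock_via_peer(s->authority_ip, s);
}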
>
> In any case, it seems you are trying to provide equal load
> balancing for the proxy servers. The users voting for consistent
> hashing claim that equal load will be achieved implicitly and one
> piece of content will not be loaded <n> times. Of course, the load
> generated by other processes on the real servers is ignored, and if
> crond starts updatedb or another admin from the team uses
> "tar xf big_file.tar" we can only hope this load will not hurt the
> service.
>
I don't require consistent hashing because my server isn't a proxy
server, but I can see the logic and it makes total sense to me. With an
API for the real servers to update their load weighting on the director
it would become even better.
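Until then, something like this is possible today from user space: a
monitor loop that re-sets the WRR weight with ipvsadm's edit-server
command. The VIP/RS addresses below are placeholders, and the load metric
is a crude stand-in for whatever you actually trust:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Crude load metric: scale the 1-minute loadavg to 0..100
 * (4.0 => 100).  Replace with whatever metric you trust. */
static int get_load(void)
{
    double la = 0.0;
    FILE *f = fopen("/proc/loadavg", "r");
    if (f) { fscanf(f, "%lf", &la); fclose(f); }
    int pct = (int)(la * 25.0);
    return pct > 100 ? 100 : pct;
}

int main(void)
{
    char cmd[128];
    for (;;) {
        int w = 100 - get_load();   /* lighter load => higher weight */
        if (w < 1)
            w = 1;                  /* weight 0 would quiesce the server */
        snprintf(cmd, sizeof(cmd),
                 "ipvsadm -e -t 10.0.0.1:80 -r 10.0.0.2:80 -w %d", w);
        system(cmd);                /* edit the real server's weight */
        sleep(10);
    }
}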
> The other solution is to use WRR scheduling with load monitoring
> software that reports the load, and this information is used to set
> the real server weight. But in the case of the proxy servers this
> leads to downloading the same content <n> times in the worst case,
> i.e. in each proxy server. After this point you care only about the
> load balancing.
>
> The other thing (which I don't know) is what part of the
> downloaded content is not cached (cgi?). I.e. we don't care which real
> server downloads this content, but we do care about the balancing in
> this case. But I don't have such stats. I also don't know what happens
> when these proxy servers are configured to query each other for the
> content. That avoids the external traffic too.
>
> > > So, we have to clear up this idea; well-defined features
> > > will not create problems later.
> > Agreed
> >
> > > And I of course don't know your case and whether there are other
> > > solutions for your problem. Only you can be sure that this feature
> > > is the only solution :) Some LVS forwarding methods allow you to
> > > redirect to real servers, i.e. bypassing the director. This is
> > > possible when the real servers are reachable directly by the
> > > clients. For the web this can look like this: redirect from www to
> > > www1. I don't know what the effect of all these redirects would be.
> > > Maybe you are going to double the number of requests? And there are
> > > cases where such features are very well handled using L7 switches.
> >
> > Since I've created a server with its own protocol, Layer 7 switches
> > wouldn't know the ideal real server to redirect to (unless I'm
> > mistaken about their capabilities).
>
> OK, the info is in the real servers. You can try browsing this
> mailing list; there are many ideas for implementing clusters of proxy
> servers. I'm not sure how many redirects will be sent, (n-1)/n? Are
> you trying to keep one piece of content in one cache, or one client
> in one cache?
>
> > I'm planning on using direct routing.
> >
> > Hope that gets things clearer; unfortunately I'm getting tired from
> > working all day, so I hope I make sense :)
> >
> >
> > Mark Pentland
>
>
> Regards
>
> --
> Julian Anastasov <ja@xxxxxx>
Mark