Re: ktcpvs

To: "'lvs-users@xxxxxxxxxxxxxxxxxxxxxx'" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: ktcpvs
From: Wensong Zhang <wensong@xxxxxxxxxxxx>
Date: Fri, 11 May 2001 21:11:19 +0800 (CST)
On Thu, 10 May 2001, carl.huang wrote:

> Wensong,
>
> I read the source code and tested it briefly. It looks like it works fine,
> but I think the overhead is too high. What will the mechanism and

Sure, the overhead is too high. That code was just used to prove the concept
that application-level load balancing inside the kernel can work.

> architecture
> be for your next version?
>

Most likely, a multi-threaded, event-driven architecture will eventually be
adopted, but not in the next version. :)
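
Roughly, the idea would be a small pool of worker threads, each driving many
connections from its own event loop instead of dedicating one thread per
connection. A userspace sketch of the pattern, just to illustrate the shape
of it (none of this is ktcpvs code, and the names are made up):

#include <pthread.h>
#include <sys/epoll.h>

#define NWORKERS 4

static int epfd;                    /* event queue shared by the workers */

static void *worker(void *arg)
{
        struct epoll_event events[64];

        for (;;) {
                /* each worker sleeps here and wakes up to service
                   whichever connections became readable */
                int n = epoll_wait(epfd, events, 64, -1);
                for (int i = 0; i < n; i++) {
                        /* read the request, parse it, pick a real
                           server, relay data, re-arm the fd ... */
                }
        }
        return NULL;
}

int main(void)
{
        pthread_t tid[NWORKERS];

        epfd = epoll_create1(0);
        /* the listening socket and accepted connections would be
           registered with epoll_ctl(epfd, EPOLL_CTL_ADD, ...) here */

        for (int i = 0; i < NWORKERS; i++)
                pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < NWORKERS; i++)
                pthread_join(tid[i], NULL);
        return 0;
}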

> I have an idea; maybe it's silly. I want the load balancer to finish the
> three-way handshake with the user at the IP layer. The load balancer will
> cache the SYN and ACK packets from the user and replay them to the real
> server later. When the user then sends the HTTP request, the load balancer
> receives it, parses it, and looks up the rule table to get the destination.
> It then initiates a three-way handshake with the real server and routes the
> request to it. It works like a half-proxy. The following traffic is not
> proxied; the load balancer works as a router, because there is a data
> structure recording the connection's mapping. The load balancer should
> produce the handshake packets itself and record the offsets of seq and
> ack_seq. Subsequent packets will have their seq and ack_seq rewritten, and
> perhaps the address and port too when it works in NAT mode. The responses
> from the real server to the user must go through the load balancer, so it
> all works at the IP layer and the overhead should be low.
>
> Can it work?
>

Yeah, it can work.

This is a kind of TCP handoff protocol. I planned to explore it once ktcpvs
is basically done:
        KTCPVS + VS/DR (VS/TUN)  ==> our tcp handoff load balancing stuff
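
Just to make the handoff idea concrete, the per-connection state could be
little more than the chosen real server plus the two sequence-number offsets,
applied to every later packet in each direction. A rough sketch (all names
are made up for illustration, not ktcpvs code):

#include <stdint.h>
#include <arpa/inet.h>
#include <netinet/tcp.h>

/* one entry per handed-off connection */
struct handoff_conn {
        uint32_t caddr, raddr;      /* client and chosen real server */
        uint16_t cport, rport;
        uint32_t seq_off;           /* balancer ISN - real server ISN */
        uint32_t ack_off;           /* real server ISN - balancer ISN */
};

/* client -> real server: the client acks the balancer's sequence
   numbers, so shift ack_seq into the real server's numbering */
static void rewrite_to_server(const struct handoff_conn *c, struct tcphdr *th)
{
        th->ack_seq = htonl(ntohl(th->ack_seq) + c->ack_off);
        /* TCP (and IP, in NAT mode) checksums must be fixed up here */
}

/* real server -> client: shift seq back to the numbering the client
   saw during its handshake with the balancer */
static void rewrite_to_client(const struct handoff_conn *c, struct tcphdr *th)
{
        th->seq = htonl(ntohl(th->seq) + c->seq_off);
        /* TCP (and IP, in NAT mode) checksums must be fixed up here */
}

Whether the offset rewriting and checksum fix-ups are cheap enough in
practice is exactly what would need measuring.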

If you have time, please go ahead and work on it first. :)

Regards,

Wensong

> Regards,
> carl
>


