Hello,
On Tue, 15 May 2001, Wensong Zhang wrote:
> On Tue, 15 May 2001, carl.huang wrote:
>
> > hi,
> >
> > HTTP/1.1 provides persistent connections, that is, several requests from
> > a client can be sent to the web server over the same TCP connection. In
> > other words, we establish a TCP connection according to the first request,
> > but the following requests may ask for different content, and the web
> > server we chose before perhaps can't serve them. How can we resolve this?
> >
>
> If the backend web servers support HTTP/1.1, we can pre-establish
> several connections to different web servers, and multiplex those
> connections when user requests come. The overhead of establishing
Even for HTTP/1.0 real servers, I think. You are talking about
some kind of optimization if they support 1.1, but the problem is that
from one incoming HTTP/1.1 connection we have to:
- create many internal connections, to different real servers, according
to the 1.1 pipeline
- close after the first request and/or claim HTTP/1.0 support, i.e.
keep-alive off
These conditions are required when the URL switching rules
are strict and the content is not the same on all real servers. In such
cases we can't serve all requests from one real server.
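The second condition above can be sketched as a header rewrite on each
forwarded request. This is only an illustration of the idea, not ktcpvs
code; the function name and the naive parsing are my own assumptions:

```python
def downgrade_request(raw: bytes) -> bytes:
    """Rewrite one HTTP request so the real server treats the internal
    connection as single-request: drop any Connection/Keep-Alive headers
    and insert an explicit 'Connection: close'.  Illustrative sketch only;
    a real balancer must handle folded headers, bodies, pipelining, etc."""
    head, sep, body = raw.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    request_line, headers = lines[0], lines[1:]
    kept = [h for h in headers
            if not h.lower().startswith((b"connection:", b"keep-alive:"))]
    kept.append(b"Connection: close")
    return b"\r\n".join([request_line] + kept) + sep + body

req = (b"GET /a.html HTTP/1.1\r\n"
       b"Host: www.example.com\r\n"
       b"Connection: keep-alive\r\n\r\n")
print(downgrade_request(req).decode())
```

After the real server answers and closes, the balancer is free to route
the next pipelined request from the same client connection to a
different real server.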
See what Google gives me, very interesting:
http://www.w3.org/Protocols/HTTP/Performance/Pipeline.html
Results from this page:
bandwidth savings: 2-40%
modem download time: 60% compared to HTTP/1.0
> connections between the ktcpvs and web servers can be avoided, so it
> should be more efficient.
Yes, maybe many requests can be sent through one internal
connection. We have to follow the HTTP spec, of course.
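The pre-established-connections idea amounts to a pool of persistent
backend connections keyed by real server. A toy sketch, with names of my
own choosing (not ktcpvs internals) and a pluggable connect factory so
the reuse is visible:

```python
class BackendPool:
    """Toy pool of persistent connections to real servers, keyed by
    (host, port).  A real balancer would also handle idle timeouts,
    per-server limits and broken sockets."""
    def __init__(self, connect):
        self._connect = connect           # factory: (host, port) -> conn
        self._idle = {}                   # (host, port) -> [conn, ...]

    def get(self, host, port):
        conns = self._idle.get((host, port))
        if conns:
            return conns.pop()            # reuse an idle keep-alive conn
        return self._connect(host, port)  # else pay the setup cost once

    def put(self, host, port, conn):
        # return a still-open connection for the next request
        self._idle.setdefault((host, port), []).append(conn)

# usage with a fake factory that counts real connects
made = []
pool = BackendPool(lambda h, p: made.append((h, p)) or object())
c = pool.get("10.0.0.1", 80)
pool.put("10.0.0.1", 80, c)
c2 = pool.get("10.0.0.1", 80)   # served from the pool, no second connect
```

Each reuse saves a TCP three-way handshake (and slow-start ramp-up)
between the balancer and the real server.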
I'm still thinking about the low syscall overhead in Linux,
around 0.75 microseconds. It would be interesting to see the difference
between ktcpvs and "utcpvs", i.e. a similar server in user space :) Of
course, there may be overhead from kernel-user copies, etc. But some
things are easier to implement in the kernel.
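A back-of-envelope way to see that per-syscall cost is to time a cheap
syscall in a loop. This sketch is my own and measures from an
interpreter, so it includes interpreter call overhead and only bounds
the true kernel entry/exit cost from above:

```python
import os
import time

def syscall_cost(n=200_000):
    """Rough per-call wall time of a cheap syscall (getppid) from user
    space.  Includes Python function-call overhead, so the true
    kernel-crossing cost is somewhat lower than the number printed."""
    t0 = time.perf_counter()
    for _ in range(n):
        os.getppid()
    return (time.perf_counter() - t0) / n

print(f"~{syscall_cost() * 1e6:.2f} us per getppid() call")
```

Comparing such a number against per-request work (parsing, copies,
scheduling) gives a feel for how much a kernel-space server can really
save over a user-space one.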
> Regards,
>
> Wensong
Regards
--
Julian Anastasov <ja@xxxxxx>