
Re: [lvs-users] LinuxWorld Expo

To: aaron@xxxxxxxxxxxxxx
Subject: Re: [lvs-users] LinuxWorld Expo
Cc: Nick Christopher <nwc@xxxxxxx>, lvs <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Wensong Zhang <wensong@xxxxxxxxxxxx>
Date: Wed, 16 Feb 2000 01:28:39 +0800
Hi,

It sounds like you are proposing a match between two "different"
products. It seems that you don't know the whole history of
TurboCluster. :) In case you or other people don't know all of it,
I present some of the history as follows:

  May 1998:     LVS project was started and first IPVS code released.
  Feb 1999:     I was notified by Pacific Hitech that they were using
                the IPVS code.
  Mar 1999:     Pacific Hitech demonstrated TurboCluster at LinuxWorld
                Expo 1999 and announced it as their own first
                clustering technology, without a word about the LVS
                project. In fact, TurboCluster was the IPVS code plus
                their own cluster management & configuration programs.
                I said that Pacific Hitech didn't show any respect for
                others' work. Later, the president of Pacific Hitech
                apologized for it and promised that it would not
                happen again.
  April 1999:   I accepted two do-it-yourself PCs donated by Pacific
                Hitech, with no obligation on my side to them.
  May 1999:     TurboLinux posted many messages about their own first
                clustering software on news servers, freshmeat and so
                on. Again, there was no word about LVS in their
                announcements. We discussed it a lot on the Linux-HA
                list. With our help, they changed their
                non-distribute/non-copy license to apply only to their
                cluster management & configuration programs, not to
                the kernel and IPVS code, and not to the whole
                TurboCluster package. They explained that their lawyer
                didn't know the GPL license. :)
  Oct 1999:     TurboLinux said that they had rewritten the IPVS code
                and called it IPCS. I downloaded
                cluster-kernel-4.0.5-19991009.tgz and had a look at
                it. Nothing surprised me: it is all based on IPVS code
                and ideas. IPCS only supports the
                tunnelling/direct-routing forwarding methods and
                weighted round-robin scheduling (a small sketch of
                weighted round-robin follows below). The data
                definitions are similar to those of IPVS, and the
                packet handling and scheduling algorithm are the same
                as those of IPVS. I doubt that it can be called a
                complete rewrite, and I think that they should
                acknowledge IPVS in their code too.

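(For anyone not familiar with the scheduling mentioned above, here is a
minimal Python sketch of weighted round-robin in the spirit of the IPVS
"wrr" scheduler. The server names and weights are made up for
illustration only; the real implementation is the kernel IPVS code.)

    # Illustrative weighted round-robin, not the actual IPVS kernel code.
    from math import gcd
    from functools import reduce

    def weighted_round_robin(servers):
        """servers: list of (name, weight) pairs; yields names endlessly."""
        weights = [w for _, w in servers]
        max_w = max(weights)
        step = reduce(gcd, weights)   # lower the threshold by the gcd each cycle
        i, cw = -1, 0                 # last index picked, current weight threshold
        while True:
            i = (i + 1) % len(servers)
            if i == 0:
                cw -= step
                if cw <= 0:
                    cw = max_w
            if weights[i] >= cw:
                yield servers[i][0]

    # Example: three real servers weighted 4, 3 and 2.
    rr = weighted_round_robin([("rs1", 4), ("rs2", 3), ("rs3", 2)])
    print([next(rr) for _ in range(9)])   # rs1 four times, rs2 three times, rs3 twice

With these weights, nine consecutive picks distribute 4:3:2 across the
three servers, which is exactly the property such a scheduler is meant
to provide.
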
So, do you think it is necessary to have a match between them? :)

BTW, from June 1999 up to now, TurboLinux has done a lot of
demonstrations and press releases of TurboCluster in China too, and
has rolled out their national training program. Again, not a word is
said about the LVS project; instead they say that they brought the
best clustering technology to China, so few people in China know that
their advanced product is actually based on the idea and code of a
student in China. It is a joke, isn't it? :) *shrug*

Best regards,

Wensong


Aaron McKee wrote:
> 
> Wensong Zhang wrote:
> >
> > Actually, there was a joint demonstration of the
> > LVS/ldirectd/heartbeat/lvs-gui package at the LinuxWorld Expo. Horms
> > told me:
> >   "As I wasn't at the show I am still picking up reports of
> >   what happened but I understand that there was some
> >   heavy connections/second testing conducted on the 16 web server
> >   farm that was put together and it made Turbo Linux - in the booth
> >   opposite - look a little lean."
> >
> > ;-)
> >
> > Wensong
> 
> Hello guys,
> 
> Since I was probably the one that cited some internal benchmark
> statistics, I believe it's probably up to me to clarify the
> configuration. I'll preface this with the statement that our goal was
> not to hammer TurboCluster Server as hard as we possibly could. We were
> simply looking at what it would take to achieve a reasonable value.
> 
> Internally, we've achieved 20,000 connections/s using our routing
> software. The configuration was 2 Celeron-based servers and 2
> Celeron-based clients. Additionally, we were using a stock untuned Linux
> 2.2.13 kernel. In our benchmark, the most significant bottleneck we
> identified was the Linux kernel itself.
> 
> The individuals I spoke with at VA Linux claimed 50,000 connections/s.
> If the above configuration setup refers to this value, it looks like 16
> real servers were required to generate this traffic. As VA makes
> top-notch hardware, I suspect they were using higher-end
> servers than our small Celeron systems. The VA engineers also mentioned
> that they were using a modified 2.2 or 2.3 kernel, with improved TCP/IP
> performance characteristics.
> 
> All of these numbers are relatively meaningless, however, without a
> standardized benchmark framework. As any benchmarker knows, any number
> can be achieved if you modify the test harness and setup sufficiently.
> :)
> 
> In the spirit of good healthy fun, perhaps it would be interesting to
> have a benchmark "cook-off" one of these days? We could either do this
> as an engineering project, just for the fun and curiosity of the
> participants, or as something more formal. Regardless of who comes out
> on top, it would probably be great information for each of our
> respective engineering teams to take back and use to further refine each
> of our products. To keep it friendly, I propose that the losing team buy
> the winning team a round of beer. :)
> 
> This would probably be most convenient for those of us in the Bay Area,
> although I'd be happy to tell our Beijing office to take Wensong Zhang
> out for a mao tai. :)
> 
> Best Regards,
> Aaron
> 
> --
> Aaron McKee
> Sr. Clustering Products Manager
> TurboLinux Inc.
>

----------------------------------------------------------------------
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
To unsubscribe, e-mail: lvs-users-unsubscribe@xxxxxxxxxxxxxxxxxxxxxx
For additional commands, e-mail: lvs-users-help@xxxxxxxxxxxxxxxxxxxxxx
