Re: 'Preference' instead 'persistence'?

To: lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: 'Preference' instead 'persistence'?
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Thu, 10 Oct 2002 00:56:06 +0200
Hello,

Sure, but that takes a bit of time for us. I *hoped* it would be possible to get that waiting back to a few minutes at most, but it seems it's too difficult or not worth it...

It's not worth it, IMNSHO. If you do it correctly you should have no problem. It's a typical upgrade procedure where you define a service level window with your customer in case something breaks, and where you have tested your upgrade beforehand. It doesn't really matter how long the upgrade takes, as long as:

a) no client, and I repeat no client, realizes what is going on, and
b) the paying customer will not be able to tell when you actually
   upgraded a RS

This implies that you have _no_ downtime at all (that's part of why you have a load balanced cluster) and dictates that the remaining RS together are fit enough to sustain the additional peak load put on them while one server is out for maintenance.
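A quick back-of-the-envelope check (with illustrative numbers, not figures from this thread) shows how much headroom the remaining realservers need: taking one of n identical servers out of a balanced cluster raises each survivor's utilization from u to u * n / (n - 1).

```python
# Illustrative headroom check for quiescing one of n realservers.
# The utilizations below are hypothetical example values.

def utilization_after_removal(u, n):
    """Per-server utilization after one of n equally loaded servers is removed."""
    return u * n / (n - 1)

for n, u in [(4, 0.60), (3, 0.60), (2, 0.60)]:
    survivors = utilization_after_removal(u, n)
    print(f"{n} servers at {u:.0%} -> {n - 1} servers at {survivors:.0%}")
```

Note how quickly the margin evaporates in small clusters: with only two realservers at 60% load, the survivor would be pushed past 100%, so maintenance without a service window is no longer safe.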

Hmm, I never had imbalance when doing maintenance work because the load
simply distributes on the remaining RS.

Not with persistency, unless you wait for quite a while. You won't get new clients, but it will take a while before existing clients disappear.

Also if you don't have persistency. It's very simple: once you set the weight of a RS to 0, you wait until the amount of connections drops to zero and off you go. It doesn't matter whether the service template was set up with persistency or not. And while waiting for the quiesced RS to calm down, the other RS have already equalized the resulting additional load. I haven't experienced anything different in over 3 years of using LVS as a load balancing machine. I certainly did have my problems with hardware load balancers.
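In ipvsadm terms the quiesce step is `ipvsadm -e -t VIP:port -r RS:port -w 0`, then watching the active connection count fall to zero. The toy scheduler below (a sketch, not LVS kernel code) only illustrates the behavior described above: a realserver with weight 0 receives no new connections, while the connections it already holds stay until they close on their own.

```python
import random

# Toy weighted scheduler modeling quiescence: weight 0 excludes a
# realserver from new assignments, but its existing connections remain.

class RealServer:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight
        self.active = 0          # currently established connections

def assign(servers):
    """Assign one new connection among servers with a positive weight."""
    candidates = [s for s in servers if s.weight > 0]
    chosen = random.choices(candidates, weights=[s.weight for s in candidates])[0]
    chosen.active += 1
    return chosen

rs = [RealServer("rs1", 1), RealServer("rs2", 1), RealServer("rs3", 1)]
rs[0].active = 50                # rs1 already holds 50 connections
rs[0].weight = 0                 # quiesce rs1 for maintenance

for _ in range(100):             # 100 new clients arrive
    assign(rs)

# rs1 stays at 50 until its clients leave; all new load lands on rs2/rs3
print([s.active for s in rs])
```

The remaining servers absorb the new load immediately, which matches the observation that the cluster has already equalized by the time the quiesced RS is fully drained.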

Point taken. Hmm, maybe the idea is not very good at all then. With every line further down your mail I'm doubting it more and more, at least...

Good. Because the next step would have been to show you a simulation with tc-ng. I'm glad I don't have to do that.

- If the target weight for the realserver is 0 we always reassign

Already done, yes.
With persistency? With persistency no reassigning occurs AFAICS.

Ok, yes, no reassigning, but new connections go to new RS and old connections stay. I still like the idea of soft persistency. But I have to think more about it.

Again, point taken. This is indeed rather tricky business now I think of it again. For offloading a quiesced server this approach should work fairly well, but for equalizing it's dangerous.

Exactly.
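To make the distinction above concrete, here is a toy model (hypothetical, not actual LVS or ipvsadm code) of the "soft persistency" idea being discussed: a persistence template is honored normally, but if its realserver has been quiesced to weight 0, the client is reassigned and the template rewritten. For offloading a quiesced server this works, but using the same rewrite for general equalizing would break session affinity, which is exactly the danger noted above.

```python
# Toy model of "soft persistency" (hypothetical sketch, not LVS code):
# clients stick to their template's RS, but a template pointing at a
# quiesced RS (weight 0) is rewritten to a live RS on the next request.

weights = {"rs1": 0, "rs2": 1, "rs3": 1}          # rs1 is quiesced
templates = {"10.0.0.5": "rs1", "10.0.0.6": "rs2"}  # client IP -> RS

def pick_live_rs():
    # Simplistic choice: first RS (by name) with a positive weight.
    return next(rs for rs, w in sorted(weights.items()) if w > 0)

def route(client_ip):
    rs = templates.get(client_ip)
    if rs is None or weights[rs] == 0:   # soft persistency kicks in
        rs = pick_live_rs()
        templates[client_ip] = rs        # rewrite the template
    return rs

print(route("10.0.0.5"))   # reassigned away from quiesced rs1
print(route("10.0.0.6"))   # unchanged, still sticks to rs2
```

Client 10.0.0.5 is moved off rs1 exactly once and then sticks to its new realserver, so the quiesced server drains without waiting for the persistence timeout.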

Yes. I found it out the hard way. The earlier-mentioned motherboard problem took down a RS and the customer was assigned to exactly that RS (Murphy's Law I guess ;-). Anyway, the phone rang pretty quickly and it took me a while to

Urgh. I'm sorry, but I have a hard time believing that ldirectord does this by default, since I've seen some people actually using it. But I was already surprised by another tool last week in the "... buffer space available" thread.

figure out that the ldirectord upgrade caused this. I did notice the new option before upgrading, but I figured it would actually be useful and my testing was apparently flaky, so I ended up with a horribly broken quiescence option in a running live config...

Maybe you should contact the author of ldirectord and check back with him.

Note to our customers: All those 10 points are not true, it's just a
fairy tale. It would never work that way. [/me runs again like hell]
You might be surprised how far you could get in trying this, but only for testing. For a real-life situation I'd rather stay away from these practices ;-)

And honestly, that's what I once did. I still fear they will find out one day (a simple look at the graphics adapter will reveal the thing). I also wouldn't know about the legal situation, but since we're not living in the States this is not such a problem :).

On a more serious note, the use of Win2k as realserver is not too bad, but requiring a reboot for most service packs and hotfixes makes it dreadfully annoying at times.

I've replied to someone else about this but you're welcome to convince me to change my mind:

http://marc.theaimsgroup.com/?l=linux-virtual-server&m=103252009807274&w=2

The pilot is not 100% identical and I haven't really thought about this yet, but it indeed makes sense to equalize them. I have plenty of other tasks besides the LVS cluster, though, so I'm afraid this has to wait.

Have your boss put it on top of your TODO list.

Yeah, Linux boxes tend to administer themselves once you really know how they work and have had the time to set them up decently. Our mail and DNS servers hardly require maintenance anymore. It's the LVS business that's new and isn't as automated as it should be yet.

Ok.

See, my work experience with Russians tells me that there are (besides
thousands of other nice things) 3 things they produce for sure:

o vodka in all flavours and colours
o excellent mathematicians (Hello NSA, do you copy?)
o fully fledged (Oracle) DB admins with in-depth Delphi knowledge

I haven't found the conjunction of those three items yet.
* rotfl *

But you have to agree that to a certain extent this is true, isn't it?

Thanks for a very interesting email exchange. I always enjoy it when people with fresh new ideas show up on this mailing list.

Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc


