Re: invoke scheduler for every received packet

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>, Horms <horms@xxxxxxxxxxxx>
Subject: Re: invoke scheduler for every received packet
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Thu, 13 Jul 2006 09:44:24 +0200
Hello Viktor,

This is the second time I'm addressing the list with this question. I have
been using LVS with the OPS patch, which enables invoking the scheduler
module for every received packet (no connection caching).
I have just briefly skimmed over this patch and am a bit unsure how efficient it is, but it seems not to populate the template cache. I fail to see why invoking the scheduler module for every packet is special; non-persistent scheduling does the same. But I've only looked at the patch for 2 minutes.
Thanks for the fast reply. Actually, I'm not sure what non-persistent scheduling does; maybe it is the solution to my problem. I'll check it out.

You don't need to, since it's clear to me now what OPS does. For your case, a UDP-based application, OPS makes sense. We're discussing the patch now, as you can see.

What I need, and what OPS does, is that every packet (e.g. UDP) received for a virtual service is passed to the desired scheduler.

But only to the initially chosen one. If you chose the SED scheduler for your service, it will remain SED for the time being.

Which is not, I believe, LVS's default behavior, because of connection hashing.

Correct.

So with OPS the scheduler is invoked on a per-packet basis and the decision about the destination server is made every time. This is important for protocols that multiplex connections over the same IP and port pair, such as SIP communication between two proxy servers.
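The difference can be sketched with a toy simulation (this is illustrative Python, not LVS kernel code; server names and the round-robin policy are made-up examples): with the normal connection hash, every packet from one SIP proxy's (address, port) pair is pinned to the first-chosen real server, while per-packet (OPS-style) scheduling spreads them out.

```python
# Toy illustration: connection-cached vs per-packet (OPS-style) scheduling.
from itertools import cycle

REAL_SERVERS = ["rs1", "rs2", "rs3"]

def schedule_cached(packets):
    """Round-robin, but cache the decision per (caddr, cport) key,
    roughly as the LVS connection hash does."""
    rr = cycle(REAL_SERVERS)
    cache = {}
    out = []
    for key in packets:
        if key not in cache:          # scheduler runs only on a cache miss
            cache[key] = next(rr)
        out.append(cache[key])
    return out

def schedule_ops(packets):
    """Round-robin invoked for every single packet, no caching."""
    rr = cycle(REAL_SERVERS)
    return [next(rr) for _ in packets]

# Two SIP proxies multiplex many calls over one (address, 5060) pair.
packets = [("proxy-a", 5060)] * 4 + [("proxy-b", 5060)] * 2

print(schedule_cached(packets))  # all proxy-a packets hit the same server
print(schedule_ops(packets))     # packets are spread across all servers
```

Running it shows the pinning: the cached variant sends every proxy-a packet to one server, while the OPS-style variant rotates through all three.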

Absolutely.

http://archive.linuxvirtualserver.org/html/lvs-users/2005-09/msg00214.html
Yes, I'm aware of this patch.
It still applies to the latest 2.6.x kernel, with some fuzz of course.
I meant: what is the status of the patch for kernel 2.6.x and above? I've
been using it on 2.4.13.

The only problem I see hindering it from immediate inclusion is the potential user space breakage. Given a little time, I think it can be solved by adding another long option like "--flags" to ipvsadm, which queries the kernel for the flags being set. This would need another kernel function in ip_vs_ctl.c, since IIRC it's not there yet. I'd do it, however I have absolutely no time until Sunday. And if Sunday is again as sunny as it has been the past 2 months, I'll be outside doing sports and sun-bathing.

We have actually never reviewed this patch, so I wonder if it would be time to review it and submit it for inclusion.
I'm aware that the patch could easily be adapted for newer kernels because
of its simplicity. But I'm not sure there is a need for it if another way
exists to achieve the same thing with something already included in LVS
(maybe something like the non-persistent scheduling you have noted).

No, it's the right thing to do, especially because Julian wrote it :).

Could you give me an exact explanation what it does and how it's used?
Well, the patch is used in a very simple fashion. You just add the switch -o or
--ops when creating a virtual service. As Julian wrote, this results in
connections being used for one-packet scheduling only. I suppose they are
expired as soon as they are created. This enables routing packets to
different destinations even if they come from the same client port.
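For reference, usage with the patched ipvsadm would look roughly like this (a sketch; the addresses and the round-robin choice are made-up examples, and `-o`/`--ops` is the switch the patch adds):

```shell
# Create a UDP virtual service for SIP with one-packet scheduling (-o),
# so the scheduler runs for every datagram instead of once per "connection".
ipvsadm -A -u 192.0.2.1:5060 -s rr -o

# Add two real servers behind it (masquerading/NAT forwarding here).
ipvsadm -a -u 192.0.2.1:5060 -r 10.0.0.1:5060 -m
ipvsadm -a -u 192.0.2.1:5060 -r 10.0.0.2:5060 -m
```

These commands need root and a kernel with IPVS and the OPS patch applied, so they are shown as a configuration sketch rather than something to run as-is.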

Either DH or SH could give you similar results; however, it's not the same.
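A minimal sketch of why SH (source hashing) is similar but not the same (illustrative Python with a simplified hash, not the kernel's ip_vs_sh code; server names are made-up): SH avoids per-connection scheduling state by hashing the client address onto the server table, but the mapping is fixed, so it can never spread one SIP proxy's packets across several real servers the way OPS can.

```python
# Toy source-hash scheduler: each client address maps deterministically
# to one real server, regardless of load or packet count.
import zlib

REAL_SERVERS = ["rs1", "rs2", "rs3"]

def schedule_sh(caddr):
    # Stable hash of the client address onto the server table
    # (zlib.crc32 is used here just to get a deterministic toy hash).
    return REAL_SERVERS[zlib.crc32(caddr.encode()) % len(REAL_SERVERS)]

print(schedule_sh("proxy-a"))  # every packet from proxy-a: same server
```

So SH gives stateless, repeatable placement per source, while OPS gives a fresh scheduling decision per packet.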

I hope I've been clear enough.

Yep, I have actually woken up :). I'm an "old fart" and my brain cells aren't that sharp anymore.

Cheers,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc
