Re: Release new code: Scheduler for distributed caching

To: Thomas Proell <Thomas.Proell@xxxxxxxxxx>
Subject: Re: Release new code: Scheduler for distributed caching
Cc: Joe Cooper <joe@xxxxxxxxxxxxx>, lvs-users@xxxxxxxxxxxxxxxxxxxxxx
From: Julian Anastasov <ja@xxxxxx>
Date: Thu, 26 Oct 2000 03:04:18 +0000 (GMT)

        Hello,

On Wed, 25 Oct 2000, Thomas Proell wrote:

> Hi!
>
> > > # 1. Schedule based on Class C destinations: select squid server
> > > ipchains -A input -d 0.0.0.0/0.0.3.0 80 -p TCP -m 1
> > > ipchains -A input -d 0.0.1.0/0.0.3.0 80 -p TCP -m 2
> > > ipchains -A input -d 0.0.2.0/0.0.3.0 80 -p TCP -m 3
> > > ipchains -A input -d 0.0.3.0/0.0.3.0 80 -p TCP -m 4
>
> Interesting. What will happen if one cache is down, or comes up again,
> or if another is added for performance reasons? You'll have to use
> the netmask 0.0.4.0 if one is added or 0.0.2.0 if one is removed.
> How many IP addresses will be re-assigned to a new cache with this
> change? AFAIK nearly all of them, since every cache will no longer
> receive every third address, but every fourth.

        I don't claim that this setup is perfect, but there are
always solutions. This setup follows the rule about static
mapping: get one machine from Joe, it can stay up for six months, you
don't need to remove it from the cluster, and the mapping will
always work :)

> So, if changes are made, you'll experience a very bad cache hit rate.

        Of course, but you can always play games with these rules.
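
        For example (an untested sketch of mine, using the same rule
format as above): if the cache behind mark 3 goes down, its Class C
slice can be handed to the cache behind mark 1 until it returns, so
only that one slice loses its cached objects:

# move the slice of the dead cache (mark 3) to the cache behind mark 1
ipchains -D input -d 0.0.2.0/0.0.3.0 80 -p TCP -m 3
ipchains -A input -d 0.0.2.0/0.0.3.0 80 -p TCP -m 1
# reverse the two commands when the cache comes back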

> This is not the case with consistent hashing. Only the theoretical
> minimum number of addresses is reassigned.

        Handling this case is a MUST for this kind of scheduling.

> Next: we need tunneling. The squid is configured as a transparent
> proxy and will answer directly to the client, which improves the
> speed dramatically.
> Can you route through a tunnel with your solution? I don't think so.
> If this is possible - tell me. It would be a nice chapter in my thesis.

        Just replace "dev eth0" with "dev tunlX" and add the rules
to create the tunnels. I don't know of many ways to set up tunnels.
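
        An untested sketch of the tunnel part with iproute2, assuming
a kernel with fwmark-based routing, a cache behind mark 1 reachable at
192.168.1.1 and a director address of 192.168.1.254 (the addresses and
the table number are only examples):

# create one IPIP tunnel per cache
modprobe ipip
ip tunnel add tunl1 mode ipip remote 192.168.1.1 local 192.168.1.254
ip link set tunl1 up
# route the packets marked with 1 through the tunnel instead of eth0
ip rule add fwmark 1 table 101
ip route add default dev tunl1 table 101

The cache side still needs its own tunnel device and the usual
transparent proxy redirection to squid.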

> > >         Yes, it is simple even without using LVS :) You can even
> > > create a netfilter module for 2.4 to mark the packets in different
> > > ways (hashes, etc). Then it is plain routing.
>
> Please don't tell me that the work was for nothing :-)

        Why for nothing? This is the only way to implement something
new and useful. But can it be done without testing and comparing the
different variants?

        My thoughts are about the load split. I don't believe in
the main rule on which this scheduling is based, i.e. the assumption
that the director is the authoritative master for the load information.
Even the WLC scheduler does only simple load balancing. IMO, blindly
selecting the real servers in a cluster environment can lead to more
problems, because once swapping has started it is very difficult
to escape from that situation. I have a setup where the real servers'
load is monitored and such problems are isolated in seconds. Without
such monitoring the servers are simply killed by the requests. But only
test results with the new scheduler can give more information about
the load balancing, and that depends on the load.

> Can we implement tunneling with a netfilter module?

        I don't understand. All features from 2.2 are present
in the 2.4 port.
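
        For example, the marking part of the 2.2 setup above should
look roughly like this in 2.4 (untested, and assuming iptables accepts
the same non-contiguous netmask notation as ipchains):

# mark by Class C destination in the mangle table, one rule per cache
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
        -d 0.0.0.0/0.0.3.0 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
        -d 0.0.1.0/0.0.3.0 -j MARK --set-mark 2

The fwmark-based routing or the tunnels stay the same.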


Regards

--
Julian Anastasov <ja@xxxxxx>


