Re: [lvs-users] SSL persistence/offloading and IPVS-TUN

To: "LinuxVirtualServer.org users mailing list." <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [lvs-users] SSL persistence/offloading and IPVS-TUN
From: Graeme Fowler <graeme@xxxxxxxxxxx>
Date: Wed, 06 Feb 2008 09:59:18 +0000

Hi

I'd have commented on this yesterday but didn't have time - pancakes
took precedence :)

On Tue, 2008-02-05 at 18:07 -0500, David Black wrote:
> Sure, will do.  Thus far I see Apache can be made to do it - at least
> 2.2, if not 2.0.
> But that would be moving the load balancing to Apache userland and such
> is not my first choice.

I've done something similar to this in the past - I think I explained it
in the archives, but I'll go over it briefly:

LVS-NAT

Director has an "external" DIP and an "internal" DIP - DIPe and DIPi
respectively.

Each SSL site has its own VIP on the same interface as DIPe - VIP1,
VIP2, VIPn.

Traffic to port 80 on those IP addresses is LVS'd through to the
realservers - RIP1, RIP2, RIPn.

Traffic to port 443 on those IP addresses is LVS'd through to a pair of
different realservers as proxies running Squid (you could just as easily
use Apache for this) in accelerator mode with an SSL virtual host
listening to different *ports* for each instance of a VIP, such that
VIP1:443 -> Squid1 or Squid2 on port 1443, VIP2:443 -> Squids:1444 and
so on.
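The forwarding rules above could be expressed with ipvsadm roughly as
follows. This is a sketch only - every address here is a made-up
placeholder, and the round-robin scheduler is just an example, not
something specified in the setup above:

```
# On the director (LVS-NAT). Placeholder addresses:
#   VIP1 = 192.0.2.10 (on the DIPe interface)
#   web realserver RIP1 = 10.0.0.11
#   Squid proxies = 10.0.0.21 and 10.0.0.22
ipvsadm -A -t 192.0.2.10:80 -s rr
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.11:80 -m     # plain HTTP straight to the realserver
ipvsadm -A -t 192.0.2.10:443 -s rr
ipvsadm -a -t 192.0.2.10:443 -r 10.0.0.21:1443 -m  # VIP1:443 -> Squid1:1443
ipvsadm -a -t 192.0.2.10:443 -r 10.0.0.22:1443 -m  # VIP1:443 -> Squid2:1443
```

VIP2 would get the same pair of port-443 rules pointing at 1444, and so
on; -m selects masquerading (NAT) forwarding.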

The proxies forward traffic to the realservers.

The realservers respond to the proxies.

The proxies respond to the clients via the director.
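On the proxy side, each VIP's SSL listener in Squid (2.6-style syntax)
would look roughly like this - again a sketch only, with made-up
certificate paths, hostname and origin address:

```
# One https_port per VIP, each on its own port (1443 for VIP1, 1444 for VIP2, ...)
https_port 1443 cert=/etc/squid/vip1.pem key=/etc/squid/vip1.key defaultsite=www.vip1.example

# Forward the decrypted traffic to the origin realserver over plain HTTP
cache_peer 10.0.0.11 parent 80 0 no-query originserver name=vip1origin
acl vip1site dstdomain www.vip1.example
cache_peer_access vip1origin allow vip1site
http_access allow vip1site
```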

It works in theory (and practice!) but has some pros and cons:

1. You don't end up with IP exhaustion in your platform. Imagine a basic
setup where VIP1:443 -> RIP(1-n):443 - that means VIP2:443 needs a
separate RIP on each realserver. Having 10 vhosts, all with SSL, in a
platform with 10 realservers, needs 100 RIPs (10 on each realserver).
Having 100 vhosts needs 1000 RIPs. Do the calculations - very quickly
this becomes almost completely unmanageable.

2. The environment your applications run in *is not* what you might
expect. With this setup, someone running a CGI which dumps %ENV to the
browser would see surprising output, such as the server address and port
being a RIP and a high port rather than their VIP on port 443. This can,
believe it or not, cause some banks to refuse to provide payment
services because it is "not secure". Go figure.

3. If your proxies go down, all your SSL sites die a horrid death.

4. Probably some other stuff which I forgot.

However, you offload the SSL overhead to other systems and keep your
realservers just serving basic data without encryption.

It's an idea, but I never got it into production and left the company
before they started offering SSL in that platform...

Graeme


