To: Lars Marowsky-Bree <lmb@xxxxxxxxx>
Subject: Documentation
Cc: Joseph Mack <mack@xxxxxxxxxxx>, linux-virtualserver@xxxxxxxxxxxx
From: Flavio Pescuma <edtfopa@xxxxxxxxxxxxxxxxxx>
Date: Wed, 23 Jun 1999 17:38:42 +0200
Hi,
As I said, I've made some modifications to the documentation; the HTML files are attached.
I'm using the 2.2.9 kernel, and in my modifications I assume that ippfvs has changed to ipvsadm on kernel 2.0.x as well. If this is not true, then some of the modifications are wrong, so please let me know and I'll correct them ASAP.

Thanks,

Flavio
-- 
WIRE - Web & Internet Resources at Ericsson
Flavio Pescuma           phone: +46 (0)8 7263359
L M Ericsson Data AB     fax:   +46 (0)8 7217207
125 82 Stockholm Sweden  http://eriweb.ericsson.se

Creating Linux Virtual Servers

Wensong Zhang, Shiyao Jin, Quanyuan Wu
National Laboratory for Parallel & Distributed Processing
Changsha, Hunan 410073, China
Email: wensong@xxxxxxxxxxxx

Joseph Mack
Lockheed Martin
National Environmental Supercomputer Center
Raleigh, NC, USA
Email: mack.joseph@xxxxxxx

http://proxy.iinchina.net/~wensong/ippfvs

0. Introduction

Linux Virtual Server project.

The virtual server = director + real_servers

multiple servers appear as one single fast server

 Client/Server relationship preserved

  • IPs of servers mapped to one IP.
  • client sees only one IP address
  • servers at different IP addresses believe they are contacted directly by the clients.

Installation/Control

Patch to kernel 2.0.36
Patch to kernel 2.2.9

 ipvsadm (like ipfwadm) adds/removes servers/services from the virtual server. Used in

  • command line
  • bootup scripts
  • failover scripts (a bootup sketch follows this list)
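A minimal bootup-script sketch (the VIP, real servers, scheduler and flags are taken from the VS-TUN example later in this document):

#!/bin/sh
# declare the virtual service and its scheduler, then register each real server
ipvsadm -A -t 192.168.1.110:80 -s ip_vs_rr.o
ipvsadm -a -t 192.168.1.110:80 -R 192.168.1.2 -i
ipvsadm -a -t 192.168.1.110:80 -R 192.168.1.3 -i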
ippfvs = "IP Port Forwarding & Virtual Server" (from the name of Steven Clarke's port forwarding code)

code based on

  • Linux kernel 2.0 IP Masquerading
  • 2.0.36 kernel (2.2.x under way)
  • port forwarding - Steven Clarke
single port services (eg in /etc/services, inetd.conf)

tested

  • httpd
  • ftp (not passive)
  • DNS
  • smtp
  • telnet
  • netstat
  • finger
  • Proxy
  • nntp (added on 26 May 1999, it works)
protocols
  • TCP
  • UDP
If the service listens on a single port, then LVS is ready for it.

additional code required for services that send IP:port as data, use two connections, or use callbacks.

ftp requires 2 ports (20,21) - code already in the LVS.

Load Balancing

The load on the servers is balanced by the director using one of the following (a toy sketch appears below)
  • Round Robin (unweighted, weighted)
  • Least Connection (unweighted, weighted)
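A toy round-robin illustration in shell, purely to show the idea - the real scheduling happens inside the kernel, and the server list here is made up:

#!/bin/sh
# cycle through the real servers in order; a weighted variant would simply
# list a server in the cycle once per unit of weight
SERVERS="172.16.0.2 172.16.0.3"
N=$(set -- $SERVERS; echo $#)
i=0
for request in 1 2 3 4 5; do
    pick=$(( i % N + 1 ))
    server=$(set -- $SERVERS; eval echo \$$pick)
    echo "request $request -> $server"
    i=$(( i + 1 ))
done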
GPL, released May 98, http://proxy.iinchina.net/~wensong/ippfvs

1. Credits

LVS

  • loadable load-balancing module - Matthew Kellett matthewk@xxxxxxxxx
  • kernel 2.2 port, prototype for LocalNode feature - Peter Kese peter.kese@xxxxxx
  • "Greased Turkey" document - Rob Thomas rob@xxxxxxxxxx
  • Virtual Server Logo - "Spike" spike@xxxxxxxxxxx
  • chief author and developer - Wensong Zhang wensong@xxxxxxxxxxxx

High Availability

  • mon - server failures
  • heartbeat/fake - director failures
  • mon - Jim Trocki, http://www.kernel.org/software/mon
  • Fake (gratuitous ARP) http://linux.zipworld.com.au/fake/
  • heartbeat - Alan Robertson, http://www.henge.com/~alanr/ha
  • Coda - Peter Braam, http://www.coda.cs.cmu.edu/

2. LVS Server Farm

Figure 1: Architecture of a generic virtual server

The director inspects each incoming packet -
  • new request: picks the next server, creates an entry in a table pairing the client and server
  • established connection: passes the packet to the appropriate server
  • terminated/timed-out connection: removes the entry from the table
The default table size is 2^12 = 4096 connections (can be increased).

Gotchas (for setting up, testing)

  • Minimum of 3 machines: client, director, server(s)
  • you cannot access the virtual services from the director itself - you need a client (a smoke test is sketched below)
  • access from a server to the service is direct
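A minimal smoke test, run from the client (a sketch; the VIP here is the VS-NAT example address used below):

# on the client machine, never on the director
telnet 202.103.106.5 80
# then type:  GET /index.html   and press return - the page should come
# back from one of the real servers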

3. Related Works

Existing request dispatching techniques can be classified into the following categories:
  • Berkeley's Magic Router, Cisco's LocalDirector (TCP/IP NAT, cf. VS-NAT)
  • IBM's TCP router
  • ONE-IP (all servers have same IP, no rewriting of reply packets, kernel mods to handle IP collisions)
  • Parallel SP-2 (servers put router IP as source address of packets, no rewriting of reply packets, server kernel mods)
  • IBM's NetDispatcher (for VS-Direct Routing)
  • EDDIE, pWEB, Reverse-proxy, SWEB (requires two TCP/IP connections)

4. Director Code

kernel compile options: Director communicates with real servers by one of
  • VS-NAT (Network Address Translation) based on ip-masquerade code
  • VS-TUN - via IP Tunnelling
  • VS-DR - via Direct Routing

Figure 1: Architecture of a generic virtual server

4.1. VS-NAT - Virtual Server via NAT

A popular technique for allowing many machines to access another network using only one IP address on that network, e.g. multiple computers at home linked to the internet by a single ppp connection, or a whole company on private IPs linked through a single connection to the internet.

VS-NAT: Diagnostic Features

  • servers can have any OS
  • director and real servers are on the same private net
  • default route of the real servers is the director (172.16.0.1) - replies return by the route they came
  • packets in both directions are rewritten by the director

VS-NAT Example

ipvsadm setup

Add virtual service and link a scheduler to it
ipvsadm -A -t 202.103.106.5:80 -s ip_vs_wlc.o    (weighted least-connection scheduling module example)
ipvsadm -A -t 202.103.106.5:21 -s ip_vs_wrr.o    (weighted round-robin scheduling module example)

Add real servers and select forwarding method
ipvsadm -a -t 202.103.106.5:80 -R 172.16.0.2:80 -m
ipvsadm -a -t 202.103.106.5:80 -R 172.16.0.3:8000 -m -w 2
ipvsadm -a -t 202.103.106.5:21 -R 172.16.0.3:21 -m
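On a 2.0.36 director, the masquerading rule from the VS-NAT page later in this document is also needed so that packets from the private network are accepted and rewritten:

ipfwadm -F -a m -S 172.16.0.0/24 -D 0.0.0.0/0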

Rules written by ipvsadm

Protocol  Virtual IP Address  Port  Real IP Address  Port  Weight  Forward Method
TCP       202.103.106.5       80    172.16.0.2       80    1       masquerade
                                    172.16.0.3       8000  2       masquerade
TCP       202.103.106.5       21    172.16.0.3       21    1       masquerade

Example: request to 202.103.106.5:80

The request is made to the IP:port on the outside of the director.

The load balancer chooses a real server (here 172.16.0.3:8000), updates the VS-NAT table, then:

packet              source                         dest
incoming            202.100.1.2:3456 (client)      202.103.106.5:80 (director)
inbound rewriting   202.100.1.2:3456 (client)      172.16.0.3:8000 (server)
reply to director   172.16.0.3:8000 (server)       202.100.1.2:3456 (client)
outbound rewriting  202.103.106.5:80 (director)    202.100.1.2:3456 (client)

VS-NAT Advantages

  • Servers - any OS
  • No mods for servers
  • servers on private network (no extra IPs needed as you add servers)

VS-NAT Disadvantages

  • NAT rewriting takes about 60usec per packet (pentium), which rate-limits the director:

     throughput = tcp packet size (536 bytes) / rewriting time (60usec) ≈ 9 Mbytes/sec ≈ 72 Mbps, i.e. about one 100BaseT

     25 servers will then average roughly 360 KBytes/sec each

4.2. VS-TUN - Virtual Server via IP Tunneling

Normal IP tunneling (IP encapsulation)

  • IP datagrams encapsulated within IP datagrams
  • IP packets carried as data through an unrelated network
  • At end of tunnel, IP packet is recovered and forwarded
  • connection is symmetrical - return packet traverses original route in reverse direction.

Tunnelling used

  • through firewalls
  • IPv6 packets pass through IPv4
  • laptop access home network from foreign network

VS-Tunnelling

  • director encapsulates and forwards packets to real servers.
  • servers process request
  • servers reply directly to the requester by regular IP. The return packet does not go back through the director.
Good for ftp and http; scalable because
  • the director is fast (no rewriting)
  • requests are small (eg http - "GET /bigfile.mpeg", ftp - "get bigfile.tar.gz")
  • replies are larger than the requests, and often very large
  • 100Mbps director can feed requests to servers each on their own 100Mbps network (100's of servers)

VS-TUN Diagnostic features

  • all nodes (director, servers) have an extra IP, the VIP
  • the request goes to the VIP (VIP:port), not to an IP on the outside of the director
  • on the director, the VIP is on an eth0 alias device
  • on the servers, the VIP is on the tunl device
  • the tunl device must not reply to arps (Linux 2.0.x OK, 2.2.x not OK)
  • the client connects to the virtual server IP (tunl doesn't reply to arp, so the eth0 device on the director takes the connection)
  • replies from the eth0 device on a real server go to the IP of the client, ie use normal routing
  • servers must tunnel
  • servers can be geographically remote, different networks
Figure 4: Architecture of a virtual server via IP tunneling

Routing Table

Director

 link to tunnel

 /sbin/ifconfig eth0:0 192.168.1.110 netmask 255.255.255.255 broadcast 192.168.1.255 up
route add -host 192.168.1.110 dev eth0:0

ipvsadm setup (one line for each server:service)

ipvsadm -A -t 192.168.1.110:80 -s ip_vs_rr.o    (round-robin scheduling module example)
ipvsadm -a -t 192.168.1.110:80 -R 192.168.1.2 -i
ipvsadm -a -t 192.168.1.110:80 -R 192.168.1.3 -i
ipvsadm -a -t 192.168.1.110:80 -R 192.168.1.4 -i

Server(s)

 ifconfig tunl0 192.168.1.110 netmask 255.255.255.255 broadcast 192.168.1.255
route add -host 192.168.1.110 dev tunl0
packet flow (client 192.168.1.5, director 192.168.1.1 holding VIP 192.168.1.110, server 192.168.1.2 with the VIP on tunl0):

packet                    source                      dest                       data
request from client       192.168.1.5:3456 (client)   192.168.1.110:80 (VIP)     GET /index.html
encapsulated by director  192.168.1.1 (director)      192.168.1.2 (server)       the original packet, unchanged
    (the ipvsadm table maps the VIP to 192.168.1.2; the director looks up its routing table and uses its own address 192.168.1.1 as the outer source)
decapsulated by server    192.168.1.5:3456 (client)   192.168.1.110:80 (VIP)     GET /index.html
    (the IPIP packet arrives at 192.168.1.2, which unwraps it and delivers it to 192.168.1.110, its tunl0 address)
reply from 192.168.1.110  192.168.1.110:80 (VIP)      192.168.1.5:3456 (client)  ...
    (routed out via 192.168.1.2's normal route)

VS-TUN Advantages

  • servers can be geographically remote or on another network
  • higher throughput than NAT (no rewriting of packets, each server has own route to client)
  • director only schedules. For http, requests are small (GET /index.html), can direct 100's of servers.
  • total server throughput of Gbps, director only 100Mbps

VS-TUN Disadvantages

  • Server must tunnel, and not arp

4.3. VS-DR - Virtual Server via Direct Routing

Based on IBM's NetDispatcher

 Setup uses the same IPs as the VS-TUN example on a local network, with the lo:0 device replacing the tunl device.

 lo:0 doesn't reply to arp (except on Linux-2.2.x).

 The director has eth0:x 192.168.1.110, the servers lo:0 192.168.1.110.

 When sending packets to a server, the director just changes the packet's MAC address (a concrete sketch follows the list of differences).

 Differences:

  • director, servers must be in same network
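Spelled out with the VS-TUN example addresses (a sketch; compared with VS-TUN, only the forwarding flag on the director and the device on the servers change - -g selects direct routing, as in the examples later in this document):

Director:

ipvsadm -A -t 192.168.1.110:80 -s ip_vs_rr.o
ipvsadm -a -t 192.168.1.110:80 -R 192.168.1.2 -g

Server(s):

ifconfig lo:0 192.168.1.110 netmask 255.255.255.255 broadcast 192.168.1.110 up
route add -host 192.168.1.110 dev lo:0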

VS-DR Advantages over VS-TUN

  • don't need tunneling servers

VS-DR Disadvantages

  • lo device must not reply to arp
  • director,servers same net

5. Comparison: VS-NAT, VS-TUN, VS-DR

property          VS-NAT                   VS-TUN                        VS-DR
server OS         any                      must tunnel (Linux)           any
server mods       none                     tunl no arp (2.2.x not OK)    lo no arp (2.2.x not OK)
server network    private (remote/local)   internet (remote or local)    local
return packet
rate/scalability  low (10)                 high (100's?)                 high (100's?)

6. Local Node

The director can serve requests too. Useful when you have only a small number of servers.

 On the director, set up httpd to listen on 192.168.1.110 (as on the servers):

 ipvsadm -A -t 192.168.1.110:80 -s ip_vs_rr.o
 ipvsadm -a -t 192.168.1.110:80 -R 127.0.0.1

7. High Availability

What if a server fails?

Server failure is protected against by mon; mon scripts for server failure are on the LVS website. A hand-rolled sketch of the idea follows.
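Purely as a sketch (mon does this properly): probe each real server's service port and drop a dead server from the table. The addresses are the VS-TUN example's; it assumes a port-probe tool such as nc is installed and that this version of ipvsadm supports -d to delete a real server:

#!/bin/sh
VIP=192.168.1.110
for RS in 192.168.1.2 192.168.1.3; do
    if ! nc -z -w 5 $RS 80; then
        # server stopped answering - take it out of the virtual service
        ipvsadm -d -t $VIP:80 -R $RS
    fi
done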

8. To Do

  • load-informed scheduling
  • geographic-based scheduling for VS-TUN
  • "heartbeat"
  • CODA
  • cluster manager (admin)
  • transaction and logging for restarting failed transfers.
  • IPv6.

9. Conclusion

  • virtual serving by NAT, Tunnelling or Direct Routing
  • tunneling and Direct Routing are scalable
  • fault tolerant
  • round robin or least connection scheduling
  • single connection tcp,udp services

Virtual Server via Direct Routing

This page contains information about how to use the Direct Routing request dispatching technique to construct a Linux virtual server.

Direct Routing request dispatching technique

This request dispatching approach was first implemented in IBM's NetDispatcher. The virtual IP address is shared by the real servers and the load balancer. All real servers have their loopback alias interface configured with the virtual IP address, and the load balancer also has an interface configured with the virtual IP address, which is used to accept request packets. The load balancer and the real servers must have one of their interfaces physically linked by a hub or switch. The architecture of virtual server via direct routing is illustrated as follows:

When a user accesses the service provided by the server cluster, the packet destined for the virtual IP address (the IP address for the virtual server) arrives. The load balancer (LinuxDirector) examines the packet's destination address and port. If they match a virtual server service, a real server is chosen from the cluster by a scheduling algorithm, and the connection is added into the hash table which records established connections. Then the load balancer forwards the packet directly to the chosen server. When an incoming packet belongs to this connection and the chosen server can be found in the hash table, the packet is again directly routed to the server. When the server receives the forwarded packet, it finds that the packet is for the address on its loopback alias interface, processes the request, and finally returns the result directly to the user. After the connection terminates or times out, the connection record is removed from the hash table.

How to build the kernel

First, get a fresh copy of the Linux kernel source of the right version. Second, apply the virtual server patch (version 0.9 or later) to the kernel. Third, make sure at least the following kernel compile options are selected.

Kernel Compile Options:

Code maturity level options --->

[*] Prompt for development and/or incomplete code/drivers
Networking options --->
[*] Network firewalls
....
[*] IP: forwarding/gatewaying
....
[*] IP: firewalling
....
[*] IP: masquerading
....
[*] IP: ippfvs(LinuxDirector) masquerading (EXPERIMENTAL)
Virtual server request dispatching technique---
( ) VS-NAT
( ) VS-Tunneling
(X) VS-DRouting
And, you have to choose one scheduling algorithm.
Virtual server scheduling algorithm
(X) WeightedRoundRobin
( ) LeastConnection
( ) WeightedLeastConnection

[ ] IP: enabling ippfvs with the local node feature

Fourth, rebuild the kernel. Once you have your kernel properly built, update your system kernel and reboot.

How to build the 2.2.9 kernel


Kernel Compile Options:

Code maturity level options --->

     [*] Prompt for development and/or incomplete code/drivers

Networking options --->

     [*] Network firewalls
     ....
     [*] IP: forwarding/gatewaying
     ....
     [*] IP: firewalling
     ....
     [*] IP: masquerading
     ....

     [*] IP: masquerading virtual server support (EXPERIMENTAL)(NEW)
     (12) IP masquerading table size (the Nth power of 2)(NEW)
     <M> IPVS: round-robin scheduling(NEW)
     <M> IPVS: weighted round-robin scheduling(NEW)
     <M> IPVS: weighted least-connection scheduling(NEW)
     <M> IPVS: persistent client connection scheduling(NEW)
 

Finally, rebuild the kernel. Once you have your kernel properly built, update your system kernel and reboot.
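If the schedulers are built as modules (<M> above), they presumably have to be installed and loaded before ipvsadm names them - a sketch, assuming the usual 2.2 module workflow and the standard module path:

make modules && make modules_install
insmod ip_vs_wlc        # or ip_vs_rr, ip_vs_wrr, ... whichever scheduler ipvsadm will reference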
 

My example for testing virtual server via direct routing

Here is my configuration example for testing virtual server via direct routing. The configuration is as follows; I hope it can give you some clues.

The load balancer (LinuxDirector), kernel 2.0.36

ifconfig eth0 172.26.20.111 netmask 255.255.255.0 broadcast 172.26.20.255 up
route add -net 172.26.20.0 netmask 255.255.255.0 dev eth0
ifconfig eth0:0 172.26.20.110 netmask 255.255.255.255 broadcast 172.26.20.110 up
route add -host 172.26.20.110 dev eth0:0
ipvsadm -A -t 172.26.20.110:23 -s ip_vs_wlc.o    (weighted least-connection scheduling module example)
ipvsadm -a -t 172.26.20.110:23 -R 172.26.20.112 -g
The real server 1, kernel 2.0.36
ifconfig eth0 172.26.20.112 netmask 255.255.255.0 broadcast 172.26.20.255 up
route add -net 172.26.20.0 netmask 255.255.255.0 dev eth0
ifconfig lo:0 172.26.20.110 netmask 255.255.255.255 broadcast 172.26.20.110 up
route add -host 172.26.20.110 dev lo:0
When I am on other hosts, 'telnet 172.26.20.110' will actually connect to real server 1.

Another example: servers with different network routes

In virtual server via direct routing, the servers can follow different network routes to the clients. Here is a configuration example.

The load balancer (LinuxDirector), kernel 2.0.36

ifconfig eth0 <an IP address> ...
...
ifconfig eth0:0 <VIP> netmask 255.255.255.255 broadcast <VIP> up
route add -host <VIP> dev eth0:0
ifconfig eth1 192.168.0.1 netmask 255.255.255.0 broadcast 192.168.0.255 up
route add -net 192.168.0.0 netmask 255.255.255.0 dev eth1
ipvsadm -A -t <VIP>:23 -s ip_vs_wlc.o    (weighted least-connection scheduling module example)
ipvsadm -a -t <VIP>:23 -R 192.168.0.2 -g
The real server 1, kernel 2.0.36
ifconfig eth0 <a separate IP address> ...
(replies follow a different network route)
...
ifconfig eth1 192.168.0.2 netmask 255.255.255.0 broadcast 192.168.0.255 up
route add -net 192.168.0.0 netmask 255.255.255.0 dev eth1
ifconfig lo:0 <VIP> netmask 255.255.255.255 broadcast <VIP> up
route add -host <VIP> dev lo:0

Last updated on May 16, 1999

Created on May 1, 1999

Virtual Server via NAT

This page contains information about how to setup a virtual server via NAT.

Network address translation

Due to the shortage of IP addresses in IPv4 and some security reasons, more and more networks use internal IP addresses (such as 10.0.0.0/255.0.0.0, 172.16.0.0/255.240.0.0 and 192.168.0.0/255.255.0.0) which cannot be used on the Internet. The need for network address translation arises when hosts in internal networks want to access, or be accessed from, the Internet.

Network address translation is a feature by which IP addresses are mapped from one group to another. When the address mapping is N-to-N, it is called static network address translation; when the mapping is M-to-N (M>N), it is called dynamic network address translation. Network address port translation is an extension to basic NAT, in that many network addresses and their TCP/UDP ports are translated to a single network address and its TCP/UDP ports. This is an N-to-1 mapping, which is the way Linux IP Masquerading is implemented. More description of network address translation is in rfc1631 and draft-rfced-info-srisuresh-05.txt.

Virtual server via NAT on Linux is done by network address port translation. The code is built on the Linux IP Masquerading code, and some of Steven Clarke's port forwarding code is reused.

How does a virtual server via NAT work?

First consider the following figure,

When a user accesses the service provided by the server cluster, the request packet destined for the virtual IP address (the external IP address of the load balancer) arrives at the load balancer. The load balancer examines the packet's destination address and port number. If they match a virtual server service according to the virtual server rule table, a real server is chosen from the cluster by a scheduling algorithm, and the connection is added into the hash table which records established connections. Then the destination address and port of the packet are rewritten to those of the chosen server, and the packet is forwarded to the server. When an incoming packet belongs to this connection and the chosen server can be found in the hash table, the packet is rewritten and forwarded to the chosen server. When reply packets come back, the load balancer rewrites their source address and port to those of the virtual service. After the connection terminates or times out, the connection record is removed from the hash table.

Confused? Let me give an example to make it clear. In the example, computers are configured as follows:

Note that real servers can run any OS that supports TCP/IP, and the default route of the real servers must be the virtual server (172.16.0.1 in this example). The ipfwadm utility is used to make the virtual server accept packets from the real servers. In the example above, the command is as follows:

ipfwadm -F -a m -S 172.16.0.0/24 -D 0.0.0.0/0
(-F selects the forwarding rules, -a m appends a masquerading rule, and -S/-D give the source/destination ranges.) The following table illustrates the rules specified in the Linux box with virtual server support.
Protocol  Virtual IP Address  Port  Real IP Address  Port  Weight
TCP       202.103.106.5       80    172.16.0.2       80    1
                                    172.16.0.3       8000  2
TCP       202.103.106.5       21    172.16.0.3       21    1
All traffic destined for IP address 202.103.106.5 Port 80 is load-balanced over real IP address 172.16.0.2 Port 80 and 172.16.0.3 Port 8000. Traffic destined for IP address 202.103.106.5 Port 21 is port-forwarded to real IP address 172.16.0.3 Port 21.

Packet rewriting works as follows.

The incoming packet for the web service would have source and destination addresses as:
 
SOURCE 202.100.1.2:3456 DEST 202.103.106.5:80

The load balancer will choose a real server, e.g. 172.16.0.3:8000. The packet would be rewritten and forwarded to the server as:
 
SOURCE 202.100.1.2:3456 DEST 172.16.0.3:8000

Replies get back to the load balancer as:
 
SOURCE 172.16.0.3:8000 DEST 202.100.1.2:3456

The packets would be rewritten back to the virtual server address and returned to the client as:
 
SOURCE 202.103.106.5:80 DEST 202.100.1.2:3456
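One way to watch the rewriting on the wire (a sketch; it assumes tcpdump is installed and that eth0/eth1 are the director's external/internal interfaces - your device names may differ):

tcpdump -n -i eth0 host 202.100.1.2     # external side: client <-> 202.103.106.5:80
tcpdump -n -i eth1 host 172.16.0.3      # internal side: same connection, rewritten to 172.16.0.3:8000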

How to build the 2.0.36 kernel

First, get a fresh copy of the Linux kernel source of the right version. Second, apply the virtual server patch to the kernel. Third, make sure at least the following kernel compile options are selected.

Kernel Compile Options:

Code maturity level options --->

[*] Prompt for development and/or incomplete code/drivers
Networking options --->
[*] Network firewalls
....
[*] IP: forwarding/gatewaying
....
[*] IP: firewalling
....
[*] IP: masquerading
....
[*] IP: ipportfw masq & virtual server support
And, you have to choose one scheduling algorithm.
Virtual server scheduling algorithm
(X) WeightedRoundRobin
( ) LeastConnection
( ) WeightedLeastConnection
Finally, rebuild the kernel. Once you have your kernel properly built, update your system kernel and reboot.
 

How to build the 2.2.9 kernel

Kernel Compile Options:

Code maturity level options --->

[*] Prompt for development and/or incomplete code/drivers
Networking options --->
[*] Network firewalls
....
[*] IP: forwarding/gatewaying
....
[*] IP: firewalling
....
[*] IP: masquerading
....

[*] IP: masquerading virtual server support (EXPERIMENTAL)(NEW)
(12) IP masquerading table size (the Nth power of 2)(NEW)
<M> IPVS: round-robin scheduling(NEW)
<M> IPVS: weighted round-robin scheduling(NEW)
<M> IPVS: weighted least-connection scheduling(NEW)
<M> IPVS: persistent client connection scheduling(NEW)


Finally, rebuild the kernel. Once you have your kernel properly built, update your system kernel and reboot.
 

Ipvsadm Setup


At last, build the ipvsadm utility from the ipvsadm.c program. The virtual server rules can then be specified with ipvsadm. For example, for the rules in the table above, we can use the following commands.
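The page doesn't give the exact build commands; a plausible sketch (the IP tunneling page below just says to run "make install" in the ipvsadm source directory):

cd ipvsadm              # wherever the ipvsadm source was unpacked
make                    # compiles ipvsadm.c
make install            # installs the binary into your system directory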

Add virtual service and link a scheduler to it
ipvsadm -A -t 202.103.106.5:80 -s ip_vs_wlc.o    (weighted least-connection scheduling module example)
ipvsadm -A -t 202.103.106.5:21 -s ip_vs_wrr.o    (weighted round-robin scheduling module example)

Add real servers and select forwarding method
ipvsadm -a -t 202.103.106.5:80 -R 172.16.0.2:80 -m
ipvsadm -a -t 202.103.106.5:80 -R 172.16.0.3:8000 -m -w 2
ipvsadm -a -t 202.103.106.5:21 -R 172.16.0.3:21 -m


Last updated: 1998/12/5

Created on: 1998/5/28

Virtual Server via IP Tunneling

This page contains information about how to use IP Tunneling to greatly increase the scalability of a virtual server.

IP tunneling

IP tunneling (IP encapsulation) is a technique to encapsulate IP datagrams within IP datagrams, which allows datagrams destined for one IP address to be wrapped and redirected to another IP address. IP encapsulation is now commonly used in extranets, Mobile-IP, IP-multicast, and tunneled hosts or networks. Please see the NET-3-HOWTO document for details.

How to use IP tunneling on virtual server

First, let's look at the figure of virtual server via IP tunneling. The main difference between virtual server via IP tunneling and virtual server via NAT is that in the former the load balancer sends requests to the real servers through an IP tunnel, while in the latter it sends them via network address translation.

When a user accesses the service provided by the server cluster, the packet destined for the virtual IP address (the IP address for the virtual server) arrives. The load balancer examines the packet's destination address and port. If they match a virtual server service, a real server is chosen from the cluster by a scheduling algorithm, and the connection is added into the hash table which records established connections. Then the load balancer encapsulates the packet within an IP datagram and forwards it to the chosen server. When an incoming packet belongs to this connection and the chosen server can be found in the hash table, the packet is again encapsulated and forwarded to the server. When the server receives the encapsulated packet, it decapsulates the packet, processes the request, and finally returns the result directly to the user. After the connection terminates or times out, the connection record is removed from the hash table.

Note that real servers can have any real IP address in any network; they can be geographically dispersed, but they must support the IP encapsulation protocol, and their tunnel devices must all be configured with <Virtual IP Address>, as in "ifconfig tunl? <Virtual IP Address>" on Linux. When an encapsulated packet arrives, the real server decapsulates it and finds that the packet is destined for <Virtual IP Address>. It says, "Oh, it is for me, so I do it.", processes the request, and returns the result directly to the user in the end.

How to build the 2.0.36 kernel

First, get a fresh copy of the Linux kernel source of the right version. Second, apply the virtual server patch (version 0.9 or later) to the kernel. Third, make sure at least the following kernel compile options are selected.

Kernel Compile Options:

Code maturity level options --->

[*] Prompt for development and/or incomplete code/drivers
Networking options --->
[*] Network firewalls
....
[*] IP: forwarding/gatewaying
....
[*] IP: firewalling
....
[*] IP: masquerading
....
[*] IP: ippfvs(LinuxDirector) masquerading (EXPERIMENTAL)
Virtual server request dispatching technique---
( ) VS-NAT
(X) VS-Tunneling
( ) VS-DRouting
And, you have to choose one scheduling algorithm.
Virtual server scheduling algorithm
(X) WeightedRoundRobin
( ) LeastConnection
( ) WeightedLeastConnection

[ ] IP: enabling ippfvs with the local node feature

Fourth, rebuild the kernel. Once you have your kernel properly built, update your system kernel and reboot. At last, cd to the ipvsadm source and type "make install" to install ipvsadm into your system directory.
 

How to build the 2.2.9 kernel

 
Kernel Compile Options:

Code maturity level options --->

     [*] Prompt for development and/or incomplete code/drivers

Networking options --->

     [*] Network firewalls
     ....
     [*] IP: forwarding/gatewaying
     ....
     [*] IP: firewalling
     ....
     [*] IP: masquerading
     ....

     [*] IP: masquerading virtual server support (EXPERIMENTAL)(NEW)
     (12) IP masquerading table size (the Nth power of 2)(NEW)
     <M> IPVS: round-robin scheduling(NEW)
     <M> IPVS: weighted round-robin scheduling(NEW)
     <M> IPVS: weighted least-connection scheduling(NEW)
     <M> IPVS: persistent client connection scheduling(NEW)

Fourth, rebuild the kernel. Once you have your kernel properly built, update your system kernel and reboot. At last, cd to the ipvsadm source and type "make install" to install ipvsadm into your system directory.
 

How to use it

Let's look at an example to see how it is used. The following table illustrates the rules specified in the Linux box with virtual server via IP tunneling. Note that the services running on the real servers must run on the same port as the virtual service, so it is not necessary to specify the service port on the real servers.
Protocol  Virtual IP Address  Port  Real IP Address  Weight
TCP       202.103.106.5       80    202.103.107.2    1
                                    202.103.106.3    2
All traffic destined for IP address 202.103.106.5 Port 80 is load-balanced over real IP address 202.103.107.2 Port 80 and 202.103.106.3 Port 80.

We can use the following commands to specify the rules in the table above in the system.

ipvsadm -A -t 202.103.106.5:80 -s ip_vs_wlc.o    (weighted least-connection scheduling module example)

ipvsadm -a -t 202.103.106.5:80 -R 202.103.107.2 -i -w 1

ipvsadm -a -t 202.103.106.5:80 -R 202.103.106.3 -i -w 2
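The page only shows the director's side; on each real server the tunnel device would carry the virtual address, following the pattern of the test example below:

ifconfig tunl0 202.103.106.5 netmask 255.255.255.255 broadcast 202.103.106.5 up
route add -host 202.103.106.5 dev tunl0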

My example for testing virtual server via tunneling

Here is my configuration example for testing virtual server via tunneling. The configuration is as follows; I hope it can give you some clues.

The load balancer (LinuxDirector), kernel 2.0.36

ifconfig eth0 172.26.20.111 netmask 255.255.255.0 broadcast 172.26.20.255 up
route add -net 172.26.20.0 netmask 255.255.255.0 dev eth0
ifconfig eth0:0 172.26.20.110 netmask 255.255.255.255 broadcast 172.26.20.110 up
route add -host 172.26.20.110 dev eth0:0
ipvsadm -A -t 172.26.20.110:23 -s ip_vs_wlc.o    (weighted least-connection scheduling module example)
ipvsadm -a -t 172.26.20.110:23 -R 172.26.20.112 -i
The real server 1, kernel 2.0.36
ifconfig eth0 172.26.20.112 netmask 255.255.255.0 broadcast 172.26.20.255 up
route add -net 172.26.20.0 netmask 255.255.255.0 dev eth0
ifconfig tunl0 172.26.20.110 netmask 255.255.255.255 broadcast 172.26.20.110 up
route add -host 172.26.20.110 dev tunl0
When I am on other hosts, 'telnet 172.26.20.110' will actually connect to real server 1.

Last updated on May 16, 1999

Created on November 29, 1998
