Francisco,
Your explanation makes perfect sense, and it is in line with the
published whitepaper, but my issue is with this statement:
"So one of the -->most important thing<-- here, is that no director has
to put the virtual-MAC in the wire, as every director has to receive the
packet."
If every director receives every packet, how can you possibly support
traffic beyond 100% of a single director? Let's say you have 5 directors
with 1 Gbit interfaces each, and you have 1.5 Gbit of traffic coming over
the wire. That 1.5 Gbit of traffic has to be mirrored to each of the 5
directors. Unfortunately, each of the 5 directors only has a 1 Gbit
link, so how can any one director process all 1.5 Gbit of traffic?
Or, can active-active LVS not increase bandwidth?
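To make the arithmetic behind my question explicit, here is a quick sketch using the numbers above (assuming the naive model where the switch floods every packet to every director):

```python
# Back-of-the-envelope numbers from the example above.
TOTAL_TRAFFIC_GBIT = 1.5   # traffic arriving at the VIP
NUM_DIRECTORS = 5
LINK_GBIT = 1.0            # each director's NIC speed

# With L2 flooding, every director's NIC must receive ALL of the traffic:
rx_per_director = TOTAL_TRAFFIC_GBIT              # 1.5 Gbit/s on a 1 Gbit link

# An ideal partitioning scheme would only need each director to handle:
ideal_share = TOTAL_TRAFFIC_GBIT / NUM_DIRECTORS  # 0.3 Gbit/s

# The receive side, not processing power, is the bottleneck:
print(rx_per_director > LINK_GBIT)  # True
```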
---
Michael Spiegle
mike@xxxxxxxxxxxxxxxx
Francisco Gimeno wrote:
Hello
as far as I remember, the thing works something like this:
- All director nodes know about the others
- Each one has an ID (for example, its MAC or its IP)
- Each director node can build a sorted list based on that ID
- Heartbeat runs everywhere, so the list is dynamic
- The "view" should be the same for each node (i.e. all nodes
should have the same list)
- They should share a virtual MAC
Those requirements could be satisfied with a broadcast sync protocol (it could
be similar to WCCP, for example).
For each packet that arrives:
- Compute a HASH over the parameters you want to keep __affinity__ on (like src
IP, dst IP, ports, ...).
- Calculate HASH % number_of_nodes ( % := modulo ).
- If that value matches this node's position in the sorted list,
the packet is accepted; if not, it is discarded.
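This isn't the actual Saru code, just a minimal Python sketch of the per-packet decision above; the node IDs, header fields, and hash choice are all made up for illustration:

```python
# Sketch: each director independently decides whether it "owns" a packet.
import hashlib

def my_position(node_ids, my_id):
    """Every node sorts the same membership list, so every node
    computes the same position for every peer."""
    return sorted(node_ids).index(my_id)

def accept_packet(src_ip, dst_ip, src_port, dst_port, num_nodes, position):
    """Accept the packet only if its affinity hash maps to this node's slot."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return h % num_nodes == position

# Three directors, identified here by (invented) MAC addresses:
nodes = ["02:00:00:00:00:01", "02:00:00:00:00:02", "02:00:00:00:00:03"]

# For any given flow, exactly one director accepts; the rest discard:
decisions = [
    accept_packet("10.0.0.7", "192.168.0.1", 40000, 80,
                  len(nodes), my_position(nodes, n))
    for n in nodes
]
print(decisions.count(True))  # 1
```

Because the hash inputs cover the whole flow tuple, every packet of a connection lands on the same director, which is what keeps the affinity.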
As every packet goes to every director...
So one of the most important things here is that no director should put
the virtual MAC on the wire, as every director has to receive the packet. ARP
responses for the VIP should carry the virtual MAC, but the frame should be
sent from a bogus source MAC. With that, whatever is responsible for routing
packets to the VIP will send them to that virtual MAC. As the (L2) switch
doesn't know which physical port that MAC is on, it floods the frame to all
active ports, which include the directors'. If you use a hub, this kind of
problem doesn't exist at all (but who owns a hub nowadays?).
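A rough Python sketch of that ARP trick, building the frame bytes by hand; the MAC and IP values are invented for illustration, and this is not how Saru itself constructs the reply:

```python
# Sketch: an ARP reply whose Ethernet source MAC deliberately differs
# from the ARP sender MAC, so the switch never learns the virtual MAC.
import struct

VIRTUAL_MAC = bytes.fromhex("020000000001")   # MAC advertised for the VIP
BOGUS_MAC   = bytes.fromhex("02ffffffffff")   # source MAC actually on the wire
VIP         = bytes([192, 168, 0, 1])         # the virtual IP

def build_arp_reply(requester_mac, requester_ip):
    """ARP reply saying 'VIP is-at VIRTUAL_MAC', sent from BOGUS_MAC."""
    # Ethernet header: dst, src, EtherType 0x0806 (ARP)
    eth = requester_mac + BOGUS_MAC + struct.pack("!H", 0x0806)
    # ARP header: hw=Ethernet, proto=IPv4, hlen=6, plen=4, op=2 (reply)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)
    arp += VIRTUAL_MAC + VIP              # sender: the virtual MAC owns the VIP
    arp += requester_mac + requester_ip   # target: whoever asked
    return eth + arp

frame = build_arp_reply(bytes.fromhex("aabbccddeeff"), bytes([192, 168, 0, 254]))

# The MAC the switch learns (Ethernet source) is not the MAC the
# requester will send to (ARP sender), so frames to the VIP get flooded:
print(frame[6:12] != frame[22:28])  # True
```

The design point is exactly what Francisco describes: the switch's forwarding table only ever sees BOGUS_MAC, so unicast traffic to VIRTUAL_MAC is treated as unknown and flooded to all directors.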
I hope this helps you understand how it works...
BR,
Francisco Gimeno
Roberto,
My thoughts exactly! It doesn't seem like it should be possible, but
Horms sure knows his stuff. Maybe he can chime in and elaborate on
those details?
---
Michael Spiegle
mike@xxxxxxxxxxxxxxxx
Roberto Nibali wrote:
I just built 2 fresh Gentoo boxes for testing active-active.
I had
How is active-active possible?
It's Horms' experimental code called Saru. He explained it at OLS one
year when you didn't come.
http://www.ultramonkey.org/papers/active_active/active_active.shtml
Downloaded and printed; I will read it this weekend. Although, if Horms
engineers something, it's most likely flying anyway. So I just have to
understand how he cheated the TCP stack this time :). I see some
netfilter-related stuff in it, and I wonder whether (from what I've seen)
his approach works on 2.6.x kernels with proper TCP state tracking,
TSO and USO? In 2.4.x, where netfilter is mostly broken with regard to
TCP state tracking, such quirks might be possible.
Cheers mate,
Roberto Nibali, ratz
_______________________________________________
LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
or go to http://www.in-addr.de/mailman/listinfo/lvs-users