Hi,
16k unified cache. :-/
Yep, like most of them.
Make sure that your I/O rate is as low as possible or the first thing to
blow is your CF disk. I've worked with hundreds of those little boxes in
all shapes, sizes and configurations. The biggest common-mode failure was
the CF disk, due to temperature problems and I/O pressure (MTTF was 23
days); the only other problems that showed up were really bad NICs locking
up half of the time.
I haven't ever had an actual CF card blow on me.
:) Sorry, "blow" is an exaggeration; I mean they simply fail because the
cells can only take a limited number of writes.
LEAF is made to live on read-only media... so it's not like it will be
written to a lot.
RO doesn't mean that there's no I/O going to your disk, as you correctly
noted. The point is that if you plan on using them 24/7, I suggest you
monitor the block I/O on your RO partitions using the values from
/proc/partitions or the wonderful iostat tool. Then extrapolate from about
4 hours' worth of samples, check your CF vendor's specification for how many
writes the card can endure, and see how long you can expect the thing to run.
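Something along these lines (a quick Python sketch, not a finished tool) is
what I have in mind. The device name, sample window and write budget are
placeholders you have to fill in yourself, and the parsing assumes the
/proc/diskstats layout of 2.6 kernels; on 2.4 the same counters sit at the
end of the /proc/partitions lines, so adjust the field index accordingly:

import time

DEVICE = "hda"               # placeholder: whatever your CF card shows up as
WINDOW = 4 * 3600            # sample window in seconds (~4 hours, as suggested above)
WRITE_BUDGET_BYTES = 1.0e12  # placeholder: take the real endurance figure from your vendor

def sectors_written(dev):
    # /proc/diskstats: major minor name reads ... writes merged sectors-written ...
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[9])   # sectors (512 bytes each) written since boot
    raise ValueError("device not found: %s" % dev)

start = sectors_written(DEVICE)
time.sleep(WINDOW)
written_bytes = (sectors_written(DEVICE) - start) * 512

bytes_per_day = written_bytes * 86400.0 / WINDOW
if bytes_per_day == 0:
    print("no writes observed in the sample window")
else:
    print("bytes written per day: %.0f" % bytes_per_day)
    print("days until the write budget is gone: %.0f" % (WRITE_BUDGET_BYTES / bytes_per_day))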
I have to add that thermal issues were adding to our high failure rates.
We wanted to ship those nifty little boxes to every branch of a big
customer to build a big VPN network. Unfortunately the customer is in the
automobile industry, which means that those boxes were put in the
strangest places imaginable in garages, sometimes causing major heat
congestion. Also, as is usual in this sector of the industry, people are
used to reliable hardware, so they think nothing of simply shutting down
the power of the whole garage at the end of a working day.
Needless to say, this adds to the reduced lifetime of a CF.
If these are
overkill, I'd also consider a Net4501, which has a 133 MHz CPU, 64 MB RAM,
and 3 Ethernet ports.
I'd go with the former ones, just to be sure ;).
Forgive me for being frank, but it sounds like you wouldn't go with
either of them.
I don't know your business case, so it's very difficult to give you a
definite answer. I can only give you a (somewhat intimidating) experience
report; someone else might just as well give you a much better one.
I'd need to balance about 300 HTTP requests per second, totaling about
150kB/sec, between two servers.
So one can assume a typical request to your website is 512 bytes, which
is rather high. But that's not really an issue for LVS-DR.
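That figure is just the quoted throughput divided by the request rate
(counting 1 kB as 1024 bytes), e.g. in Python:

print(150 * 1024 / 300.0)   # -> 512.0 bytes per request on average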
I didn't clarify that. The 150kB/sec is outgoing. This isn't for all of
the website, just the static images/html/css.
So what are your expectations? Do you have SLAs on max. transfer rates?
1) anybody else doing this?
Maybe. Stupid questions: how often did you have to fail over, and how
often did it work out of the box?
Maybe once every 2 or 3 months I'd need to do some maintenance and
switch to the backup. Every time there was some problem with noarp not
coming up or some weird routing issue with the IPs. Complexity bad. :)
So frankly speaking: your HA solution didn't work as expected ;).
2) IIRC, using the DR method, CPU usage is not a real problem because
reply traffic doesn't go through the LVS boxes, but there is some RAM
overhead per connection. How much traffic do you guys think these should
be able to handle?
This is very difficult to say, since these boxes also impose limits
through their inefficient PCI buses, their rather broken NICs and the
dramatically reduced cache. It would also be interesting to know if
you're planning on using persistency in your setup.
Persistency is not a requirement. Note that most of the time a client
opens a connection once, and keeps it up as long as they're browsing
with keepalives.
Yes, provided most clients use HTTP/1.1 keepalive. But since you don't
need persistency on the application level anyway, you can simply run the
LVS without it.
But to give you a number to start with, I would say those boxes should
be able (given your constraints) to sustain 5Mbit/s of traffic with
about 2000pps (~350 Bytes/packet) and only consume 30 Mbyte of your
precious RAM when running without persistency. This assumes that every
packet of your 2000 pps is a new client opening a new connection to the
LVS, and that each connection entry stays in the table for an average of
1 minute.
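To show where the 30 Mbyte come from, here is the back-of-the-envelope
arithmetic as a minimal Python sketch. The 256 bytes per connection entry
is an assumption picked to match that figure; the bare connection structure
in the kernel is smaller, but hash table and allocator overhead come on top
of it, so treat it as an order-of-magnitude value:

# IPVS connection table estimate, no persistency:
# new connections/s  x  lifetime of an entry  x  bytes per entry
def conn_table_mb(new_conns_per_sec, lifetime_sec=60, entry_bytes=256):
    return new_conns_per_sec * lifetime_sec * entry_bytes / 2.0**20

print(conn_table_mb(2000))   # ~29 MB for 2000 new connections/s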
As mentioned previously, your HW configuration is very hard to compare to
actual benchmarks, thus take those numbers with a grain of salt, please.
That's not encouraging. I need something fairly cheap... otherwise I might
as well go down the commercial load balancer route. What do others use
Well, the numbers I have given you are (at a second look) rather low
estimates ;). Technically, your system should be able to deliver
25000 pps (yes, 25k) at a 50 Mbit/s rate. You would then, if every packet
were a new client, consume about all the memory of your system :). So I
would place the performance of your machine somewhere in between those
two numbers.
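(With the same crude assumptions as in the sketch above: 25000 new
connections per second, each entry kept for a minute at ~256 bytes, works
out to roughly 360 MB of connection table, far beyond the 64 MB these
boxes carry.)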
I'm deeply troubled that I can't give you more specific information.
for setups like this? Cheapest 1U's I can find are about $750 USD.
That's about the price, yep, you won't get much cheaper.
Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc