Re: No buffer space available

To: Jeremy Kusnetz <JKusnetz@xxxxxxxx>
Subject: Re: No buffer space available
Cc: 'Peter Mueller' <pmueller@xxxxxxxxxxxx>, "'lvs-users@xxxxxxxxxxxxxxxxxxxxxx '" <lvs-users@xxxxxxxxxxxxxxxxxxxxxx>
From: Roberto Nibali <ratz@xxxxxxxxxxxx>
Date: Mon, 30 Sep 2002 22:25:59 +0200
Hello Jeremy,

Jeremy Kusnetz wrote:
Before I go any further, Roberto, THANK YOU VERY MUCH for the help you've
given me today!  It's very much appreciated!

You're very welcome; tell your friends about LVS. It ain't over yet, though!

Hmm, I tried the IPaddr that comes with heartbeat-0.4.9.1 (I'm currently
using 0.4.9)

I still can't get it to work with a /32.

Looks like IPaddr calls findif to figure out the netmask and broadcast of an
IP/cider.
    ^^^^^^^
  :) that's what I'm drinking right now.

Here are some samples:
./findif 216.163.120.4/24
eth0    netmask 255.255.255.0   broadcast 216.163.120.255
  -- looks good
./findif 216.163.120.4/31
eth0    netmask 255.255.255.254 broadcast 216.163.120.5
  -- looks good
./findif 216.163.120.4/32
eth0    netmask 255.255.255.0   broadcast 216.163.120.255
  -- Huh????

Send me this findif tool if it's not too big, and I'll fix it.
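
For reference, the arithmetic such a tool should perform is simple; here's a minimal standalone sketch (my own illustration, not findif's actual source):

#include <stdio.h>
#include <stdlib.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* derive netmask and broadcast from a dotted-quad address and prefix length */
int main(int argc, char **argv)
{
    struct in_addr ip, mask, bcast;
    int prefix;

    if (argc < 3 || inet_aton(argv[1], &ip) == 0)
        return 1;
    prefix = atoi(argv[2]);
    if (prefix < 0 || prefix > 32)
        return 1;

    /* special-case prefix 0: shifting a 32-bit value by 32 is undefined
     * in C, so never shift that far */
    mask.s_addr = prefix ? htonl(0xFFFFFFFFUL << (32 - prefix)) : 0;
    /* broadcast = address with all host bits set; for a /32 the host
     * part is empty, so broadcast == the address itself */
    bcast.s_addr = ip.s_addr | ~mask.s_addr;

    /* inet_ntoa reuses a static buffer, hence two separate calls */
    printf("netmask %s ", inet_ntoa(mask));
    printf("broadcast %s\n", inet_ntoa(bcast));
    return 0;
}

Fed 216.163.120.4 and 32, this prints netmask 255.255.255.255 broadcast 216.163.120.4, which is what your /32 case should have shown.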

For now I've hardcoded the proper netmask and broadcast I want for a /32
into the IPaddr script.  Interestingly enough, when I do this and rerun
findif for /32, I get the right values:
./findif 216.163.120.4/32
eth0    netmask 255.255.255.255 broadcast 216.163.120.4

:) Funny. Well, I suggest you leave it like that then, if it really works.
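
One guess at what's happening (just an assumption; I haven't read findif's source): for a /32 it may fail to compute the mask itself and fall back to reporting whatever netmask is already configured on the interface. A tool can query that from the kernel roughly like this:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
    struct ifreq ifr;
    int fd;

    if (argc < 2)
        return 1;
    fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return 1;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, argv[1], IFNAMSIZ - 1);
    /* ask the kernel for the netmask currently set on the interface */
    if (ioctl(fd, SIOCGIFNETMASK, &ifr) < 0) {
        perror("SIOCGIFNETMASK");
        return 1;
    }
    printf("%s netmask %s\n", argv[1],
           inet_ntoa(((struct sockaddr_in *)&ifr.ifr_addr)->sin_addr));
    close(fd);
    return 0;
}

That would explain why findif only started printing 255.255.255.255 after you had brought the address up with the right mask.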

So I know hardcoding these values in the script is not ideal, but it looks
like it works.  Any reason not to do so?  LVS seems to be functional in my
development environment.

I don't know; you'd have to check back with the author of this script. Maybe we're both too stupid to understand and use it correctly.

Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc


