> Hello Hayden,
>
> I sincerely apologize for the abrupt break in our email exchange. I had
> some exams I needed to study for, and it turned out that I also had to
> give a presentation and finish a case study. I hope you understand.
Being a college student myself, I understand the situation. It's midterm
time here, and they all want to give me tests on the same day.
>
> >>>IP Virtual Server version 1.0.8 (size=524288)
> >>
> >> ^^^^^^
> >>This is a rather high hash table size, you don't need to set it so high.
> >
> >
> > I set this size based on another customer's requirements. I believe
> > he's incorrect in his requirements, but the box has enough memory to
> > support such a hash, so I set it to that size. Does a large hash impose
> > any performance penalties similar to one that's too small?
>
> I'm not too well versed in hashing algorithm theory, but if you have a
> huge hash table and not many addresses to fill it with, you might spend
> more time looking up the service than if you made the hash table
> considerably smaller and let the structure use linked lists for the
> occasional collision. YMMV; I bet, though, that your table is never full.
I haven't experienced any problems and haven't received any speed
complaints. I'm going to leave the table as it is, since we've got some
memory to burn.
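For my own notes, and please correct me if I'm off: as far as I understand,
the connection table size isn't tunable at runtime in this version; it's
fixed when the kernel is built as 2^bits entries, so 524288 would be 19
bits. Something along these lines, though the exact config symbol may
differ on our kernel tree:

# rough sketch; the exact symbol name depends on the kernel/patch version
grep TAB_BITS /usr/src/linux/.config
# e.g. CONFIG_IP_MASQ_VS_TAB_BITS=19  means table size 2^19 = 524288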
>
> >>>When the phones start ringing, the output is the same except activeconn
> >>
> >> ^^^^^^^^^^^^^^^^^^^^
> >
> > I'm sorry for not being straightforward. What I mean is that when the
> > problem occurs, all traffic is directed to one server. The load is too
> > great for a single server, so the service stops working. At that point the
>
> Ok, but if I understand you correctly here, then the behavior you're
> seeing, packets being sent to only one RIP, is the expected one, no? I
> mean, you state that the service stops working, so the real server with a
> non-existent service will send a RST to a SYN, and that's why you don't
> see any connections anymore. Maybe I misunderstood your statement above
> in the context of the problem you initially described.
You may be misunderstanding a bit. The service running on the realservers
seems to be normal while this problem is going on. No connections appear
in the ipvsadm output, though. Making a request to the virtual service
directs me to the same overloaded realserver, which doesn't respond right
away, causing an outage.
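Next time it's live I'll check both ends from the director, roughly like
this (addresses as in this thread; the telnet to the RIP assumes the
service is also bound to the RIP, as you note further down):

/sbin/ipvsadm -L -n            # is ~.253 still listed with weight 1?
telnet 216.151.100.253 80      # does the quiet realserver answer directly?
telnet 216.151.100.246 80      # does the VIP still hand everything to ~.252?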
>
> > phone starts ringing with a steaming customer on the other end, and we
> > have to do something. I was just trying to shed some humor on the
> > situation.
>
> :) I thought so, but I wasn't sure. Welcome to the club. I bet you have
> a bitch of a time explaining the inner workings of LVS to the customer
> in a way that convinces him you're doing the right thing (tm).
>
> >>Just to make sure: We only and always talk about the service defined by
> >> 216.151.100.246:80?
> >
> > We're only talking about this service. I'm not sure if the problem occurs
> > across the farm on port 5555, as only one server is active while the other
> > is on standby. I'll make a note to check that service next time the problem
> > occurs. It's been close to 24 hours and no sign of the problem yet.
> > Nothing has changed.
>
> As you haven't posted any messages to this list so far I can assume that
> either:
>
> o you were waiting for an answer from me
> o or the problem is solved
>
I was interested in getting a response to see what you thought; however,
since fixing the network configuration issue on the one realserver over a
week ago, the problem hasn't shown up again.
> >>I don't get it. Do I understand you correctly if I say:
> >>
> >>o everything works after a reboot for a while with service VIP:80
> >>o after an undefined amount of time your picture looks as follows:
> >>
> >> TCP 216.151.100.246:80 rr
> >> -> 216.151.100.253:80 Route 1 0 0
> >> -> 216.151.100.252:80 Route 1 2 0
> >>
> >>o no request will be forwarded to ~.253 anymore from that point on
> >>o flushing doesn't help (this is clear, actually)
> >>o we have only a trace of a working setup
> >
> > For the past three days, excluding today, the problem arose after about a
> > day's worth of traffic. You're understanding my situation correctly, and
> > your picture is what I see.
>
> Just to round up my understanding: Is the service for RIP ~.253 up and
> running when this problem occurs? Can you do a 'telnet RIP 80' from the
> director (only if the service was bound to the RIP too in the beginning
> of course).
>
> > It has occurred twice so far, both times approximately 24 hours apart.
> > Traffic is not cut off during the reboot.
>
> Hmm, again out of curiosity, what kind of NICs do you have in your
> director, and if the problem occurs again, could you also append an 'ip -s
> link show dev eth0', please?
They're Intel management adapters, 82559s I believe, or something similar.
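I'll also grab the interface counters next time; if I understand the point
of that command, it's the error and drop columns we're after:

/sbin/ip -s link show dev eth0
# look for non-zero values in the errors/dropped/overrun columns
# of the RX: and TX: stanzas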
>
> > [spinbox@lb ~]$ /sbin/ip link show dev eth0
> > 4: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
> > link/ether 00:d0:b7:3f:ba:f9 brd ff:ff:ff:ff:ff:ff
> >
> > [spinbox@lb ~]$ /sbin/ip neigh show dev eth0
> > 216.151.100.253 lladdr 00:b0:d0:b0:10:09 nud reachable
> > 216.151.100.252 lladdr 00:b0:d0:b0:10:08 nud reachable
> > 216.151.100.1 lladdr 00:00:0c:07:ac:0e nud reachable
> > 216.151.100.251 lladdr 00:e0:81:01:32:54 nud reachable
>
> Ok.
>
> > this looks bad
> >
> > [spinbox@ads2 ~]$ /sbin/ip link show dev eth0
> > Cannot send dump request: Connection refused
> >
> > [spinbox@ads2 ~]$ /sbin/ip neigh show dev eth0
> > Cannot send dump request: Connection refused
> >
> > We were having network issues on .252 for some reason. The client's net
> > admin went in and "supposedly" straightened things out. I just noticed
> > that there were some differences in the way the network was brought up
> > compared to .253. I modified .252 and now I see...
>
> What did you modify? Kernel?
>
The gateway for the virtual farm IP on the loopback wasn't specified as
216.151.100.246; I changed that. Unfortunately I don't remember whether I
changed anything else. No kernel mods were made.
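So that I don't lose track of the state again, I'm thinking of snapshotting
the network config on each realserver; just a rough sketch, the filenames
are made up:

/sbin/ifconfig -a  > /root/netstate-`hostname`.txt
/sbin/route -n    >> /root/netstate-`hostname`.txt
# run on both .252 and .253, then diff the two files to spot drift
# (addresses and MACs will legitimately differ, of course)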
> > [spinbox@ads2 ~]$ /sbin/ip link show dev eth0
> > 2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
> > link/ether 00:b0:d0:b0:10:08 brd ff:ff:ff:ff:ff:ff
> > [spinbox@ads2 ~]$ /sbin/ip neigh show dev eth0
> > 216.151.100.251 lladdr 00:e0:81:01:32:54 nud reachable
> > 216.151.100.250 lladdr 00:d0:b7:3f:ba:f9 nud reachable
> > 216.151.100.253 lladdr 00:b0:d0:b0:10:09 nud reachable
> > 216.151.127.1 lladdr 00:00:0c:07:ac:0e nud reachable
> >
> >
> > on 216.151.100.253
> >
> > [spinbox@ads3 ~]$ /sbin/ip link show dev eth0
> > 2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
> > link/ether 00:b0:d0:b0:10:09 brd ff:ff:ff:ff:ff:ff
> >
> > [spinbox@ads3 ~]$ /sbin/ip neigh show dev eth0
> > 216.151.100.251 lladdr 00:e0:81:01:32:54 nud reachable
> > 216.151.100.250 lladdr 00:d0:b7:3f:ba:f9 nud reachable
> > 216.151.100.252 lladdr 00:b0:d0:b0:10:08 nud reachable
> > 216.151.127.1 lladdr 00:00:0c:07:ac:0e nud reachable
> >
> > I'm not sure what the difference is between these boxes. I'm pretty sure
> > their hardware is the same. The same distribution is on each box, along
> > with the same kernel and network module.
> >
> > Here's the script that brings up the network
> >
> > #load network modules
> > /sbin/insmod /lib/modules/`uname -r`/net/e100.o
> > /sbin/insmod /lib/modules/`uname -r`/net/3c59x.o
> > #enable hidden patch
> > echo 1 > /proc/sys/net/ipv4/conf/all/hidden
> > echo 1 > /proc/sys/net/ipv4/conf/lo/hidden
> >
> > #configure loopback
> > ifconfig lo 127.0.0.1 netmask 255.255.255.255
>
> Why not /8 as netmask?
For some reason this machine was receiving packets from other hosts on the
same net and responding to them. The customer's network admin changed
some settings. This is one of them.
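For comparison, I believe the stock loopback setup would just use the usual
/8; the /32 here is the admin's workaround, not the normal default:

# conventional loopback config, what most distros do out of the box
ifconfig lo 127.0.0.1 netmask 255.0.0.0 up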
> > ifconfig lo:0 216.151.100.246 netmask 255.255.255.255 broadcast
> > 216.151.100.246 up
> > /sbin/route add -host 216.151.100.246 lo:0
>
> You shouldn't need this.
Why is that? Because routing is added automatically in 2.2+ kernels?
I added it because it was in the HOWTO for DR setups.
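If the route really is added automatically, it should be easy to confirm
right after the lo:0 ifconfig, roughly:

/sbin/route -n | grep 216.151.100.246
# if a host route for the VIP over lo is already listed, the explicit
# 'route add -host 216.151.100.246 lo:0' line above is redundant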
>
> > route add -net 127.0.0.0 gw 127.0.0.1 netmask 255.0.0.0 lo
>
> ditto. Any reason for this deviation?
The network admin did this; I'm not sure why.
>
> > #Configure public interface
> >
> > ifconfig eth0 216.151.127.7
> > route add default gw 216.151.127.1
> >
> > #setup alias to network the director resides on
> > ifconfig eth0:0 216.151.100.252
>
> What is the netmask?
255.255.255.0
>
> > route add -host 216.151.100.252 dev eth0
>
> Why? ifconfig takes care of the routes since 2.2.x kernels.
>
> > It's nearly impossible to watch the dump output on these heavily loaded
> > servers. For my testing I capture only the packets that match the address
> > of the client. Is this sufficient? When a request gets directed to .252,
>
> Yes.
>
> > .253 doesn't see anything. It's the same the other way around. At first
> > it looked bad, as sometimes a request directed to .252 would also spew
> > packets on .253. I later realized that the request I was making makes
> > another request to the virtual farm on port 80. That request can go to
>
> Why?
It's just the way our service works. Our service can be set up so that a
request triggers another request to the service. For instance, we run ad
servers, and it's possible for an ad request to call another ad request to
the service. Since we don't use persistence on this service, the first
request can go to one server, but the request called from that request can
go to the second server. This almost makes our configuration look broken
if you don't know what's going on with our service.
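For completeness, the virtual service in question is set up more or less
like this; I'm reconstructing it from the ipvsadm listing earlier in the
thread, so treat it as a sketch rather than our exact script:

/sbin/ipvsadm -A -t 216.151.100.246:80 -s rr
/sbin/ipvsadm -a -t 216.151.100.246:80 -r 216.151.100.252 -g -w 1
/sbin/ipvsadm -a -t 216.151.100.246:80 -r 216.151.100.253 -g -w 1
# round robin, direct routing, no persistence: back-to-back ad requests
# from the same client can land on different realservers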
> > either server as we don't use persistence for that service and are using
> > rr scheduling.
>
> Yes. And that's what you should see. If you tcpdump on ~.252 and you do
> a 'telnet VIP 80' from outside the cluster you should once see the
> request and once not, or your setup is broken :)
>
> >>>14:12:27.500357 64.42.115.38.42710 > 216.151.100.246.http: . ack 4360 win
> >>>14480 <nop,nop,timestamp 379374993 56808229> (DF)
> >>>14:12:27.510547 64.42.115.38.42710 > 216.151.100.246.http: . ack 5808 win
> >>>17376 <nop,nop,timestamp 379374994 56808230> (DF)
> >>>14:12:27.514784 64.42.115.38.42710 > 216.151.100.246.http: . ack 7256 win
> >>>20272 <nop,nop,timestamp 379374994 56808230> (DF)
> >>>14:12:27.559938 64.42.115.38.42710 > 216.151.100.246.http: . ack 8704 win
> >>>23168 <nop,nop,timestamp 379374999 56808235> (DF)
> >>
> >>Just out of curiosity, is CIP a Windows client, or do you have a bandwidth
> >>limitation somewhere on a router?
> >
> > CIP is a Linux machine on which I've opened a telnet connection to port
> > 80 and made a request.
>
> Ok, interesting trace. Is the page you're fetching big?
It can't be too large; it's just an ad banner.
>
> > Definitely lots of incoming connections, usually around 10 million a day.
> > There aren't any log files that I'm aware of that are piling up a stream
> > of messages. The director is a PIII 600 with 192 MB of RAM. It's got one
> > of those Intel 810 motherboards with an onboard Intel management adapter.
> > Our distro uses sysklogd-1.3.31-17.
>
> Ok, do you use the intel driver or the vanilla kernel driver?
We use the drivers from Intel, and we're very good about keeping up to
date. I believe they're up to 1.6.29, and they've got a patch to .1 for
2.2.20 kernels.
>
> >>Sorry, this doesn't help too much like that, I see that connections are
> >>being forwarded correctly. And the other problem is, that this is a case
> >>where everything works. I need this output when your problem shows up.
> >
> > I set the debug level to 4. I'm waiting for the problem to happen, well
> > not really, but I'm prepared.
>
> :) Let's hope it doesn't show up again.
>
> > Please let me know if you need anything else.
>
> Thank you very much for the completeness of your answer. I hope we clean
> up the problems in the next round. Definitely when the problems show up
> again.
>
> Best regards and sorry again for the late reply,
> Roberto Nibali, ratz
>
Thanks for all your input. Let's hope the problem has been solved by
fixing the incorrect network configuration on the one realserver.
>
>
> _______________________________________________
> LinuxVirtualServer.org mailing list - lvs-users@xxxxxxxxxxxxxxxxxxxxxx
> Send requests to lvs-users-request@xxxxxxxxxxxxxxxxxxxxxx
> or go to http://www.in-addr.de/mailman/listinfo/lvs-users
>
Hayden Myers
Support Manager
Skyline Network Technologies
hayden@xxxxxxxxxxx
(888)917-1600x120