Hello,
On Wed, 9 May 2001, Radu-Adrian Feurdean wrote:
> > > kernel: IPVS: Incoming failed TCP checksum from bla.bla.bla.bla (size=20)!
> >
> > eth model? Maybe caused by remote attacks? I can see such
> > messages in my Linux 2.2 logs too.
>
> Same thing with eepro100 (Intel card) and tulip or de4x5 (quad D-Link with DEC
> chipset). We had to turn all TCP options off on both the director and the real
> servers, because otherwise every packet generated by a Linux client
> caused a similar message. The cables are good and the switches are Cisco
> throughout the network, so the packets are not trashed on our network (I hope).
Maybe these packets are damaged or invalid before they enter your
network.
> > > 2.4.2+ipvs_0.2.6, 2.4.2+ipvs_0.2.7, 2.4.3+ipvs_0.2.8, 2.4.4+ipvs_0.2.11
> > > All these combinations (SMP-based) crashed in less than 8 hours of high
> > > traffic. 2.4.2+0.2.7 held up over a weekend at low traffic (~2.5 Mbps)
> >
> > We need this crash report! But with the latest versions, please.
>
> We will try to get something at the next director install or when new "SMP
> tests" are approved. For now that machine is running fine with a UP kernel. The
> previous crashes locked the machine up completely, so nothing reached
> syslog. And the admins who were on duty in that period (one weekend,
> one night and one legal holiday) didn't think to log the
> messages from the serial console.
>
> Unfortunately, IPVS worked great during testing, and all the problems appeared
> only in production, and of course not during work hours.
:) The problems know when to appear. Maybe these hours are
work hours for others :)
> Tomorrow we'll start load balancing FTP. We'll be prepared to capture crash
> logs in case anything goes wrong.
ok :)
> > Why do you expect the LVS to use the OUTPUT chain? You mention
> > the mangle table? Locally generated packets?
>
> Oops, I missed that in the docs. I need the mangle table in order to mark
> packets for CBQ shaping. Unfortunately I see that I can't mark packets based
> on the virtual service IP, since we use NAT. So I have to use other, less
> flexible tricks to do that.
Currently, nobody uses the fwmark value after routing when
forwarding packets. LVS even changes it in some cases, i.e. before
sending the packet to POST_ROUTING. Hm, maybe this hurts the QoS
users, but in any case you can mark the packets only in pre_routing
or in local_out (the latter is not used by LVS). Maybe we have to revisit
this fwmark change in LVS. It was added to allow LVS to work with
netfilter connection tracking and NAT.
OTOH, the QoS ingress hook also runs in PRE_ROUTING, at priority 1:
net/sched/sch_ingress.c:ing_hook
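
To illustrate, here is a minimal sketch of marking packets for a
virtual service in PRE_ROUTING, i.e. before the LVS/NAT code can see
or change the fwmark. This is only an example: the VIP address, the
mark value and all names are made up, and the 2.4 netfilter module
API is assumed.

/* Sketch only: set the fwmark for packets to a VIP in PRE_ROUTING,
 * at mangle priority, before LVS/NAT rewrites the addresses.
 */
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/ip.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>

#define VIP     __constant_htonl(0xC0A80001)   /* 192.168.0.1, example */
#define VIPMARK 1                               /* matched by the tc "fw" classifier */

static unsigned int
mark_vip(unsigned int hooknum, struct sk_buff **pskb,
         const struct net_device *in, const struct net_device *out,
         int (*okfn)(struct sk_buff *))
{
        struct iphdr *iph = (*pskb)->nh.iph;

        /* the destination is still the VIP here, NAT has not happened yet */
        if (iph->daddr == VIP)
                (*pskb)->nfmark = VIPMARK;
        return NF_ACCEPT;
}

static struct nf_hook_ops mark_vip_ops = {
        { NULL, NULL },         /* list */
        mark_vip,               /* hook */
        PF_INET,                /* pf */
        NF_IP_PRE_ROUTING,      /* hooknum */
        NF_IP_PRI_MANGLE        /* -150: before NAT (-100) and the
                                   ingress QoS hook (1) */
};

static int __init mark_vip_init(void)
{
        return nf_register_hook(&mark_vip_ops);
}

static void __exit mark_vip_exit(void)
{
        nf_unregister_hook(&mark_vip_ops);
}

module_init(mark_vip_init);
module_exit(mark_vip_exit);

This is essentially what the mangle table's PREROUTING chain with the
MARK target already does at the same priority; the resulting mark can
then be matched with the tc "fw" classifier for the CBQ classes.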
> So that WAS the correct behavior (well, not the expected one).
The authors simply don't want the packets to traverse many
chains, and particularly many FILTER chains. INPUT is traversed when
packets are delivered locally, FORWARD for forwarded traffic and
OUTPUT for locally generated packets.
IMO, even Netfilter does not allow marking packets after they are
NAT-ed: the NAT processing always comes after the fwmarking. OTOH, with
LVS you can use the ingress queues, but only before NAT.
So, it seems some setups are not possible. There are too
many features that collide :)) What we see is that the user can't
control which chains a packet traverses. For the same performance
reasons the chains are fixed. Maybe a mangle table hook at post_routing
would be useful, at priority -150? One more hook, and everyone will be
happy :) But would this be a hook that serves only the QoS users? Or
maybe a full mangle table allowed after routing. Stupid idea, though:
packet changes are not recommended at post_routing if those changes
require rerouting. I don't see a way to do fwmarking after NAT. Maybe
only this post_routing hook, for fwmarking only, as needed for QoS.
Maybe there is another solution. Maybe you will stick with the ingress
solution.
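
To make the post_routing idea above a bit more concrete, such a
fwmark-only hook would register roughly like this (again just a
sketch with made-up names, assuming the 2.4 netfilter API and the
same includes as the sketch above):

/* Hypothetical: a fwmark-only hook after routing. */
static unsigned int
post_mark(unsigned int hooknum, struct sk_buff **pskb,
          const struct net_device *in, const struct net_device *out,
          int (*okfn)(struct sk_buff *))
{
        /* the packet is already routed here, so the mark could be set
         * from the addresses as LVS left them; only (*pskb)->nfmark
         * should be touched, nothing that would require rerouting */
        return NF_ACCEPT;
}

static struct nf_hook_ops post_mark_ops = {
        { NULL, NULL },         /* list */
        post_mark,              /* hook */
        PF_INET,                /* pf */
        NF_IP_POST_ROUTING,     /* hooknum */
        NF_IP_PRI_MANGLE        /* the suggested -150 */
};

Whether one more hook only for the QoS marking case is worth it is
exactly the open question.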
> Radu-Adrian Feurdean
> mailto: raf@xxxxxxxx
> -------------------------------------------------------------------
> "If the night is silent enough you can hear a Windows NT rebooting"
Regards
--
Julian Anastasov <ja@xxxxxx>