also I don't know how to test for an idle connection on the director...
not much hope there unless you can do L7 (I don't know what L7 would look
for; the absence of a clean disconnect?). However, as Malcom says in the L7
section of the HOWTO, L7 should be done by the application; that's what the
application is for.
Microsoft has a built-in "network load balancing" (NLB) that is based purely
on network traffic between boxes. To make it more robust they have something
called "session directory" that, upon authentication/identification, will
redirect an incoming connection to the appropriate box in the "network load
balancing" cluster.
This style of load balancing is fine for certain applications, and is
closer to an L7 approach, since where your session is processed is
determined by your login/ID instead of where your connection(s) are
coming from.
It also does this by being pretty stupid and assuming you have a crapton of
bandwidth: each box participating in a "network load balancing" setup
receives a copy of everything and has to pick out what it is going to
process... (all machines participating get a fake slaved MAC address and
IP shared between them) (the shared MAC address being M$'s solution to the
ARP problem?? I dunno)
However, for terminal servers it is no good. I need to account for %free
CPU and %free memory as important metrics (ie one real server with 400%
load (4 CPUs @ 100%) and 30kb/s of network traffic with 4 users, another
real server with 80% load and 900kb/s of traffic with 10 users; I would
want new users to land on the lower-CPU-load server, as I am not really
bandwidth bound until I get above 1Gb/s (and if I hit that I have other
problems), but the Windows way does not take any CPU/memory metrics into
account).
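Roughly the sort of weighting I have in mind, as a sketch (the 70/30 split
and the example numbers are just made up for illustration; the metrics would
really come from perfmon, SNMP, or an agent on each terminal server):

    # Rough sketch: turn %free CPU and %free memory into an LVS-style weight.
    # The 70/30 split is an arbitrary choice favouring CPU headroom, since
    # these terminal servers are CPU bound rather than bandwidth bound.

    def weight_from_metrics(cpu_free_pct, mem_free_pct):
        """Combine free CPU and free memory into a weight in 0..100."""
        weight = round(0.7 * cpu_free_pct + 0.3 * mem_free_pct)
        return max(0, min(100, weight))

    # Example like the one above: a 4-CPU box with every CPU at 100% (0% free)
    # vs a box with plenty of CPU headroom, both with some free memory.
    print(weight_from_metrics(cpu_free_pct=0, mem_free_pct=60))    # -> 18
    print(weight_from_metrics(cpu_free_pct=60, mem_free_pct=60))   # -> 60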
--
Expensive ~10k L7 routers have licensed the spendy tech and do offer
some level of integration with the M$ session directory to handle moving a
user (instead of the same IP) back to the real server they are on.
Maybe I have a misunderstanding of how the L7-lvs stuff works, but at
least right now I don't see how it would be possible :/
--
lvs-dr is a pretty good fit, since I can adjust where people land by having
my own userland metrics alter the weight tables. Terminal Services lends
itself favorably to an lvs-dr approach: incoming traffic to the director
consists mostly of keyboard and mouse movement operations (it only
occasionally spikes if someone uploads a file), so it is fairly light,
whereas outbound traffic is fairly large but is sent directly from the real
server to the client (pushing out bitmap updates of what the user's desktop
looks like).
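The kind of userland weight adjustment I mean is just a small loop on the
director calling ipvsadm. A minimal sketch, where the VIP, real server
addresses, and weights are made-up placeholders (the weights would really
come from the CPU/memory metrics above):

    # Sketch of a userland monitor on the director pushing weights into the
    # LVS weight table with ipvsadm. Addresses and weights are placeholders.
    import subprocess

    VIP = "192.168.1.110:3389"                   # hypothetical RDP virtual service
    weights = {"10.1.1.2:3389": 18,              # e.g. from weight_from_metrics()
               "10.1.1.3:3389": 60}

    for real_server, weight in weights.items():
        # -e edits a real server that was already added to the service
        # (e.g. with 'ipvsadm -a -t VIP -r RIP -g -w 1' for LVS-DR).
        subprocess.run(
            ["ipvsadm", "-e", "-t", VIP, "-r", real_server, "-w", str(weight)],
            check=True,
        )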
There is a Windows management tool that reports idle time, but I am not
aware of a MIB/SNMP way to export that information.
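One crude workaround might be to scrape the built-in quser command on each
real server and export the idle times yourself. A rough sketch, assuming
quser's usual column layout (check against the real output; disconnected
sessions drop the SESSIONNAME column, so the fields shift):

    # Rough sketch: scrape per-session idle time from the Windows 'quser'
    # command on a real server. Column layout assumed here:
    #   USERNAME  SESSIONNAME  ID  STATE  IDLE TIME  LOGON TIME
    import subprocess

    def session_idle_times():
        out = subprocess.run(["quser"], capture_output=True, text=True,
                             check=True).stdout
        sessions = {}
        for line in out.splitlines()[1:]:          # skip the header row
            fields = line.split()
            if len(fields) < 4:
                continue
            username = fields[0].lstrip(">")       # ">" marks the current session
            if fields[1].isdigit():                # no SESSIONNAME column
                state, idle = fields[2], fields[3]
            else:
                state, idle = fields[3], fields[4]
            sessions[username] = (state, idle)     # idle like '.', '59', '1+02:30'
        return sessions

    print(session_idle_times())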
So you have two problems:
o keeping up idle (or backgrounded) sessions that arise from a clean
disconnect. You should be able to keep these open for a long time (eg
weeks) if they aren't using any significant resources.
o killing 100% CPU sessions from a dirty disconnect.
Does the application know whether the client has done a clean disconnect or
not? I assume not, or else you wouldn't be posting at all.
Correct. The terminal server session does not know whether a disconnect is
clean or not. All it does is start recording idle time from the last
keyboard/mouse input received.
Applications running inside the terminal server session (ie M$ Word)
usually have no idea what the difference between a terminal server session
and a normal desktop is.
What does the app vendor say about this problem?
M$'s documents say: use session directory and NLB; if you grow beyond
that, buy one of these $$$ products.. ;p If only I had a random 10~20k
lying around...
How do you handle the problem when there is no LVS?
I only had a single terminal server. So if a user had a dirty exit, they
were either killed off at 1 day of idle time, or, if they reconnected,
they got their session back.
_________________________________________________________________________
Info:                                Email:
Joseph T. Duncan                     duncan@xxxxxxxxxxxxx
109 Kidder Hall
Corvalis Or 97333