On Thu, 30 Aug 2007, Graeme Fowler wrote:
> The simplest way to work around this, given a large enough pool of
> clients, is to use persistence with a timeout appropriate to your
The problem with that is that you'll always be on the same
realserver. The -dh scheduler is designed to work with
squids: it lands the client on the squid that already caches
the content for that destination.
Graeme's way will work, but fetches will be slower (how much
slower I don't know, but the speed-up from differentiated
squids is required in commercial squid setups). As well, all
squids will wind up with all content, rather than the
content being spread around (ie each squid having a
different subset of the content).
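A sketch of the two alternatives with ipvsadm (the VIP and
squid addresses are made up; pick one setup, not both):

```shell
# Persistence (Graeme's way): any scheduler; a client sticks to
# one squid for the persistence timeout (-p, in seconds).
ipvsadm -A -t 192.168.1.1:80 -s rr -p 3600
ipvsadm -a -t 192.168.1.1:80 -r 10.0.0.1:3128 -m
ipvsadm -a -t 192.168.1.1:80 -r 10.0.0.2:3128 -m

# Destination hashing (-s dh): requests for the same destination
# always go to the same squid, so the cache is spread across the
# squids instead of duplicated on all of them.
ipvsadm -A -t 192.168.1.1:80 -s dh
ipvsadm -a -t 192.168.1.1:80 -r 10.0.0.1:3128 -m
ipvsadm -a -t 192.168.1.1:80 -r 10.0.0.2:3128 -m
```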
> If you stop web browsing for more than 60 minutes, you may
> be asked to authenticate again to continue.
Asking users to authenticate again to websurf, when they've
already authenticated for login, is a real pain. Is there
some other way to handle it? eg a single sign-on that works
for all users, or radius setting up iptables rules that stop
the machine from surfing until the user has authenticated
for websurfing?
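A minimal sketch of the iptables idea. This assumes some
post-authentication hook (eg run by the radius server's
accounting scripts) that knows the client's IP; the script
name and argument are invented:

```shell
#!/bin/sh
# allow-surf.sh (hypothetical): run after a user authenticates.
# $1 is the IP of the machine that just logged in.
CLIENT_IP="$1"
# open web access for that host only
iptables -I FORWARD -s "$CLIENT_IP" -p tcp --dport 80 -j ACCEPT

# one-time setup, done elsewhere: surfing is blocked by default
# iptables -A FORWARD -p tcp --dport 80 -j REJECT
```

A matching hook on logout (or an idle timer) would delete the
rule again with `iptables -D`.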
Joseph Mack NA3T EME(B,D), FM05lw North Carolina
jmack (at) wm7d (dot) net - azimuthal equidistant map
generator at http://www.wm7d.net/azproj.shtml
Homepage http://www.austintek.com/ It's GNU/Linux!