RE: [squid-users] Using Squid as accelerator with failover mechanism

From: <sean.upton@dont-contact.us>
Date: Wed, 07 Nov 2001 11:20:17 -0800

Rather than mirroring, why not set up another node to serve the same content
and round-robin between them? Either use a shared file server, or something
like rsync, to keep them serving the same content.
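For instance (a minimal sketch; the hostname and paths here are
hypothetical), a cron-driven rsync on the second node can pull the master's
docroot:

  # pull content from web1 and delete anything that was removed there
  rsync -az --delete web1:/var/www/htdocs/ /var/www/htdocs/

Run from cron every few minutes, that keeps the two nodes close enough in
sync for mostly-static content.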

We have a multi-tiered cluster approach for my company's setup; all clusters
run heartbeat 0.4.9 (Linux-HA project) on Linux 2.4. A 2-node Squid
accelerator cluster sits in front of an n-node web server cluster, with a
2-node file/db server cluster behind that. Each tier manages failures of its
own peers: if Squid, or the box it runs on, dies on the accelerator tier, the
backup accelerator box takes over its IP address via gratuitous ARP. The same
goes for the n-node web server cluster behind Squid, which Squid balances
round-robin; since Squid only does round-robin, we rely on cluster software
installed on the web server nodes to take over for one another. Our
arrangement only tolerates the failure of a single web node, though caching
can often mitigate part of a multi-node outage. In that case Squid keeps
round-robining across n addresses even though fewer servers are actually
answering: in a 3-node web cluster with 2 dead nodes, the survivor can take
over only one failed IP, so 2 of the 3 addresses still respond and roughly
2/3 of uncached requests are served without failure (if only 1 node dies, you
still have 100% availability). YMMV if you use different clustering software.
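For reference, that round-robin accelerator setup looks roughly like the
following in squid.conf. This is only a sketch, not our exact config: the
backend hostnames are made up, and it uses the accelerator syntax from later
Squid releases (2.4-era setups use the httpd_accel_* directives instead):

  # hypothetical backends; Squid rotates uncached requests across them
  http_port 80 accel defaultsite=www.example.com
  cache_peer web1.internal parent 80 0 no-query originserver round-robin
  cache_peer web2.internal parent 80 0 no-query originserver round-robin
  cache_peer web3.internal parent 80 0 no-query originserver round-robin

If a backend stops answering, Squid will normally mark that peer as dead and
rotate among the remaining ones until it recovers.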

And, as previously suggested, priming Squid's cache with wget and a cron job
is often a good way to increase your HIT ratio and ensure better content
availability, especially if you serve a lot of static or infrequently
changing semi-dynamic content.
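A minimal sketch of such a priming job (the URL and recursion depth are made
up; --delete-after tells wget to discard each file after fetching it, which
is exactly what you want when the point is just to warm the cache):

  # hypothetical crontab entry: crawl the site through the accelerator
  # hourly so popular pages stay in Squid's cache
  0 * * * * wget -q -r -l 3 --delete-after http://www.example.com/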

Sean

-----Original Message-----
From: Andre Ruppert [mailto:ar@vision-net.de]
Sent: Wednesday, November 07, 2001 6:02 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Using Squid as accelerator with failover mechanism

Hello list,

I didn't find anything similar in the archives,
so I'm posting a new question:

I want to use squid as an accelerator for one dedicated webserver, with
special behavior:

The webserver is up: Squid refreshes immediately when requested and stores
the result in its cache.
The webserver is down: Squid serves the cached content _until_ the "real"
webserver is up again.

If that isn't possible:
Perhaps I have to mirror the webserver's content with "wget" or something
else to a webserver on the same machine where squid is installed.
The next question is: how do I force squid to use the mirror, and switch
back to the real one later...?

Any hints?

Greetings

--
Andre Ruppert
Technical Director
<ar@vision-net.de>
=================================
Vision Consulting Deutschland oHG
Osterather Str.7
50739  Köln
tel    +49 221 917 15 33
fax   +49 221 917 15 38
email info@vision-net.de
www.vision-net.de
=================================