Re: Squid for HTTP load balancing?

From: Alvin Starr <alvin@dont-contact.us>
Date: Mon, 5 Oct 1998 14:08:44 -0400 (EDT)

On Mon, 5 Oct 1998, Alex Rousskov wrote:

> On Mon, 5 Oct 1998, David Luyer wrote:
>
> > > Is Squid able to act as an HTTP load balancer? Idea: one Squid receives
> > > all requests for a www.***.* site and forwards them to a farm
> > > of httpd daemons behind a firewall, takes the responses, and sends them
> > > back to the browser.
> > >
>
> To spread the load, you can test-drive the CARP support in Squid, perhaps
> with some tweaking for "special" URLs.
>
> > If you want to be smarter, you could try to maintain a shared-memory segment
> > with one word containing the latest load average for each server.
>
> Shared memory assumes that all Squids run on the same machine. Not a good
> idea in most cases. Note that the "redirector" sees all the requests that
> second-level Squids are processing. Thus, you can maintain a pretty good
> local estimate of the load on the second-level servers if you want to try
> some smart load balancing.
>
> However, the first-level cache will introduce latency when forwarding requests.
> Estimate the benefits twice before introducing one more level of
> indirection...
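
To make Alex's load-estimate idea concrete: the bookkeeping can be as
simple as a decayed request count per back end, picking the smallest one
each time. A rough sketch (the host names and the half-life constant are
invented, and the stdin/stdout plumbing Squid expects is left out here;
the round-robin script further down shows that part):

    import time

    # Cheap local "load" estimate: per-server request counts that decay
    # over time. Host names and the 10-second half-life are made up.
    BACKENDS = ["web1.example.com", "web2.example.com", "web3.example.com"]
    HALF_LIFE = 10.0
    load = {b: 0.0 for b in BACKENDS}
    last = time.time()

    def pick_backend():
        """Age the counts, then return the least-loaded back end."""
        global last
        now = time.time()
        factor = 0.5 ** ((now - last) / HALF_LIFE)
        last = now
        for b in BACKENDS:
            load[b] *= factor              # old requests fade away
        target = min(BACKENDS, key=load.get)
        load[target] += 1.0                # charge the new request to it
        return target

    if __name__ == "__main__":
        for _ in range(10):                # toy demo: picks spread out evenly
            print(pick_backend())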

Take a look at www.webpersonals.com. I set up a Linux system running
Squid there to act as a front-end accelerator and load distributor.
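
The front-end (accelerator) half of such a setup is only a few lines of
squid.conf. Roughly, in Squid 2.x-era syntax (the values and the
redirector path below are placeholders, not our actual config):

    # squid.conf fragment: Squid as a front-end accelerator
    http_port 80                        # answer where the web server would
    httpd_accel_host virtual            # accelerate name-based virtual hosts
    httpd_accel_port 80
    httpd_accel_with_proxy off          # pure accelerator, not a proxy too
    httpd_accel_uses_host_header on     # preserve Host: for the back ends

    # hand every URL to an external program for load distribution
    redirect_program /usr/local/bin/rr_redirect.py
    redirect_children 5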

We ended up using a round-robin scheduling scheme, since it was the
easiest to implement and worked well enough for what we were doing.
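
Our actual script is long gone, but a minimal round-robin redirector for
Squid's redirect_program interface is only a few lines. Squid hands it one
request per line on stdin ("URL client/fqdn ident method") and reads back
either a rewritten URL or a blank line. Something like this sketch (the
back-end names are invented and the URL parsing is deliberately crude):

    #!/usr/bin/env python
    # rr_redirect.py -- round-robin redirector sketch for Squid
    import sys

    BACKENDS = ["web1.example.com", "web2.example.com", "web3.example.com"]
    n = 0

    while True:
        line = sys.stdin.readline()
        if not line:                         # Squid closed the pipe
            break
        fields = line.split()
        if not fields:
            sys.stdout.write("\n")           # blank reply = leave URL alone
            sys.stdout.flush()
            continue
        parts = fields[0].split("/", 3)      # ['http:', '', host, rest]
        if len(parts) >= 3 and parts[0] == "http:":
            parts[2] = BACKENDS[n % len(BACKENDS)]
            n = n + 1
        sys.stdout.write("/".join(parts) + "\n")
        sys.stdout.flush()                   # one unbuffered reply per request

Since Squid runs several redirector children, each child keeps its own
counter and the rotation is only approximately even across the pool; for
plain round robin that is close enough.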
 
At one point we were handling 4,000,000 hits a day through the proxy
server (roughly 46 requests per second on average), and if I remember
correctly we were running at no more than about 10-15% load. So I am not
sure that you would need a hierarchy of Squids to run a significant web
site.

Alvin Starr || voice: (416)585-9971
Interlink Connectivity || fax: (416)585-9974
alvin@iplink.net ||