Re: [squid-users] Squid Question

From: Joe Cooper <joe@dont-contact.us>
Date: Sun, 04 Nov 2001 23:51:39 -0600

Squid is not a logging daemon. It keeps logs as a side effect of its
actual purpose (web caching, proxying, and webserver acceleration). I
really doubt Squid is the most efficient way to handle your problem, and
it certainly won't be done with a single server. To handle 100Mbits of
throughput (even without any reliance on disk I/O for caching), you will
need at /least/ two quite large Squid boxes in addition to your origin
servers running Apache, plus the complexity of balancing or otherwise
distributing requests across the multiple Squid front ends.

If your problem is only one of logging, then Squid is definitely not
your ideal solution. Better log handling is. The rsync suggestion is
probably a very good one for your case...if 'always-on' log
availability is required, then fixing the NFS connection is your best
bet. NFS is not unstable by design (though it can certainly be insecure
as designed...but that can be addressed). So if your current system
works but is unreliable because of NFS instability, then get to work on
finding a workable NFS implementation. They do exist.
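
For what it's worth, a minimal sketch of the "fix NFS" route (the
hostnames and paths are made up, and the exact options vary by
implementation):

   # /etc/exports on each web server: export the log directory
   # read-only to the stats box only, with root squashed:
   /var/log/httpd  stats.example.com(ro,root_squash)

   # On the stats server, mount "hard" so a dropped connection
   # blocks and retries rather than handing back bad data:
   mount -o ro,hard,intr web1:/var/log/httpd /mnt/logs/web1

A hard mount won't make a flaky server fast, but it does keep the
analyzer from reading truncated files when the connection hiccups.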

Take it from someone who takes every opportunity to push Squid when it
is an appropriate solution. In this case, it is not. Adding one more
HTTP parsing and serving layer to an already complex system just to
alter the way logs are handled is not the Right Thing to do here.

Sagi Brody wrote:

> Brian,
> What do you think it would take, hardware-wise, to support Squid
> transparently logging 100Mbits' worth of traffic? The OS can handle up
> to a few hundred megabits without a problem, but of course that is a
> different matter.
>
> We have about 3 log files per site and can't combine them because of
> the way the analyzers work. So with a few thousand sites, that adds up.
> I do have other solutions besides the Squid route; however, the Squid
> idea would really keep things centralized and clean. Any ideas are
> greatly appreciated.
>
> Thanks,
>
> Sagi Brody
>
>
> On Sun, 4 Nov 2001, Brian wrote:
>
>
>>Squid runs as a single process, so it won't use that second processor
>>unless you run two of them. Our Celeron 366 systems do 10Mbit without a
>>problem -- 15 with some prodding. Therefore, my guess would be no.
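>>
>>If you did want both CPUs, two instances with separate configs would
>>do it -- something like (the filenames here are just examples):
>>
>>   squid -f /etc/squid-a.conf   # http_port 3128, its own cache_dir
>>   squid -f /etc/squid-b.conf   # http_port 3129, its own cache_dir
>>
>>with separate pid and log paths in each config.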
>>
>>If you have enough disk space on each end, I would suggest running an
>>rsync server on the stats server and have the web servers upload their
>>logs during a slow period.
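>>
>>Something like this on the stats box would do it (the module name,
>>paths, and addresses are only examples):
>>
>>   # /etc/rsyncd.conf on the stats server:
>>   [weblogs]
>>       path = /var/log/weblogs
>>       read only = false
>>       hosts allow = 10.0.0.0/24
>>
>>Then each web server pushes from cron during the quiet hours:
>>
>>   # /etc/crontab entry on each web server:
>>   30 4 * * * root rsync -az /var/log/httpd/ stats::weblogs/web1/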
>>
>>As for cutting FDs, add the vhost to the log format and log all of the
>>sites together. You would lose per-site log files moving to Squid, anyway.
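>>
>>In Apache that is just something like (the format name is arbitrary):
>>
>>   LogFormat "%v %h %l %u %t \"%r\" %>s %b" vcommon
>>   CustomLog logs/access_log vcommon
>>
>>where %v logs the canonical ServerName of the vhost that served each
>>request.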
>>
>> -- Brian
>>
>>On Saturday 03 November 2001 04:05 am, Sagi Brody wrote:
>>
>>>Hello,
>>> I'm looking to use Squid to ease the logging on my web servers.
>>>However, I've never used Squid before and would like to know if this is
>>>possible and how much of a load it will present. Currently, I'm NFS
>>>mounting all my webservers to a stats server which parses the log files.
>>>Because the NFS connections are not as stable as I'd like them to be, I'm
>>>looking to put a transparent machine running Squid in front of the
>>>webservers and have it do all the logging locally. This would reduce the
>>>NFS connections being made and leave less room for error. It would also
>>>save me 2 or 3 FDs per site on the servers, which also seems appealing.
>>>
>>>My question is, what sort of memory and CPU load would Squid put on
>>>this transparent server? I'm thinking of using a dual P3 800 with 1GB
>>>of RAM and a few SCSI HDs. Would this be enough to handle the logging
>>>of, say, 100Mbps of traffic? I'm NOT looking to do any caching at all. Any
>>>help is greatly appreciated.
>>>
>>>Thanks,
>>>
>>>Sagi Brody

-- 
Joe Cooper <joe@swelltech.com>
http://www.swelltech.com
Web Caching Appliances and Support