Re: Sun and Squid Cache

From: Joerg Moellenkamp <jm@dont-contact.us>
Date: Thu, 4 Feb 1999 12:07:30 -0000

From: Henrik Nordstrom <hno@hem.passagen.se>
To: Rob Merkwitza <support@octa4.net.au>
Cc: squid-users@ircache.net <squid-users@ircache.net>
Date: Thursday, 4 February 1999 03:26
Subject: Re: Sun and Squid Cache

>See the Squid FAQ regarding Solaris 2.6 reporting filesystems as full
>when it looks like they aren't.
>
>No, there is no Squid command to clean your cache. The quickest way is
>to use newfs on the cache partition (also needed to fix the Solaris
>problem, which is said to originate from bad filesystem parameters when
>filesystems are built on early patchlevels of 2.6).
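For what it's worth, the newfs approach above boils down to something like the following sketch. The device name, mount point, and ownership are assumptions for illustration only, not from Henrik's mail -- and note that newfs destroys everything on the partition:

```shell
# Hypothetical sketch of rebuilding a Squid cache partition with newfs.
# Device /dev/[r]dsk/c0t1d0s0 and mount point /cache are assumed examples.
# WARNING: newfs wipes all data on the partition.

squid -k shutdown                  # stop Squid cleanly first
umount /cache                      # unmount the cache partition
newfs /dev/rdsk/c0t1d0s0           # rebuild the filesystem from scratch
mount /dev/dsk/c0t1d0s0 /cache     # remount it
chown nobody:nobody /cache         # give it back to the Squid effective user
squid -z                           # recreate the cache directory structure
squid                              # start Squid again
```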

Hmm, after nearly one and a half months with an Ultra Enterprise 250
under Solaris 2.6, I think it's a bad idea to run Squid on a Sun ... at
least with Slowlaris 2.6. When cache_dir utilisation is well under
90%, the cache performs quite well under heavy load, with an HTTP
median service time of around 300 milliseconds; once utilisation
climbs above 90%, the HTTP median service time rises to as much as
3000 milliseconds ... a corresponding peak shows up in virtually all
SNMP variables that measure service times. At the moment we think UFS
is absolutely unusable for such a load ... perhaps Veritas performs
better under these conditions, but I don't want to spend 5000 German
marks just for a filesystem.
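The 90% knee described above is simple enough to watch for from a cron job. A minimal sketch -- the threshold comes from the observation above, but the function names and the idea of feeding it raw used/total byte counts are my assumptions (in practice you would parse cachemgr output or the SNMP variables mentioned):

```python
# Sketch of a cache_dir utilisation check. The 0.90 threshold is taken from
# the observed jump in median HTTP service time; all names are hypothetical.

def utilisation(used_kb: int, total_kb: int) -> float:
    """Return cache_dir utilisation as a fraction of total capacity."""
    return used_kb / total_kb

def service_time_at_risk(used_kb: int, total_kb: int,
                         threshold: float = 0.90) -> bool:
    """True once utilisation crosses the level where the median HTTP
    service time was observed to jump from ~300 ms to ~3000 ms."""
    return utilisation(used_kb, total_kb) >= threshold

# Example: a 23 GB cache_dir with 21 GB used is past the knee.
print(service_time_at_risk(21 * 1024 * 1024, 23 * 1024 * 1024))  # -> True
```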

The machine for this system is relatively fat ... one UltraSPARC
300 MHz processor, 1 gigabyte of main memory, and 23 gigabytes in a
RAID 5 connected via Ultra-Wide SCSI -- the RAID runs on a hardware
SCSI-to-SCSI controller with 64 MB of onboard cache, dedicated solely
to the cache directories -- so I don't believe the hardware is a
factor. (At the moment it peaks between 3000-3500 requests per minute
at the 5-minute median.)

And these effects are independent of the parameters used at newfs
time ... at the moment the UFS filesystem is optimized for space, but
the same effects occur with optimization for speed, plus the
additional "nice" effect described by Rob...

At the moment I am thinking about a cluster of 16 small Linux/Intel
based systems, clustered via a Foundry ServerIron, to cope with the
load estimated for the end of this year ... it would cost only 50000
German marks, and that's a heck of a lot better value than the Sun
(estimated price around 70000 German marks), which cannot cope with
our load even now ....

By the way: an idea for Squid 3.0 ( :-) ) -- a way to share the cache
dirs across different machines via Fibre Channel disks and a
serverless filesystem like GFS, to scale caching performance and to
get real fail-over solutions without losing the cache or resorting to
wild hacks like SCSI Y-cables.

bye
 joerg

-----
Joerg Moellenkamp                 phone: 0441-9988-8231
nordwest.net                      fax: 0441-9988-8205
Development / Systems Administration
Received on Thu Feb 04 1999 - 05:33:01 MST

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:44:26 MST