Re: [squid-users] cache level "fsck" for SQUID ?

From: Marc Elsen <marc.elsen@dont-contact.us>
Date: Wed, 06 Jun 2001 09:31:53 +0200

>Remove the swaplog (defaults to the top level of the cachedir) while
>squid is offline. Remove the -clean one as well. That will force squid
>to rebuild the cache state data.

>Downside: you will lose all freshness data about the cache. So your hit
>rates will be lower than usual for a short while.
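The removal Robin describes can be sketched as follows; the snippet runs against a scratch directory so it is safe to try, and the file names are assumptions (on a real cache, squid must be stopped first and the path must match the cache_dir in squid.conf):

```shell
# Sketch of the swap-log removal against a scratch directory.
# On a real system: stop squid first (squid -k shutdown) and set
# CACHE_DIR to the cache_dir from squid.conf -- both assumptions here.
CACHE_DIR=$(mktemp -d)
touch "$CACHE_DIR/swap.state" "$CACHE_DIR/swap.state.clean"

# Remove both the swap log and its -clean counterpart; on the next
# start squid rescans the cache_dir and rebuilds its index from
# the objects on disk.
rm -f "$CACHE_DIR/swap.state" "$CACHE_DIR/swap.state.clean"
```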

 Robin, thanks for your reply. I think the long-term issue
is that I have been bitten by Bugzilla bug no. 130 (a file
descriptor leak, observable as a small but steady growth of the
number of Store Disk files open in cachemgr.cgi).

Now the issue is that after a week of operation, squid thinks it is
still safe with respect to file descriptors, BUT my kernel runs out
of max open files (Red Hat 6.2).
Squid may then be fooled: it can, for instance, update swap.state
without any problem but no longer be able to write a particular swap
file (I have seen the corresponding "too many files open" errors in
cache.log).
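One way to watch this descriptor growth from outside squid is to count the entries in /proc/<pid>/fd and compare them with the per-process limit. The use of the current shell's own PID below is only for illustration; on a real system you would substitute squid's PID (e.g. from pidof squid):

```shell
# Count open file descriptors for a process via /proc and compare
# with the per-process limit reported by ulimit. Using $$ (this
# shell) is only for illustration -- substitute squid's PID on a
# real system.
PID=$$
FD_USED=$(ls "/proc/$PID/fd" | wc -l)
FD_LIMIT=$(ulimit -n)
echo "pid $PID: $FD_USED of $FD_LIMIT descriptors in use"
```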

In that case I usually have to reboot my system, which I have had to
do a number of times already, and in the long term this leads to an
increasing chance of TCP_SWAPFAIL misses.
So I don't know whether the squid code is safeguarded with respect
to 'atomic' updates of the swap environment, i.e. accessing
cache.swap and writing the swap file.

In this context I was wondering whether anyone knows if the 2.4
series kernels have a higher limit on max open files?
In 6.2 this value seems to be 4096.
I am thinking of upgrading to RH 7.1, also because of the native
support for ReiserFS.
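For reference, the kernel-wide ceiling can be read (and, as root, raised at runtime) through /proc on both the 2.2 and 2.4 series; the value 16384 below is only an example, not a recommendation:

```shell
# Read the kernel-wide open-file limit; on a stock Red Hat 6.2
# (2.2 kernel) this typically reports 4096. As root it can be
# raised at runtime, e.g.:
#   echo 16384 > /proc/sys/fs/file-max   # value is only an example
FILE_MAX=$(cat /proc/sys/fs/file-max)
echo "current file-max: $FILE_MAX"
```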

Marc.

 

-- 
 'Love is truth without any future.
 (M.E. 1997)
Received on Wed Jun 06 2001 - 01:32:24 MDT