Re: [squid-users] issue with one of Joe Cooper's modifications

From: Greg <squid@dont-contact.us>
Date: Sat, 24 Mar 2001 11:23:11 +1100

I killed Squid and restarted it yesterday.

Here are the stats from the cachemgr page:
Median Service Times (seconds) 5 min 60 min:
 HTTP Requests (All): 0.16775 0.22004
 Cache Misses: 0.52331 0.49576
 Cache Hits: 0.02069 0.01847
 Near Hits: 0.52331 0.49576
 Not-Modified Replies: 0.01164 0.01035
 DNS Lookups: 0.23291 0.23291
 ICP Queries: 0.00000 0.00000
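
(For reference, that table is the "Median Service Times" section of the
cache manager info page; a rough command-line sketch, assuming
squidclient is installed and Squid is on the default port 3128:

    squidclient -p 3128 mgr:info | grep -A 8 'Median Service Times'
)
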
To me the DNS lookups already seem pretty fast (I use this proxy from
the PC I am writing this email on). The only thing I noticed is that
when I looked at the dnsserver processes, all of them had been used,
but each had handled hardly any requests.

This proxy handles a 1.5 meg connection to the Internet.

If you think it is DNS, then I'll have to run named on the proxy
server or something similar.
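
If it comes to that, a rough sketch of what I'd try, assuming a
caching-only named running on the proxy box itself (the hostname below
is just an example):

    # /etc/resolv.conf on the proxy, pointing at the local named
    nameserver 127.0.0.1

    # sanity check that the local server answers, and how fast
    time host www.squid-cache.org 127.0.0.1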

----- Original Message -----
From: "Joe Cooper" <joe@swelltech.com>
To: "Greg" <squid@australis.com.au>
Cc: "squid" <squid-users@squid-cache.org>
Sent: Saturday, March 24, 2001 10:08 AM
Subject: Re: [squid-users] issue with one of Joe Cooper's modifications

> Hmmm... 26GB is close to the limit for a 512MB machine. But not over
> it. You should be fine wrt memory.
>
> What bandwidth are you supporting with this box? Are you overloading
> the box or perhaps your DNS, such that requests in the queue grow too
> large?
>
> 2.2.14 has no memory bugs that I'm aware of (we have had units in the
> field in the past running this version with no problems). But 2.2.14
> does have security problems, and should probably be upgraded (Red Hat
> has RPMs on their site--just follow their instructions for updating a
> kernel).
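>
> Roughly, the Red Hat procedure looks like the following (the package
> name is only illustrative; use whatever the current errata kernel is):
>
>    # install alongside the old kernel instead of upgrading over it
>    rpm -ivh kernel-2.2.x-y.i386.rpm   # illustrative filename
>    # add an entry for the new image to /etc/lilo.conf, then:
>    lilo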
>
> What does your Squid process size look like when under load? I would
> expect it to be about 300MB. Where is the rest of it going on your
> system?
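>
> Something like this will show it, assuming a Linux procps ps (rss and
> vsz are the resident and virtual sizes in KB):
>
>    ps -eo pid,rss,vsz,comm | grep squid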
>
> Greg wrote:
>
> > Just so you know, the machine:
> > 512MB RAM, Linux 2.2.14-5.0
> > Pentium II 333
> > Ultra160 30GB SCSI drive
> > handles about 500 to 1000 modem users and 6 LAN users
> >
> > Cache size is 26 gig.
> > 30 x 10 = 300
> >
> > So that means I have 212 meg left, or close to that figure. Now I know
> > Linux will use up a bit of memory, but surely I wouldn't need more
> > than 512 meg of RAM?
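> >
> > (Spelling the arithmetic out: 30GB x 10MB per GB = 300MB of index,
> > and 512MB - 300MB = 212MB. With the actual 26GB cache_dir it would
> > be 26 x 10 = 260MB, leaving roughly 252MB.)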
> > Thanks
> > Greg
> >
> > I noticed you use 2.2.16. Is 2.2.16 more stable and better than
> > 2.2.14-5.0?
> > Thanks
> >
> >
> > ----- Original Message -----
> > From: "Joe Cooper" <joe@swelltech.com>
> > To: "Greg" <squid@australis.com.au>
> > Cc: "squid" <squid-users@squid-cache.org>
> > Sent: Friday, March 23, 2001 6:23 PM
> > Subject: Re: [squid-users] issue with one of Joe Cooper's modifications
> >
> >
> >
> >> Hi Greg,
> >>
> >> Using my instructions has nothing to do with having too little memory
> >> in your machine to handle a cache that size. 73 L1 directories??? Are
> >> you really using a cache_dir that large?
> >>
> >> You are simply filling up your RAM with an in-core index of the cache
> >> contents. This is normal behavior--Squid keeps a record of every object
> >> in the store in memory. If your store is gigantic (as yours clearly
> >> is), and your memory is not gigantic to match, you will run out of
> >> memory. There is no leak, and there is no flaw in the malloc used by
> >> default in Red Hat 6.2.
> >>
> >> Lower your cache_dirs to something sensible (1GB for every 10MB of RAM
> >> is a safe number for a standard Squid compile--a little more RAM is
> >> needed for an async i/o compile). This too, is covered in my
> >> documentation for tuning Squid on Linux.
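> >>
> >> As a sketch, on this 512MB box that rule is why 26GB is about the
> >> ceiling, so a config with headroom would be on the order of the
> >> following (path and L1/L2 values are only illustrative, and older
> >> Squid 2.2 installs omit the fs-type argument):
> >>
> >>    # squid.conf: ~20GB cache, leaving RAM for Squid itself + the OS
> >>    cache_dir ufs /cache 20000 32 256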
> >>
> >> Hope this helps.
> >>
> >> Greg wrote:
> >>
> >>
> >>> Hello.
> >>>
> >>> I changed the first-level cache directories from the default of 14,
> >>> used his formula, and got 73. Anyway, getting to the point: basically
> >>> what happens is that after about 3 to 4 weeks it uses up all the
> >>> memory and hits swap space. I have tried rebooting, no difference,
> >>> and using the kill command (I had httpd and cron running, thought
> >>> they were bad, so I killed them); that still made no difference. So I
> >>> am now using the alternative malloc (configure --enable-dlmalloc)
> >>> and seeing if there is any other difference. The only thing I can't
> >>> do is build a custom kernel (I think there are compiler problems in
> >>> my version of Red Hat 6.2).
> >>>
> >>> thanks
> >>>
> >>> Greg
> >>
> >> --
> >> Joe Cooper <joe@swelltech.com>
> >> Affordable Web Caching Proxy Appliances
> >> http://www.swelltech.com
> >>
>
> --
> Joe Cooper <joe@swelltech.com>
> Affordable Web Caching Proxy Appliances
> http://www.swelltech.com
>
Received on Fri Mar 23 2001 - 17:23:13 MST
