Re: [squid-users] Ramdisks

From: Joe Cooper <joe@dont-contact.us>
Date: Fri, 23 Nov 2001 02:43:13 -0600

Sorry for the long delay in replying to this; the holidays slowed down my mail reading.

This works up to a certain point; beyond that you get severely diminishing
returns and eventually system instability.

Using a huge cache_mem has at least two flaws that I'm aware of: one is
an issue in Squid, the other is an issue of memory usage on my current
platform (though it's probably a problem on any OS).

The Squid problem is that cache_mem can be consumed in a short time by a
couple of very large objects, flushing out possibly extremely popular
small objects. Squid has no separate notion of popular and unpopular
memory objects--they are all lumped together in the same buffer space.
You can evict them based on your favorite replacement policy, of course,
and you can define the largest object to keep in memory. But this doesn't
prevent Squid from using the space for unpopular large objects in transit.
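
To make that concrete, the relevant squid.conf knobs look roughly like
this (the values here are only illustrative, not a recommendation):

    # keep the memory cache for small, hot objects only
    cache_mem 256 MB
    maximum_object_size_in_memory 64 KB
    memory_replacement_policy lru

Even with a tight maximum_object_size_in_memory, large objects still
pass through cache_mem while they are in transit to clients, which is
exactly the problem described above.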

The OS problem is that when you have a huge process that grows over time
(under Linux anyway--I don't know how other allocators respond to this,
but I can't imagine it would be much different), the memory becomes
fragmented. Your giant process will soon find itself in a position where
it has to use swap to allocate a contiguous block of memory of some size.
There may be plenty of free memory--but it is severely fragmented because
of other objects on the system, buffer cache usage, in-transit file and
network data, etc. This happened when I experimented with it on my 2GB
box (with 512MB set aside for a RAM disk) with a cache_mem as low as
256 MB. If Squid were hacked to pre-allocate its entire memory
requirement at startup, this could probably be prevented, but that is
generally not great design for other reasons.

Using tmpfs under Linux hits this same problem. It leads to too much
fragmentation in the memory pool and causes Squid to drop out into swap,
or perhaps some of the tmpfs pages do--either way, the result is the
same: suddenly dog-slow responses.
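
For reference, the tmpfs setup I'm talking about looks something like
this (the mount point and sizes are just examples):

    # mount a 512MB tmpfs and put a cache_dir on it
    mount -t tmpfs -o size=512m tmpfs /cache-ram
    # then in squid.conf (leave headroom below the tmpfs size):
    #   cache_dir ufs /cache-ram 400 16 256

Once memory gets fragmented, the same swap behavior described above
kicks in.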

Brian wrote:

> Hmm... what about setting cache_mem very large and
> maximum_object_size == maximum_object_size_in_memory ?
>
> With a normal cache_dir, that should hold all recent objects in memory and
> automatically save them to disk in case of a shutdown. The only problems
> I can see there are
> 1) The system may not handle a 1.5GB process particularly well
> 2) Disk objects are not pulled back into memory, so files which only exist
> on disk would remain that way (even cached, that adds a couple ms of
> latency).
>
> -- Brian
>
> On Wednesday 21 November 2001 06:47 am, laurence@gazelle.net wrote:
>
>>Hi All!
>>
>>I was just pondering on how to make a superfast squid, and I wondered to
>>myself: Would it be possible to build the squid cache on a _large_
>>ramdisk (ie: 1-2GB), and then copy all of the cache information to the
>>hard disk if the program needed shutting down. Would this work? I can
>>understand it wouldn't be quite as stable, but is it possible? I'm
>>_sure_ someone's tried this before!
>>
>>Thanks!
>>
>>Laurence J Praties tel: 0871 871 0222
>>Systems Administrator fax: 0871 871 0223
>>Gazelle Informatics Ltd laurence.praties@gazelle.net

-- 
Joe Cooper <joe@swelltech.com>
http://www.swelltech.com
Web Caching Appliances and Support
Received on Fri Nov 23 2001 - 01:40:24 MST
