Re: [squid-users] Set up a cluster of 10 squid servers using ~170GB of memory and no disk

From: Eliezer Croitoru <eliezer_at_ngtech.co.il>
Date: Thu, 03 Oct 2013 03:04:19 +0300

Hey Jérôme,

I think it is good to understand the size of the cache array first.
As I suggested before, there is a limit to what a single squid instance
can handle, and once you intercept connections you are in a pinch!
The situation you are talking about is also a bit different for
reverse proxies versus forward proxies.
You also need to state the purpose of the proxies: plain intercept or
TPROXY.

All of the above matters because in many cases it is better to use
a routing machine to do the balancing rather than a CARP array of
clusters.

A routing system can survive a lot more traffic, simply because the
kernel has native SMP support by design.
Linux also has an IP route cache that makes routing decisions smoother
than on smaller machines that cannot do the same.

So, for example:
Router 1:  192.168.100.254/24 + 192.168.101.254/24 + external/subnet
machine 1: 192.168.100.1/24 + 192.168.101.1/24
machine 2: 192.168.100.2/24 + 192.168.101.2/24
etc.
A packet comes from the LAN and hits Router 1 (192.168.100.254)
towards 122.122.122.200:80.
The router has a load-balancing route decision for the next hop and
selects one of the 10 proxy IPs (not machines, but IPs).
The packet is routed to the next hop, which is machine 1.
Machine 1 then intercepts the port 80 packet in place (iptables on the
router can mark packets with dst port 80 only, flagging them for a
proxy route) and redirects it into port 3129.
The proxy's default route (0.0.0.0) is through 192.168.101.254, which
on the router is marked as *outgoing* and routed via the external
interface towards the destination machine, while the connection is
established between the client and the proxy.
The return session from the outside is marked and routed back towards
the proxy based on the route cache.
HERE is where RP (reverse-path) filtering plays the good/bad guy :D
Now every packet marked, say, 3000 goes towards a proxy, every 4000
mark goes towards the external interface, and 5000 goes towards the LAN.
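As a sketch, the router-side marking and policy routing could look
something like this (interface names, the mark value, and the table
number are my assumptions, not from the setup above):

```shell
# On the router: eth0=LAN, eth1=proxy subnet (hypothetical names).

# Mark LAN traffic destined to port 80 (mark 3000 = "towards a proxy")
iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j MARK --set-mark 3000

# Policy rule: packets carrying mark 3000 consult routing table 100
ip rule add fwmark 3000 table 100

# Table 100: next hop is one of the proxy IPs (static here; a real
# balancer would spread flows across 192.168.100.1 .. 192.168.100.10)
ip route add default via 192.168.100.1 table 100

# On each proxy machine: pull intercepted port 80 into squid's 3129
iptables -t nat -A PREROUTING -p tcp --dport 80 \
    -j REDIRECT --to-ports 3129
```

These commands need root and a matching squid `http_port 3129 intercept`
on the proxy side, so treat them as a starting point, not a recipe.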
In the meantime the proxies use HTCP to make sure each and every one of
them has (or doesn't have) the relevant object. HTCP matches on the
HTTP URL plus headers, unlike ICP which only matches the URL, so for
specific services HTCP is the better and faster choice.
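In squid.conf, sibling peering over HTCP might look like the sketch
below (the addresses are from the example network above; 4827 is the
default HTCP port, and 3128 the assumed http_port):

```
# squid.conf on proxy1 -- one cache_peer line per sibling
cache_peer 192.168.100.2 sibling 3128 4827 htcp proxy-only
cache_peer 192.168.100.3 sibling 3128 4827 htcp proxy-only
# ... and so on for the rest of the cluster
```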
OK, so this is almost the big picture of this small network.
Now the only limit is that each squid is supposed to serve about
150 MB/s, which is about 1,228,800 Kbps (~1.2 Gbps) if I am not wrong
about it.
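The conversion is easy to check (1 MB = 1024 KB, 1 byte = 8 bits):

```shell
# 150 MB/s -> Kbps: 150 * 1024 KB * 8 bits
kbps=$((150 * 1024 * 8))
echo "${kbps} Kbps"    # prints "1228800 Kbps", i.e. ~1.2 Gbps
```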
OK, so now there is supposed to be a limit, which is a bit odd.
So how much RAM is needed for the routing system?? It depends on the
network traffic and load...
Since every OS (Linux) comes with hard and soft limits that can be
tuned, it can be tested.
Once we have a big RAM-only system in hand, we can use the 10 machines
as a balanced baseline and test how far squid and Linux stretch: from
10 machines each taking about 100 MB/s of the load, towards 200 MB/s
per machine in a cluster of 5.
Just a reminder that haproxy can handle 100,000 requests per second on
a Core 2 Quad or Core 2 Duo machine in a testing environment.
Squid is a much more complex piece of software than haproxy, taking
into account the tons of traffic hitting the CPU and RAM each and
every second.

Prepare a DNS CACHE for this network... (squid has its own DNS
resolution caching, but it is only as good as an HTTP proxy needs,
not a full resolver like BIND).
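A minimal sketch of such a dedicated cache, assuming dnsmasq (BIND or
unbound would do the same job; the listen address and upstream are
placeholders):

```
# /etc/dnsmasq.conf -- forwarding cache for the proxy LAN
listen-address=192.168.100.53
no-resolv                # don't read upstreams from /etc/resolv.conf
server=8.8.8.8           # upstream resolver (pick your own)
cache-size=10000         # entries kept in RAM
```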

So now the network is about:
1Gb x4 on the core router.
1Gb x2 cable between each proxy server and the LAN switch (x10).
1Gb x2 cable for the DNS cache server.
A switch that can support this level of traffic, which I think a simple
16-port 1Gb Planet switch can lift (better is better). [Is there a way
to build a switch for all of that in less than $20k?]
Once the lights are blinking you know there is dramatic network
traffic.

The no-DISK part is good for the very specific purpose of TESTING.
A cache that uses more than 2GB will probably need fast 15k SAS disks,
and no, don't compare SSD to SAS: a spinning disk at 15,000 rounds per
minute is very fast.
You can instead do a test at home: use a small but very, very, very
FAST DOK (USB flash drive) to write a 4GB Fedora ISO, and compare that
to a SAS drive.
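A rough way to run that comparison with dd (the target path is a
placeholder; point it at a file on the mounted DOK or SAS filesystem):

```shell
# Sequential write test; conv=fdatasync flushes to the device so the
# reported speed reflects the disk rather than the page cache.
target=${TARGET:-/tmp/ddtest.img}
dd if=/dev/zero of="$target" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$target"
```

Run it once per device and compare the MB/s figures dd prints.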

A sibling proxy setup is better for this specific case, but remember to
add one more interface and one more switch so the proxy-to-proxy HTCP
traffic rides its own network (we don't want HTTP headers to fly into
the wrong hands of whoever comes to work on the network).
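To the actual question, the memory-only side is mostly a squid.conf
exercise. A sketch, with values that are assumptions to tune rather
than recommendations (note that with no cache_dir line at all, squid
3.x keeps no disk cache and serves hits from cache_mem only):

```
# squid.conf sketch: RAM-only cache, ~170GB per box
cache_mem 170000 MB
maximum_object_size_in_memory 10 MB
memory_replacement_policy heap GDSF
# no cache_dir line -> no disk cache

# siblings over HTCP (one line per peer, addresses assumed)
cache_peer 192.168.100.2 sibling 3128 4827 htcp proxy-only
```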

I am sure someone will find this thread through www.squid-cache.org and
a simple Google search; he can feel free to ask whatever he wants, and
there will be someone who will read it.

Eliezer

On 10/02/2013 12:02 PM, Jérôme Loyet wrote:
> Hello,
>
> I'm facing a particular situation. I have to set-up a squid cluster on
> 10 server. Each server has a lot of RAM (192GB).
>
> Is it possible and effective to setup squid to use only memory for
> caching (about 170GB) ? What directive should be tweaked ? (cache_mem,
> cache_replacement_policy, maximum_object_size_in_memory, ...). The
> cache will store object from several KB (pictures) up to 10MB (binary
> chunks of data).
>
> With 10 sibling cache squid instances, what should I use as type of
> cache peering ? (10 siblings or a multicast ICP cluster, other ?)
>
> I've searched a little bit on google but didn't find anything relevant.
>
> Thanks to you all,
> ++ Jerome
>
Received on Thu Oct 03 2013 - 00:04:34 MDT

This archive was generated by hypermail 2.2.0 : Thu Oct 03 2013 - 12:00:06 MDT