Re: ICAP connections under heavy loads

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Sun, 09 Sep 2012 21:23:36 +1200

On 9/09/2012 8:30 p.m., Amos Jeffries wrote:
> On 9/09/2012 5:03 a.m., Alex Rousskov wrote:
>> On 09/07/2012 09:51 PM, Amos Jeffries wrote:
>>> On 8/09/2012 3:17 a.m., Alex Rousskov wrote:
>>>> On 09/07/2012 08:33 AM, Alexander Komyagin wrote:
>>>>>>> However, as I stated earlier, the comm.cc problem (actually
>>>>>>> semantics
>>>>>>> problem) persists. I think it should be documented that second and
>>>>>>> subsequent calls to comm_connect_addr() do not guarantee connection
>>>>>>> establishment unless there was a correct select() notification.
>>> Which to me means we cannot rely on it at all for the timeout callback
>>> cases
>> Moreover, there is no need to call comm_connect_addr() in timeout cases.
>> The connection has timed out and should be closed. End of story. We do
>> not need to try extra hard and check whether the connection was
>> established at the OS level just when the timeout handler was scheduled;
>> we will not be able to detect some such cases anyway and all such cases
>> will be rare with any reasonable timeout.
>
> We are agreed.
>
> IIRC the race is made worse by our internal code handling the timeout
> event (async) by scheduling a call (async), so doubling the async queue
> delay. This shows up worst on heavily loaded proxies.

Small correction: I meant the InProgressConnectRetry call doing that on
the select() notification side of the race.
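
For readers following the comm.cc discussion above, here is a minimal,
generic POSIX sketch (not Squid's actual comm_connect_addr() or
InProgressConnectRetry code; the function names are hypothetical) of the
semantics in question: the result of a non-blocking connect() is only
reliable after select()/poll() reports the socket writable, which is why a
timeout handler is better off simply closing the descriptor than
re-probing it.

    #include <sys/socket.h>
    #include <unistd.h>

    // Call this only after select()/poll() has reported the socket
    // writable: at that point the non-blocking connect() has finished
    // (one way or the other) and SO_ERROR holds the definitive result.
    bool connectSucceeded(int fd)
    {
        int soError = 0;
        socklen_t len = sizeof(soError);
        if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &soError, &len) != 0)
            return false;    // cannot even query the socket
        return soError == 0; // zero means the connection is established
    }

    // Timeout handler in the spirit of the thread: do not re-probe the
    // socket, just close it. Checking again here races with the pending
    // writability notification and cannot reliably distinguish "still
    // connecting" from "connected just now".
    void onConnectTimeout(int fd)
    {
        close(fd);
        // ... then run whatever error/cleanup callback was registered
    }

The doubled async-queue delay mentioned above is orthogonal to this
sketch: the timeout is itself an async event that then schedules another
async call, so a loaded queue adds its latency twice before the close
actually happens.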

Amos