[Spce-user] kamailio-proxy.log error - mysql lost connection

Matthew Ogden matthew at tenacit.net
Tue May 28 08:04:08 EDT 2013


Why does sipwise crash, though, if it's waiting for I/O? Surely call setup
and teardown should just take longer to initiate?



From: spce-user-bounces at lists.sipwise.com [mailto:spce-user-bounces at lists.sipwise.com] On Behalf Of Kevin Masse
Sent: 28 May 2013 01:21 PM
To: Martin Wong; Jeremie Chism
Cc: <Unnamed>
Subject: Re: [Spce-user] kamailio-proxy.log error - mysql lost connection



Good morning. SSDs would not work well in this environment. SSDs are
fast, but the volume of rewrites, I/O, and other issues make them a poor
fit. The burnout rate on SSDs is much higher than even on SATA drives. The
true answer here is SAS drives at 15K RPM: they have the longevity and the
speed to avoid issues.



As mentioned earlier with Jeremie, we have learned the hard way about
hardware choices.

Your best configuration would be SAS drives in RAID5 with one hot
spare/swap drive and 16GB of RAM or more.



I hope this helps.



Kevin







From: spce-user-bounces at lists.sipwise.com [mailto:spce-user-bounces at lists.sipwise.com] On Behalf Of Martin Wong
Sent: Monday, May 27, 2013 11:20 PM
To: Jeremie Chism
Cc: <Unnamed>
Subject: Re: [Spce-user] kamailio-proxy.log error - mysql lost connection



Hi guys,



Do you think SSDs would cut it here? I'd like your views on write speeds
for SSD vs. 15K SAS.
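
For a rough, hands-on comparison on your own hardware, a small fsync-heavy
write loop is closer to MySQL's commit pattern than a streaming copy. The
sketch below is only an illustration of that idea; the test path, block size
and write count are arbitrary assumptions, not anything sipwise ships.

    #!/usr/bin/env python
    # Crude benchmark sketch: time small synchronous writes (fsync after each
    # block), which is closer to MySQL's commit pattern than a streaming write.
    # The path, block size and write count below are arbitrary assumptions.
    import os
    import time

    def synced_write_rate(path, block=4096, count=500):
        """Return the number of fsync'd writes per second to 'path'."""
        buf = b"\0" * block
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
        start = time.time()
        try:
            for _ in range(count):
                os.write(fd, buf)
                os.fsync(fd)       # force each block to stable storage
        finally:
            os.close(fd)
            os.unlink(path)
        return count / (time.time() - start)

    if __name__ == "__main__":
        # Point this at the volume that holds /var/lib/mysql (an assumption).
        print("synced writes/sec: %.0f" % synced_write_rate("/var/lib/mysql/.fsync_test"))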



Thanks

On Tue, May 28, 2013 at 11:36 AM, Martin Wong <martin.wong at binaryelements.com.au> wrote:

Yup. Thanks for the advice and I agree.



When there's a subscriber dialing heavily and I "accidentally" access the
CDRs on the web interface, the CPU spikes and everything dies.



MySQL seems to be the process with the highest CPU usage at all times.



I'll be putting in a physical machine with SAS drives ASAP.



Thanks and appreciate your quick help.



On Tue, May 28, 2013 at 11:22 AM, Jeremie Chism <jchism2 at gmail.com> wrote:

CPU and RAM usage stay low. If you see the load spiking or staying high,
get ready for a crash. I/O wait is the biggest issue to watch on sipwise. I
think Stephen and Kevin would back me up on that. 15K SAS drives have fixed
any problems we were having.
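
If you want to keep an eye on that figure over time, the %wa number that
'top' shows can be read straight from /proc/stat. Below is a minimal Python
sketch of that calculation, just an illustration; the one-second interval
and the 20% warning threshold are arbitrary assumptions.

    #!/usr/bin/env python
    # Minimal sketch: compute the %wa (I/O wait) figure that 'top' shows by
    # sampling the aggregate cpu counters in /proc/stat twice. Linux only;
    # the 1-second interval and 20% warning threshold are arbitrary.
    import time

    def read_cpu_times():
        """Return the aggregate 'cpu' counters from /proc/stat as integers."""
        with open("/proc/stat") as f:
            for line in f:
                if line.startswith("cpu "):
                    return [int(x) for x in line.split()[1:]]
        raise RuntimeError("no aggregate cpu line in /proc/stat")

    def iowait_percent(interval=1.0):
        """Share of CPU time spent in iowait over the given interval, in %."""
        before = read_cpu_times()
        time.sleep(interval)
        after = read_cpu_times()
        deltas = [b - a for a, b in zip(before, after)]
        total = sum(deltas)
        # /proc/stat field order: user nice system idle iowait irq softirq ...
        return 100.0 * deltas[4] / total if total else 0.0

    if __name__ == "__main__":
        while True:   # stop with Ctrl-C
            wa = iowait_percent(1.0)
            warn = "  <-- high, expect trouble" if wa > 20.0 else ""
            print("%%wa: %5.1f%s" % (wa, warn))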



On Monday, May 27, 2013, Martin Wong wrote:

Hi Jeremie, thanks for the info.



What RAID config do you have on the SAS drives?



I'll probably take your advice on the drives, considering it's currently a
VM at a cloud provider...



Anything else, like CPU or RAM, that I need to be concerned with?



Thanks

On Tue, May 28, 2013 at 10:32 AM, Jeremie Chism <jchism2 at gmail.com> wrote:

I/O wait kills sipwise. Just speaking from experience. If you have dialer
traffic and you are using SATA or other slow drives, it's just a matter of
time before it breaks. We aren't using a dialer, and we use 15K SAS drives
only.

Sent from my iPhone


On May 27, 2013, at 8:31 AM, Andrew Pogrebennyk <apogrebennyk at sipwise.com>
wrote:

> On 05/27/2013 03:28 PM, Martin Wong wrote:
>> Nothing in the mysqld.err that stands out. After a reboot, the issue
>> goes away.
>>
>> Not sure why.
>>
>> There's dialer traffic making lots of calls, which is probably the
>> cause. Not that the server load is very high... the CPU is quite low.
>>
>> Do you think it's because it's a virtual machine?
>
> That could be possible, but can't say for sure without knowing the
> underlying hardware. What kind of HDD/RAID does that machine have?
> Do you see a high %wa (I/O wait) percentage in 'top'? What is the disk
> utilization reported in 'iostat -dx 1' when that customer is dialing?
>
> HTH.
> Andrew
>
>
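
On a box without the sysstat tools, the %util column that 'iostat -dx 1'
reports can be approximated from /proc/diskstats (field 10 after the device
name is milliseconds spent with I/O in flight). The Python sketch below is
only an illustration of that calculation; the device name is an assumption
and needs to match your data disk.

    #!/usr/bin/env python
    # Sketch: approximate the %util column of 'iostat -dx 1' straight from
    # /proc/diskstats. Field 10 after the device name is "time spent doing
    # I/Os" in milliseconds, so util = delta(io_time) / elapsed time.
    import time

    def io_time_ms(device):
        """Milliseconds the block device has spent with I/O in flight."""
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    return int(fields[12])
        raise RuntimeError("device %s not found in /proc/diskstats" % device)

    def util_percent(device, interval=1.0):
        """Approximate iostat's %util for one device over 'interval' seconds."""
        before = io_time_ms(device)
        time.sleep(interval)
        after = io_time_ms(device)
        return min(100.0, 100.0 * (after - before) / (interval * 1000.0))

    if __name__ == "__main__":
        dev = "sda"   # assumption: change to the disk that holds /var/lib/mysql
        while True:   # stop with Ctrl-C
            print("%s %%util: %5.1f" % (dev, util_percent(dev)))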






-- 
Jeremie Chism
Triton Communications