[Spce-user] virtualizing sip:provider CE

Andreas Granig agranig at sipwise.com
Tue May 17 10:31:35 EDT 2016


Hi,

My latest performance tests on a Dell R320 (4 cores @2.2GHz), with a
pair of PRO instances as the only VMs on that host, each with 1 vCPU and
8GB RAM, show that we top out at around 15 regs/sec and 10 calls/sec,
with the CPU being 100% utilized. IO is not a problem in this setup;
iowait is pretty much 0, with disk utilization peaking at ~50%.
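
For reference, iowait and CPU utilization like the above can be checked
with tools like iostat or vmstat; as a minimal sketch (not part of the
original test run), you can also sample the same counters straight from
/proc/stat on Linux:

    import time

    def cpu_times():
        # first line of /proc/stat: cpu user nice system idle iowait irq softirq steal ...
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    before = cpu_times()
    time.sleep(5)                      # sample over a 5-second window
    after = cpu_times()
    delta = [b - a for a, b in zip(before, after)]
    total = sum(delta)
    idle, iowait = delta[3], delta[4]  # field order per proc(5)
    print("iowait: %.1f%%" % (100.0 * iowait / total))
    print("busy:   %.1f%%" % (100.0 * (total - idle - iowait) / total))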

Without any calls, it can handle somewhere around 80 regs/sec, but once
you add some calls, that number degrades quite quickly.

Whether or not that's still sufficient depends on the number of
subscribers you plan to put onto one VM and their typical registration
expiration time. All in all it could still easily handle ~10k
subscribers with a 1-hour re-registration interval, where you'd end up
with ~1800 concurrent calls at 10 calls/sec. Not too bad for 1 vCPU.
Your mileage might vary though.
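
To make the arithmetic explicit, here's a quick back-of-the-envelope
sketch in Python; the 180-second average call duration is not something
we measured, it's just what ~1800 concurrent calls at 10 calls/sec
implies (Little's law, L = lambda * W):

    subscribers = 10000
    expiry_s = 3600                    # 1-hour re-registration interval
    regs_per_sec = subscribers / float(expiry_s)
    print("registration load: %.1f regs/sec" % regs_per_sec)  # ~2.8, well below 15

    cps = 10                           # sustained call setup rate
    avg_call_s = 180                   # assumed average call duration
    print("concurrent calls: %d" % (cps * avg_call_s))        # 1800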

As a comparison, on the same server on bare metal (4 cores @2.2GHz,
32GB RAM) you can do 250 regs/sec with 50 calls/sec without issues.

Andreas



On 05/17/2016 01:22 PM, Alex Lutay wrote:
> Dear Stephen,
> 
> This is super to hear that you trust Sipwise NGCP as a foundation for
> your own company! In short, there are no strict limitations on running
> in a virtual environment, although there are some known minor issues,
> like sems eating ~5% CPU when doing nothing (polling for new SIP
> packets), etc.
> 
> Of course, the hardware and software you choose for virtualisation are
> very important. IO speed and latency are always a problem in any
> virtual environment. Also, VM neighbours sitting on the same hardware
> can starve you of IO at the most unexpected moment. So, be careful with
> sharing hardware with other VMs; NGCP relies on IO a lot.
> 
> For high performance and high availability, we recommend you try our
> PRO solution. It guarantees a rate of 50 cps while providing high
> availability for already established calls. Feel free to contact
> sales at sipwise.com for more information.
> 
> At the same time, please share your experience with SPCE in a virtual
> environment. We see good theoretical possibilities there.
> 
> Thank you!
> 
> On 05/17/2016 05:20 AM, Stephen Donovan wrote:
>> I’ve been down this road before with limited success… however, I’m now
>> in a position where I have a lot more resources at my disposal to do
>> things right.  Is it still recommended that I run spce on bare metal,
>> or is virtualization an option?  Any bottlenecks I should be looking
>> at?  Storage latency?  What about running on an array of SSDs vs
>> spinning disks… my main storage system is consumer-grade 7200rpm
>> drives.  I can put together a set of 15k rpm SAS drives or SSDs
>> easily.  My big goal is high availability.  I have an opportunity to
>> set up a test environment that can easily be moved between bare metal
>> and VM for testing.
>>
>> Some (fuzzy) memories from a few years ago:
>>
>> SPCE seemed to do okay running as a VM on XenServer until we reached a
>> certain threshold of concurrent calls; I don’t remember that number,
>> but it wasn’t very big.  It seemed that IO wait was climbing at times,
>> causing call processing issues.  The storage system in question was a
>> ZFS RAID-Z2 array comprised of 7200rpm desktop-class drives that were
>> far from the fastest in the world.  It was at that point that I
>> decided to move back to bare metal and not look back.
>>
>> Now that I have my own company, I’m again looking at virtualization as
>> an option for high(er) availability, so that a single hardware failure
>> will not take the infrastructure down.
>>
>> How many concurrent calls and/or calls per second could I expect to
>> process on a given amount of hardware?
>>
>> The physical hosts in question will be Dell R610 boxes with 2x Xeon
>> X5550 and 32GB RAM.  I plan to run only voice servers on this
>> platform; I’ll have another resource pool for other workloads.
>>
>> Since my last experience, XenServer has improved greatly, as I’m
>> currently seeing while running most of my other servers on it.
>>
>> If it’s not a good idea, tell me so and I’ll go another way; I’m
>> looking for experiences.
> 
> 


