[Spce-user] Rtpengine hardware sizing question

John Nash john.nash778 at gmail.com
Thu May 26 01:02:35 EDT 2016


The kernel module was not running! But there was no warning message either. I
was running rtpengine without table=0. I was under the impression that
table=0 was the default setting, but in fact, unless you define the table
parameter, rtpengine will not try to use the kernel module.
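
For anyone hitting the same issue, here is a sketch of the setup that got
kernel forwarding working for me. Treat it as an illustration, not a recipe:
config file location, module name and rule syntax may differ per version, so
check the rtpengine README for your release.

```
# /etc/rtpengine/rtpengine.conf (fragment)
[rtpengine]
table = 0          # use kernel forwarding table 0 instead of userspace-only

# Load the kernel module and divert RTP packets to it
# (verify exact names against your version's documentation):
modprobe xt_RTPENGINE
iptables -I INPUT -p udp -j RTPENGINE --id 0

# After rtpengine starts, kernel-forwarded streams should appear here:
cat /proc/rtpengine/0/list
```

If /proc/rtpengine does not exist at all, the module never loaded and
everything is being relayed in userspace.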

On Tue, May 24, 2016 at 9:00 PM, John Nash <john.nash778 at gmail.com> wrote:

> I am running with log level = 2, and in /var/log/messages I do not see any
> kernel warning. I will try to run it again with a higher log level and see
> what I get.
> My OS is CentOS (Linux centos 2.6.32-573.7.1.el6.x86_64). My LAN card is
> "Intel Corporation 82574L Gigabit Network Connection"
>
> What other things should I check?
>
>
> On Tue, May 24, 2016 at 6:16 PM, Andreas Granig <agranig at sipwise.com>
> wrote:
>
>> Hi,
>>
>> Are you sure kernel mode for rtpengine is enabled and working? Check the
>> logs after rtpengine startup to see if it logs a warning that kernel
>> mode is disabled.
>>
>> Andreas
>>
>> On 05/24/2016 11:57 AM, John Nash wrote:
>> > Thank you, I will check it out. I know such questions are not easy to
>> > answer, so I am also doing some experiments with my customer's VoIP
>> > traffic. I am running around 600 concurrent sessions on a standalone
>> > server (CPU: Intel(R) Xeon(R) CPU E5-2430 v2 @ 2.50GHz, RAM: 32 GB),
>> > but when I checked with the top command I got worried, as CPU usage
>> > and load average seem to be quite high.
>> >
>> > top - 10:47:10 up 3 days,  8:59,  2 users,  load average: 3.44, 2.99, 3.43
>> > Tasks: 453 total,   1 running, 452 sleeping,   0 stopped,   0 zombie
>> > Cpu(s):  1.5%us,  2.7%sy,  0.0%ni, 94.7%id,  0.0%wa,  0.0%hi,  1.1%si,  0.0%st
>> > Mem:  32842468k total,   886396k used, 31956072k free,    98204k buffers
>> > Swap:  1048572k total,        0k used,  1048572k free,   388460k cached
>> >
>> >   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>> >  3238 root      20   0 2163m  36m 1196 S 146.1  0.1   3919:05 rtpengine
>> >     1 root      20   0 17128 1220  984 S  0.0  0.0   0:02.50 init
>> >     2 root      20   0     0    0    0 S  0.0  0.0   0:00.01 kthreadd
>> >     3 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
>> >
>> > On Tue, May 24, 2016 at 3:22 PM, Andreas Granig <agranig at sipwise.com
>> > <mailto:agranig at sipwise.com>> wrote:
>> >
>> >     Hi,
>> >
>> >     There's been a talk at Kamailio World last week, see here:
>> >
>> >     https://www.youtube.com/watch?v=izwsE1XIc3Y
>> >
>> >     That might give you some insights into server dimensioning, tuning,
>> >     and scaling.
>> >
>> >     Andreas
>> >
>> >     On 05/23/2016 11:58 AM, John Nash wrote:
>> >     > I need to test rtpengine with the kernel module (10000 concurrent
>> >     > sessions (IPv4 only), or more if possible). I need to arrange
>> >     > hardware for it, but I am not sure what kind of CPU I should use,
>> >     > how much RAM it should have, and, most critically, what kind of
>> >     > LAN card I should have. One 1 Gbps LAN card or multiple? If
>> >     > multiple, would I need to configure something at the Linux level
>> >     > so that the CPU can utilize all LAN cards efficiently?
>> >     >
>> >     > I know I am asking vague questions, but opinions based on others'
>> >     > experiences will be very beneficial for me.
>> >     >
>> >     >
>> >     > _______________________________________________
>> >     > Spce-user mailing list
>> >     > Spce-user at lists.sipwise.com <mailto:Spce-user at lists.sipwise.com>
>> >     > https://lists.sipwise.com/listinfo/spce-user
>> >     >
>> >     _______________________________________________
>> >     Spce-user mailing list
>> >     Spce-user at lists.sipwise.com <mailto:Spce-user at lists.sipwise.com>
>> >     https://lists.sipwise.com/listinfo/spce-user
>> >
>> >
>>
>
>
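Coming back to the 10000-session sizing question quoted above: a
back-of-the-envelope estimate suggests a single 1 Gbps NIC would be
saturated. This assumes G.711 at 20 ms ptime and plain relaying (no
transcoding); other codecs or packetization intervals change the numbers,
so treat it as a rough sketch only.

```python
# Rough RTP bandwidth estimate for rtpengine relay sizing.
# Assumptions (adjust for your codec/ptime):
#   G.711 at 20 ms ptime -> 50 packets/s per stream direction,
#   160 payload bytes per packet.
PAYLOAD = 160                # G.711 bytes per 20 ms packet
HEADERS = 12 + 8 + 20 + 14   # RTP + UDP + IPv4 + Ethernet headers
PPS_PER_STREAM = 50          # packets/s for one media direction

def per_session_kbps():
    # A relayed session has 2 media streams (one per call direction),
    # and the relay both receives and re-sends each of them,
    # so the NIC sees 2 rx streams and 2 tx streams per session.
    stream_bps = PPS_PER_STREAM * (PAYLOAD + HEADERS) * 8
    rx = 2 * stream_bps      # bits/s received per session
    tx = 2 * stream_bps      # bits/s sent per session
    return rx / 1000, tx / 1000

rx_kbps, tx_kbps = per_session_kbps()
sessions = 10_000
print(f"per session: {rx_kbps:.1f} kbit/s rx, {tx_kbps:.1f} kbit/s tx")
print(f"{sessions} sessions: {sessions * rx_kbps / 1e6:.2f} Gbit/s rx")
print(f"packet rate: {sessions * 2 * PPS_PER_STREAM:,} pps rx")
```

That works out to roughly 1.7 Gbit/s in each direction and on the order of
a million packets per second each way, which is why the question about
multiple NICs (and spreading interrupts across CPUs) matters.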

