[Spce-user] [SEMS] Performance tuning, parameter USE_THREADPOOL
Denys Pozniak
denys.pozniak at gmail.com
Tue Jan 8 13:54:16 EST 2019
Andrew,
Thanks for the detailed information! I will try testing this parameter.
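For reference, here is roughly what I plan to try (just a sketch, assuming a build of SEMS from source; the session_processor_threads option below is taken from the example sems.conf and its exact name/default may differ between versions):

# Makefile.defs: uncomment to compile with the session thread pool
USE_THREADPOOL = yes

# rebuild and reinstall SEMS
make clean && make && make install

# sems.conf: number of threads processing application logic and
# in-dialog signaling (only used when compiled with USE_THREADPOOL)
session_processor_threads=10
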
Wed, Jan 2, 2019 at 16:50, Andrew Pogrebennyk <apogrebennyk at sipwise.com>:
> Denys,
> we have not measured the performance impact with the threadpool _disabled_;
> it is enabled in our SPCE builds. I think it is not optimal to create a
> new thread for every session, but with an average of around 200 concurrent
> calls you probably won't notice any difference. Also, we use SEMS mainly
> for signaling (with a few exceptions) in our Sipwise appliance, and
> stability-wise we have had better experience with the thread pool than without it.
>
> Regards,
> Andrew
>
> On 12/21/2018 05:11 PM, Denys Pozniak wrote:
> > Hello!
> >
> > Could you explain how the USE_THREADPOOL parameter affects performance?
> > I use SEMS with the sbc application with RTP relaying; the average
> > concurrent call count is around 200.
> >
> > I am referring to Makefile.defs:
> >
> > # compile with session thread pool support?
> > # use this for very high concurrent call count
> > # applications (e.g. for signaling only)
> > # if compiled with thread pool, there will be a
> > # thread pool of configurable size processing the
> > # signaling and application logic of the calls.
> > # if compiled without thread pool support, every
> > # session will have its own thread.
> > #
> > #USE_THREADPOOL = yes
> >
> >
> > --
> >
> > BR,
> > Denys Pozniak
> >
> >
> >
> >
--
BR,
Denys Pozniak