traiano has asked for the wisdom of the Perl Monks concerning the following question:

Hi Monks

A little background:

I have a RADIUS server (OSC Radiator on FreeBSD 8.2) which logs RADIUS accounting packets to a PostgreSQL 8.4 database via the Perl DBD::Pg and DBI database interface modules. In general, I've been seeing performance issues when logging to the PostgreSQL database, and when I turn off logging to the database, the performance issues vanish. The issues manifest at the NAS end of the "chain of information flow": the server sending RADIUS accounting packets to the RADIUS server reports dropped and retried attempts when I enable logging to the database on the RADIUS server, and immediately returns to working optimally when I disable the database logging.

All this leads me to think that there is some kind of performance issue at one of three layers of the overall system:

1. The Radiator server software itself.
2. The PostgreSQL server software.
3. The interface between Radiator and PostgreSQL (Perl DBD::Pg and DBI).

Here is a somewhat simplified depiction of the information flow:

{NAS}---{radius accounting packets over network}--->{radius server software}--{PERL DBI, DBD}-->{Postgresql}{underlying server OS and hardware}

My question is: how would I analyse (or otherwise isolate) performance issues around the DBD::Pg and DBI Perl interface between the radius software and the PostgreSQL database?

Many thanks in advance for your wisdom.

Traiano

Replies are listed 'Best First'.
Re: DBI and DBD::Pg: Assessing Performance
by moritz (Cardinal) on Mar 15, 2012 at 08:09 UTC

    You can profile the Perl side with Devel::NYTProf (database access should show up as a subroutine named execute or st::execute or similar), and also log the execution time of the statements at the DB level, then compare your findings. If the Perl side takes much longer than what Postgres logs, you know it's DBI or DBD::Pg.
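    A minimal sketch of both measurements (the radiusd path and config file location are assumptions; adjust for your Radiator installation):

```shell
# 1. Statement-level timing with DBI's built-in profiler (DBI::Profile);
#    no code changes needed, just set an environment variable. A timing
#    summary broken down by statement is printed to stderr at exit:
DBI_PROFILE=2 perl /usr/local/bin/radiusd -config_file /usr/local/etc/radiator/radius.cfg

# 2. Subroutine-level profiling with Devel::NYTProf:
perl -d:NYTProf /usr/local/bin/radiusd -config_file /usr/local/etc/radiator/radius.cfg
nytprofhtml   # turns the resulting nytprof.out into a browsable HTML report

# On the Postgres side, set log_min_duration_statement = 0 in
# postgresql.conf to log every statement with its duration, then
# compare those durations against the Perl-side numbers.
```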

    How many log entries are typically written? Is the IO load high on the server? Are you caching DB connections? (Reconnecting to the DB for each log event would be rather slow; doing a single insert should not be.)
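    A sketch of what cached connections look like on the DBI side (the DSN, credentials, and the accounting table layout are assumptions, not Radiator's actual schema):

```perl
use strict;
use warnings;
use DBI;

# connect_cached returns the same handle on repeated calls instead of
# opening a new TCP connection to Postgres for every accounting packet;
# prepare_cached parses the INSERT once and reuses the statement handle.
sub log_accounting {
    my (%rec) = @_;
    my $dbh = DBI->connect_cached(
        'dbi:Pg:dbname=radius', 'radius', 'secret',
        { AutoCommit => 1, RaiseError => 1 },
    );
    my $sth = $dbh->prepare_cached(
        'INSERT INTO accounting (username, nas_ip, octets) VALUES (?, ?, ?)'
    );
    $sth->execute(@rec{qw(username nas_ip octets)});
}
```

    If each log event currently does its own DBI->connect, the connection setup cost alone could explain drops at 80-120 packets per second.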

      Hi moritz

      Thanks, I will wrap my head around this module and let you know what I find.

      The postgresql server does an average of between 80 and 120 transactions per second (a 1:1 correspondence between transactions and logged packets, in this case).

      The IO load on the server doesn't seem to be problematic; for example, here is a typical vmstat output during a high-activity period:


      root@rad1 /usr/local/etc/radiator# vmstat 1
       procs     memory          page                   disk   faults        cpu
       r b w    avm    fre   flt  re  pi  po    fr  sr mf0   in    sy    cs us sy  id
       0 0 0  1787M  624M   247   0   0   0   230   0   2  119  1838  1240  0  0  99
       0 0 0  1787M  624M    10   0   0   0     4   0   7  148  2069  1487  1  0  99
       0 0 0  1787M  624M     1   0   0   0     0   0   2  120  1498  1247  0  0  99
       0 0 0  1794M  622M  2280   0   0   0  1893   0   0   37  1841   759  0  0 100
       0 0 0  1794M  622M  8178   0   0   0  8170   0   9   59  7580  1470  1  0  99
       0 0 0  1794M  622M  4931   0   0   0  4689   0   0   62  4527   928  0  1  99


        > {PERL DBI, DBD}-->{Postgresql}{underlying server OS and hardware}
        If you can emulate your 80-120 transactions per second at this layer, you can at least find out whether it is Postgres or not. A test script on this machine will show you something.

        On my old notebook PC with PostgreSQL 9.0.3, I ran my test script and it shows that 120 transactions take more than one second. This is the result with the default Postgres configuration.

        count=120 With AutoCommit OFF time=0.339 With AutoCommit ON time=1.266
        And test code below.
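        The test script itself was not preserved in this thread; a minimal sketch of such a benchmark, using a temporary table so nothing persists (the DSN and credentials are placeholders):

```perl
use strict;
use warnings;
use DBI;
use Time::HiRes qw(gettimeofday tv_interval);

my $count = 120;

for my $autocommit (0, 1) {
    my $dbh = DBI->connect('dbi:Pg:dbname=test', 'postgres', '',
        { AutoCommit => $autocommit, RaiseError => 1 });
    # Temporary table lives only for this connection
    $dbh->do('CREATE TEMPORARY TABLE acct_test (id serial, payload text)');
    my $sth = $dbh->prepare('INSERT INTO acct_test (payload) VALUES (?)');

    my $t0 = [gettimeofday];
    $sth->execute("packet $_") for 1 .. $count;
    $dbh->commit unless $autocommit;   # one commit for the whole batch
    printf "count=%d AutoCommit=%s time=%.3f\n",
        $count, $autocommit ? 'ON' : 'OFF', tv_interval($t0);
    $dbh->disconnect;
}
```

        The difference between the two modes is the number of synchronous commits: with AutoCommit off, all 120 inserts share one fsync; with it on, every insert waits for its own.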

        If you tune postgresql.conf and optimize your PostgreSQL server, the result will change. For instance, if you turn the fsync option to "off", the result differs.

        With AutoCommit OFF time=0.224 With AutoCommit ON time=0.259
        *THIS OPTION SHOULD BE HANDLED CAREFULLY*. It is very dangerous to turn fsync off carelessly as I did here: it is not suitable for accounting data, but it may show you where the problem lies. Postgres has many ways of tuning this properly. I hope it helps you find the problem.

        What is the disk configuration on your database server? A single 7200 rpm hard disk drive can on average commit only about 100 transactions per second: at 7200 rpm the platter makes just 120 revolutions per second, and every synchronous commit has to wait for its write to physically reach the disk.

Re: DBI and DBD::Pg: Assessing Performance
by tobyink (Canon) on Mar 15, 2012 at 09:12 UTC

    There may also be improvements you could make in your database structure to speed things up. For example, if you've got a lot of indices on your tables, then this speeds up reads at the expense of slower writes. So perhaps you could improve performance of writes by removing some unused indices.
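    A sketch of how you might find such candidates from Perl, using PostgreSQL's statistics views (connection details are placeholders): indexes with zero scans since the statistics were last reset are the first ones to consider dropping.

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=radius', 'radius', 'secret',
    { RaiseError => 1 });

# pg_stat_user_indexes tracks how often each index has been scanned;
# idx_scan = 0 means the planner has never used it for a read.
my $rows = $dbh->selectall_arrayref(q{
    SELECT schemaname, relname, indexrelname
      FROM pg_stat_user_indexes
     WHERE idx_scan = 0
     ORDER BY relname
});
printf "%s.%s: index %s never scanned\n", @$_ for @$rows;
```

    Every index listed there is still being updated on each INSERT of an accounting row, so unused ones are pure write overhead.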

    perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
Re: DBI and DBD::Pg: Assessing Performance
by salva (Canon) on Mar 15, 2012 at 08:34 UTC