While using perl to make some DBI calls to a PostgreSQL database, I realized how often I select data from a view, or use a function/stored procedure. Naturally, the functions/SPs are in SQL or PL/pgSQL, and the selection and final presentation of the data is handled by perl. Now, with some new advances in PL/Perl -- a perl-based procedural language for PostgreSQL -- I started to think about Where Things Are Done.

Correlated subqueries lend themselves nicely to being rewritten as functions/SPs in PL/pgSQL, rather than downloading messy blocks of data into memory only to discard most of it afterwards.
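As a sketch of what I mean (table and column names are hypothetical), the per-row lookup in a correlated subquery can be factored into a PL/pgSQL function that keeps the work on the server:

```sql
-- Hypothetical schema: customers(id, name), orders(customer_id, placed_on)

-- Correlated subquery: the inner SELECT is re-evaluated per outer row
SELECT c.name,
       (SELECT max(o.placed_on)
          FROM orders o
         WHERE o.customer_id = c.id) AS last_order
  FROM customers c;

-- The same lookup wrapped in a PL/pgSQL function, callable from any query,
-- so perl never has to pull the raw order rows over just to reduce them
CREATE FUNCTION last_order(cust integer) RETURNS date AS $$
DECLARE
    d date;
BEGIN
    SELECT max(placed_on) INTO d FROM orders WHERE customer_id = cust;
    RETURN d;
END;
$$ LANGUAGE plpgsql;

SELECT c.name, last_order(c.id) FROM customers c;
```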

Perl, of course, excels at regular expression processing. So perl might be used for data validation -- implemented in PL/Perl as a trigger, thereby letting the database handle the atomicity of the transaction -- rather than validating in perl, inserting via DBI, verifying in PL/pgSQL, and then possibly rolling the whole thing back across a span of different languages and routines.
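A minimal sketch of that idea (table, column, and regex are illustrative, not prescriptive), using the `$_TD` trigger hash that PL/Perl exposes; raising an error from the trigger aborts the statement, so the database keeps the transaction atomic:

```sql
-- Hypothetical table: users(email text, ...)
CREATE FUNCTION validate_email() RETURNS trigger AS $$
    # $_TD->{new} is a hash reference holding the incoming row
    my $email = $_TD->{new}{email};
    unless (defined $email && $email =~ /^[\w.+-]+@[\w-]+(\.[\w-]+)+$/) {
        # elog(ERROR, ...) aborts the INSERT/UPDATE; the enclosing
        # transaction rolls back as a unit -- no cross-language cleanup
        elog(ERROR, 'invalid email address');
    }
    return;   # undef means: proceed with the row unchanged
$$ LANGUAGE plperl;

CREATE TRIGGER users_email_check
    BEFORE INSERT OR UPDATE ON users
    FOR EACH ROW EXECUTE PROCEDURE validate_email();
```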

It appears that where things are done is becoming less of a restriction and more of a personal choice, based on factors unique to each problem domain and its solution criteria.

So, which method/s would you use?

And why?

Contemplatively,
-v.
"Perl. There is no substitute."

Re: Perl, PostgreSQL, and Where Things Are Done
by perrin (Chancellor) on Jan 20, 2005 at 03:47 UTC
    My main question would be "how stable is PL/Perl?" You need to be pretty sure something like this is solid before you start using it in a major way on your database server.

    However, assuming it is solid at this point, the biggest downside seems to be the limited level of perl support. I don't see any support for running the debugger or for using modules. That takes away a lot of the value of using perl.

    Clustering could be a downside too. When you run your code in a separate server like mod_perl, you can spread your code across 20 machines very simply, and harness the power of cheap hardware. Having twenty databases in a cluster is a lot harder, although it is possible in some situations where a small amount of replication lag is not an issue.

    Even with these downsides, it still might be the perfect fit for certain kinds of database problems that would normally require a lot of data to be transferred.

      My main question would be "how stable is PL/Perl?"

      It's been around for a while, if that's any guide to stability. Just for the record, Perl stored procedures for Pg were added by Mark Hollomon in 7.0, which was released 2000-05-08. That means it's coming up on five years old.

      See also http://www.postgresql.org/docs/8.0/static/plperl-trusted.html for a bit more on using modules.

      - another intruder with the mooring in the heart of the Perl

Re: Perl, PostgreSQL, and Where Things Are Done
by rob_au (Abbot) on Jan 20, 2005 at 09:57 UTC

    I actually did some work with PostgreSQL and PL/Perl about 18 months ago and found the implementation, whilst interesting, somewhat lacking. The performance of this coupling of PostgreSQL and Perl was less than stellar, with, if I remember correctly, the Perl interpreter being launched each time a stored function employing PL/Perl was called. Additionally, I seem to remember having major concurrency issues with DBD::SPI, which at the time didn't support transactions anyway, negating the value of some of the database server-side processing which I was looking to implement with this arrangement.

    In short, it's an interesting arrangement, but if you want to extend your database server into something more like an "application" server, you would be better served by exploring the server programming interface with C.

     

    perl -le "print unpack'N', pack'B32', '00000000000000000000001000000000'"

Re: Perl, PostgreSQL, and Where Things Are Done
by castaway (Parson) on Jan 20, 2005 at 11:41 UTC
    Do you mean you usually discard much of the data selected via Perl/DBI, because you grabbed too much and need to filter it afterwards? If so, I think you can already do plenty more (including subqueries) using normal SQL.

    I assume PL/Perl is just 'another way to write SQL procedures'? Assuming it's just as fast, it sounds interesting, but I'd just use it to write functions to be called from normal queries, if anything.

    C.

Re: Perl, PostgreSQL, and Where Things Are Done
by zby (Vicar) on Jan 20, 2005 at 10:07 UTC
    A side note - do you think PL/Perl can become something like mod_perl?
      A side note - do you think PL/Perl can become something like mod_perl?

      An interesting thought ... Based on my experience, I would say that in its current form, probably not. This is because of the lack of any persistence in the execution of the embedded Perl interpreter, and the lack of any callback execution based upon server events outside of the specific execution of a stored procedure in an SQL query. This could, however, be an interesting direction for future development of the PL/Perl interface.

       

      perl -le "print unpack'N', pack'B32', '00000000000000000000001000000000'"

        I wonder whether it would be faster to have a perl daemon listening on a unix socket and have the PL/Perl module farm out some evals to it. Does the embedded perl interpreter have direct access to the datastore (some sort of shared memory), or are values 'passed in' (copied)?