in reply to Re: Most efficient MS SQL connection
in thread Most efficient MS SQL connection

The vast bulk of the logic is performed on the web front end (we have just recently migrated from a single server to a split architecture). Even with a fairly simple data retrieval and display (just grab rows from tables with no more than a single join and dump to the screen with almost no front end logic), it takes a disproportionate amount of time to retrieve results. I will agree that the larger performance hit comes with complexity of statements vice complexity of post processing logic. We are getting ready to deploy the new architecture to production shortly (finishing certification process), so I will have better 'real world' performance stats soon.

Re^3: Most efficient MS SQL connection
by dHarry (Abbot) on Mar 03, 2010 at 15:00 UTC

    Interesting, even when "there is not much to do" it appears to be slow. I think you have to do some measurements to find out exactly where the bottleneck is. If you execute the query from a SQL client, does it take a long time too? One thing I can think of is the overhead introduced by initiating a connection. Does the application initiate a lot of (new) connections, or does it reuse them, i.e. does it use connection pooling?
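    If it turns out the application opens a fresh connection for every request, connection reuse is an easy thing to test. Below is a minimal sketch, assuming an ODBC DSN; the DSN, credentials and loop count are placeholders rather than anything from the application. It compares a new DBI->connect per query with DBI's connect_cached, which returns the same underlying handle on repeated calls:

    use strict;
    use warnings;
    use DBI;
    use Benchmark::Timer;

    # Placeholder connection details; substitute your own ODBC data source.
    my $dsn  = 'dbi:ODBC:MyDSN';
    my $user = 'user';
    my $pass = 'pass';

    my $t = Benchmark::Timer->new;

    # A fresh connection for every query, as a naive front end often does.
    $t->start('reconnect');
    for (1..20) {
        my $h = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });
        $h->selectrow_array(q/select 1/);
        $h->disconnect;
    }
    $t->stop('reconnect');

    # connect_cached hands back the same handle each time, so only the
    # first iteration pays the connection cost.
    $t->start('cached');
    for (1..20) {
        my $h = DBI->connect_cached($dsn, $user, $pass, { RaiseError => 1 });
        $h->selectrow_array(q/select 1/);
    }
    $t->stop('cached');

    print $t->reports;

    If the two timings come out close, connection overhead is not the problem and the bottleneck is elsewhere.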

Re^3: Most efficient MS SQL connection
by mje (Curate) on Mar 03, 2010 at 16:03 UTC
    Even with a fairly simple data retrieval and display (just grab rows from tables with no more than a single join and dump to the screen with almost no front end logic), it takes a disproportionate amount of time to retrieve results.

    You seem to be shying away from giving us an example of what you call disproportionately slow. Dumping to the screen can take a long time, since you are performing IO on a terminal, scrolling, etc. Here is a concrete example against MS SQL Server via an ODBC driver, run from a machine remote to the database and connected through a router over 100M ethernet. The machine running SQL Server Express is so old and slow it was retired even as a desktop (would you call this slow?):

    use DBI;
    use strict;
    use warnings;
    use Benchmark::Timer;
    use Data::Dumper;

    my $t = Benchmark::Timer->new;

    $t->start('main');
    # Connection details come from the DBI_DSN/DBI_USER/DBI_PASS environment variables.
    my $h = DBI->connect;

    # Any true command-line argument recreates and repopulates the test table.
    if ($ARGV[0]) {
        print "Recreating table\n";
        eval { $h->do(q/drop table mje/); };
        $h->do(q/create table mje (a varchar(50), b varchar(50), c varchar(50), d varchar(50))/);
        $h->begin_work;
        my $s = $h->prepare(q/insert into mje values(?,?,?,?)/);
        my $a = 'a' x 50;
        my $b = 'b' x 50;
        my $c = 'c' x 50;
        my $d = 'd' x 50;
        foreach (1..50000) {
            $s->execute($a, $b, $c, $d);
        }
        $h->commit;
    }
    $t->stop('main');

    $t->start('fetch');
    my $r = $h->selectall_arrayref(q/select * from mje/);
    $t->stop('fetch');

    $t->start('dump');
    print Dumper($r);
    $t->stop('dump');

    print "Rows fetched:", scalar(@$r), "\n";
    print $t->reports;

    which outputs

    many many lines like the following

      [
        'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa',
        'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb',
        'cccccccccccccccccccccccccccccccccccccccccccccccccc',
        'dddddddddddddddddddddddddddddddddddddddddddddddddd'
      ]

    Rows fetched:50000
    main   1 trial of main  (44.867ms total)
    fetch  1 trial of fetch (1.715s total)
    dump   1 trial of dump  (31.614s total)

    I only keep harping on about this because I'm not sure you have really identified what is slow, and you have not given us any detail to go on.
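    If you want to see where the time is actually going without restructuring the application, DBI's built-in profiling (DBI::Profile) is one option; you can also switch it on from outside the code by setting the DBI_PROFILE environment variable. A minimal sketch, with a placeholder DSN and table name rather than anything from your application:

    use strict;
    use warnings;
    use DBI;

    # Placeholder connection details; substitute your own.
    my $h = DBI->connect('dbi:ODBC:MyDSN', 'user', 'pass', { RaiseError => 1 });

    # Accumulate timings per statement; roughly equivalent to running the
    # program with DBI_PROFILE=!Statement set in the environment.
    $h->{Profile} = '!Statement';

    my $r = $h->selectall_arrayref(q/select * from some_table/);
    print "Rows fetched: ", scalar(@$r), "\n";

    # A summary of time spent in connect/prepare/execute/fetch is printed
    # to STDERR when the program exits.

    That would at least separate time spent in the database and driver from time spent formatting and printing on the front end.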