That really depends on a few factors:
- The size of the records being fetched
- The length of time it takes for the query to be run
- The amount of memory on the machine you are running the query on
In general, the best solution is to run the query a single time and pass around an arrayref to any subroutines doing the checksums and computations. This works well for smaller-ish query results, or for expensive queries that you don't want to subject your database server to very often.
Of course, this could vary if you have a large recordset. I would not attempt this with 100,000 rows of data, for example, unless you've got a lot of memory and/or the machine is not being used by multiple simultaneous users.
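Here is a rough sketch of that fetch-once-and-pass-an-arrayref approach. The table, column, bind value, and the bodies of the two subs are placeholders for whatever your real code does, and it assumes $ENV{DBI_DSN} is already set (as it is in the longer example below):
#!/usr/bin/perl -w
use strict;
use DBI;

my $dbh = DBI->connect;   # picks up $ENV{DBI_DSN}

# One trip to the database; every row comes back as a hashref
my $results = $dbh->selectall_arrayref(
    'SELECT * FROM table WHERE column = ?',
    { Slice => {} },
    'value',
);
$dbh->disconnect;         # done with the server; the rest is all in memory

if ( checksums($results) ) {
    my $answer = computation($results);
    # do something with $answer
}

sub checksums {
    my $rows = shift;
    # validate the rows here
    return 1;
}

sub computation {
    my $rows = shift;
    # perform the computation on the rows here
    return $rows;
}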
With that said, it is possible to pass around a single prepared $sth handle. All you have to remember is to pass both it AND your @bind_values around. For example:
#!/usr/bin/perl -wT
use strict;
use DBI;
# Global database connection string, set in Apache's httpd.conf
$ENV{DBI_DSN} ||= 'DBI:mysql(RaiseError=>1,ChopBlanks=>1,Taint=>1,AutoCommit=>1):'
                . 'mysql_read_default_file=/etc/.my.cnf;database=MyDB';
my $statement = q{
    SELECT *
    FROM   table
    WHERE  column = ?
};
my @bind_values = qw(value);

my $dbh = DBI->connect;
my $sth = $dbh->prepare($statement);

if ( checksums($sth, \@bind_values) ) {
    my $results = computation($sth, \@bind_values);
    # do something with the $results
}

$dbh->disconnect;

sub checksums {
    my $sth         = shift;
    my $bind_values = shift;
    $sth->execute( @$bind_values );
    my $results = $sth->fetchall_arrayref({});
    # validate the $results
    return 1;
}

sub computation {
    my $sth         = shift;
    my $bind_values = shift;
    $sth->execute( @$bind_values );
    my $results = $sth->fetchall_arrayref({});
    # perform computation on the $results
    return $results;
}
IMHO, you're playing with fire querying and validating the data once, then querying it a second time to fetch the results.
One reason that comes to mind is that between the first and second query, the $results could be modified by another user or process. In that case, the query results from the first pass are irrelevant, and the second pass could be handing you "bad data" which needs to be validated again.
Granted, the risk is remote in some applications, but it still could occur, and it is something you need to keep in mind.
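If you do go the re-execute route, one sketch of a way to narrow that window is to wrap both executes in a single transaction, so both passes should see a consistent view of the data. This assumes a transactional storage engine such as InnoDB (it buys you nothing with MyISAM), and it continues with the $dbh, $sth, and @bind_values from the script above:
$dbh->begin_work;    # suspends AutoCommit for the duration of the transaction
my $valid   = checksums($sth, \@bind_values);                     # first execute + fetch
my $results = $valid ? computation($sth, \@bind_values) : undef;  # second execute + fetch
$dbh->commit;        # release the snapshot
In a real script you would want to wrap that in an eval and call $dbh->rollback on failure, since RaiseError is on.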