pboin has asked for the wisdom of the Perl Monks concerning the following question:

Ugh... long, painful day and night. I hope I'm not missing something stupid...

I've got a simple, four-column SQLite table. What's got me all bent out of shape is that fetchrow_hashref looks like it behaves differently depending on how many rows (1 or >1) my executed $sth is holding. If there are multiple rows, a following UPDATE query that I'm building with a hash member won't 'prepare'. I know that probably doesn't make any sense, but ...

I've done my best to bang together a working example of the problem. It's not exactly easy on the eyes, but I've taken the liberty of setting $dbh->trace(1) so you can see what I see. (I did some serious ripping-out-of-stuff to make the size manageable.)

Thank you Monks. Code follows...

Apr 17 07:45 -- corrected a typo

Apr 19 04:45 Answer: As it turns out, there's a locking problem here. The first SELECT is still open and being read by the fetchrow_hashref(). The UPDATE statement was blowing up whenever its execute would actually have 'hit' a row. If I massaged the SQL so that the WHERE clause prevented any write from happening, all was well.

So, to fix this, I used fetchall_hashref() to load the whole result set into a hash, then undefined the $sth and moved on -- it worked like a dream.
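For reference, here's a minimal sketch of that workaround, written as a drop-in replacement for the process_files sub in the code below. It's only a sketch: it assumes the same $dbh global and files table as the script, and it swaps the concatenated WHERE clause for a bind placeholder, which the original code doesn't use.

sub process_files {
    my $limit = shift;

    my $sql = "SELECT * FROM files";
    if ($limit) { $sql .= " limit $limit"; }

    my $sth = $dbh->prepare($sql) or die $dbh->errstr;
    $sth->execute() or die $dbh->errstr;

    # Pull the whole result set into memory, keyed on the primary key column.
    my $rows = $sth->fetchall_hashref('locname');

    # Drop the SELECT handle so it no longer holds the database open.
    undef $sth;

    # Now the UPDATEs run with no SELECT still pending on the connection.
    for my $locname (keys %$rows) {
        my $upd = $dbh->prepare("UPDATE files SET uploadrc='x' WHERE locname=?")
            or die $dbh->errstr;
        $upd->execute($locname) or die $dbh->errstr;
    }
}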

This is *not* how I read the FAQ. Also, it doesn't make sense to me why a read-only SELECT would lock the DB in the first place, but that's what's happening.

Emails sent to SQLite.org and to the DBD::SQLite author.

#!/usr/bin/perl
use DBI;
use DBD::SQLite;
use strict;
use warnings;

my $dbname = 'upman.db';
my $dbh;

open_db();
process_files(1);    # first pass: SELECT limited to one row
process_files();     # second pass: SELECT returns every row
exit;

sub process_files {
    my $limit = shift;

    my $sql = "SELECT * FROM files";
    if ($limit) { $sql = $sql . " limit $limit"; }

    my $sth = $dbh->prepare($sql) or die $dbh->errstr;
    $sth->execute() or die $dbh->errstr;

    # While rows are still pending on the SELECT, run an UPDATE on the same table.
    while (my $row = $sth->fetchrow_hashref()) {
        my $sql = "UPDATE files SET uploadrc='x' "
                . "WHERE locname='" . $row->{locname} . "'";
        my $sth = $dbh->prepare($sql) or die $dbh->errstr;
        $sth->execute() or die $dbh->errstr;
    }
}

sub open_db {
    $dbh = DBI->connect("dbi:SQLite:dbname=$dbname") or die $DBI::errstr;
    while (<DATA>) {
        $dbh->do($_) or die $dbh->errstr;
    }
    $dbh->trace(1);
}

__DATA__
DROP TABLE files;
CREATE TABLE files (locname primary key, size, uploadrc, rmtname);
INSERT INTO files VALUES('dat/19980820/ACCT3445',1451,NULL,'D980820.XACCT3445');
INSERT INTO files VALUES('dat/19980820/ACCT3470',5961,NULL,'D980820.XACCT3470');
INSERT INTO files VALUES('dat/19980820/ACH0232',37723,NULL,'D980820.XACH0232');
INSERT INTO files VALUES('dat/19980820/ACH4498',5612,NULL,'D980820.XACH4498ACH');
INSERT INTO files VALUES('dat/19980821/TD7605G',181755,NULL,'D980821.XTD7605GL');
INSERT INTO files VALUES('dat/19980821/TD7695ReportLegend',3389,NULL,'D980821.XTD7695');
CREATE UNIQUE INDEX rmtname ON files(rmtname);

Re: Fetchrow_hashref behaves differently when >1 row in $sth?
by simonm (Vicar) on Apr 17, 2004 at 04:09 UTC
    Your sample code runs fine for me -- well, at least it did after I figured out I had to run it without the "drop table" line the first time... *grin*

    Are you using a recent version of DBD::SQLite?

      As recent as CPAN has, which is 0.31. Just to confirm, are you telling me that you *don't* get the following on the second sql update:
      <- prepare('UPDATE files SET uploadrc='x' WHERE locname='dat/19980820/ACCT3445'')= DBI::st=HASH(0x829e50c) at upman.pl line 30
      !! ERROR: 1 undef
      <- execute= undef at upman.pl line 32
      Hmmm... I thought that posting 'working' broken code would prevent that. Thanks for taking the time to look, though.
        Just to confirm, are you telling me that you *don't* get the following on the second sql update: ... !! ERROR: 1 undef

        Correct, there's no "ERROR" in the output, attached below.

        Since this sounds like a bug on your system, it's time for a bug report to the author. In that bug report, the output of "perl -V" is likely to be helpful.

        Since the error shows up in the database's own output, odds are that the bug will have to be reported to the author of SQLite. For that it may be helpful to visit http://www.hwaci.com/sw/sqlite/ and see if you can generate the error without Perl in the mix. If you can, then report it there directly instead.

        Yes, I know that all of this is a PITA to do. But having users be willing to go through the work of submitting good bug reports is part of what makes open source able to work. (And normally if you do this, the quality of the support that you get is quite impressive.)