josh803316 has asked for the wisdom of the Perl Monks concerning the following question:

I have run into a problem in my web application: I create an entry in the database, then move to another area of the site, list the rows, and my created entry is missing. If I use the command-line mysql client or query-browser, I see that the entry was created. So the only thing I can currently guess is that this is caused by the MySQL query cache. I'm creating the entry using transactions. I wondered if there is a best-practice way of checking/clearing the cache when a new entry has been created.

The following is how the entries are created as well as the list_all attempt to grab the rows.
#*****************************************************************************
# public create
# *************
# Uses:   Saves the given object into the database
# Params: HASH with new object attributes
# Return: The inserted object, or undef if the Object can't be inserted
#*****************************************************************************
sub create {
    my $class = shift;
    my ($param) = @_;
    my $dbh = Jnms::Database::Control::DatabaseManager->get_database->{dbh};
    my $id;
    my $error;

    # check if the required fields are there and insert on the db
    #print STDERR Dumper($class, $param);
    $class->_check_required_create_fields($param);

    eval {
        $id = $class->_insert_into_database($param);
        $dbh->commit;
    };
    if ($@) {
        warn "Transaction aborted because $@";
        $error = $@;
        # now rollback to undo the incomplete changes
        # but do it in an eval{} as it may also fail
        eval { $dbh->rollback };
        die "Transaction aborted because of $error";
        # add other application on-error-clean-up code here
    }

    # return the inserted object
    return $class->find($id);
}

#*****************************************************************************
# public list_all
# ***************
# Uses:   gets all Objects that match the child class type from the db
# Params: None.
# Return: an ARRAY of all Objects of the requested type
#*****************************************************************************
sub list_all {
    my $class = shift;
    my ($param) = @_;
    my $dbh = Jnms::Database::Control::DatabaseManager->get_database->{dbh};

    # get the constants specified on the child class
    my $columns = join(', ', $class->TABLE_COLUMNS);
    my $table   = $class->TABLE_NAME;
    my $order   = $class->LIST_ORDER;
    my @column_values = ();

    my $objects = $dbh->selectall_arrayref(
        "SELECT $columns FROM $table ORDER BY $order",
        { Slice => {} }, @column_values);
    bless($_, ref($class) || $class) foreach @$objects;

    my $depth = 0;
    if (ref $param eq 'HASH') {
        $depth = $param->{depth};
    }
    elsif ($param) {
        $depth = $param;
    }
    unless (defined $depth) { $depth = 0; }

    $objects = $class->_find_foreign_keys($objects, $depth);
    return $objects;
}

Replies are listed 'Best First'.
Re: Mysql query cache and DBI
by graff (Chancellor) on Nov 07, 2009 at 07:17 UTC
    There's a DBI database handle attribute called "RowCacheSize", which can be set to "1" to "disable the local row cache" when calling connect(); I'm not familiar with Jnms::Database::Control::DatabaseManager, but if that is where the connect call is being made, that's where you'd want to check whether this parameter is being handled, and how you can control it.

    (I don't know if this will solve your problem, but it looks like a good thing to try. I'm guessing the default is "0", which will "Automatically determine a reasonable cache size for each SELECT".)

    UPDATE: The section in the DBI man page about RowCacheSize closes with:

    See also the "RowsInCache" statement handle attribute.

    There may be some situations where you want to control caching at the statement-handle level rather than the connection level. Having a row cache is usually a Good Thing, and disabling it indiscriminately might be a Bad Thing.

    (Oops! Another update: I just noticed that "RowsInCache" is "readonly". Oh well, no cache control at the statement level. Good luck...)

      So I tried setting RowCacheSize => 1, to no avail; I'm still faced with the same issue and not sure what step to take next.
      $dbh = DBI->connect($database, $self->{user}, $self->{pass},
          { RowCacheSize => 1 }) or die DBI::errstr;
      $dbh->{AutoCommit} = 0;
      $dbh->{RaiseError} = 1;
        It turns out the problem is related to transactions and AutoCommit. I had to do a $dbh->commit; after my reads (selects) in order to end the open transaction. Once I added the commit after the select in my list_all, it fixed the issue.
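        A minimal sketch of that fix, outside the Jnms wrappers (the connection details and table are placeholders; with AutoCommit => 0, every statement, including a SELECT, runs inside a transaction, and under InnoDB's default REPEATABLE READ isolation that transaction keeps reading from the snapshot taken when it started):

        use strict;
        use warnings;
        use DBI;

        # Hypothetical connection parameters for illustration only.
        my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'pass',
            { AutoCommit => 0, RaiseError => 1 });

        my $rows = $dbh->selectall_arrayref(
            'SELECT id, name FROM widgets ORDER BY id', { Slice => {} });

        # The SELECT above opened a transaction and pinned a snapshot.
        # Committing ends that transaction, so the next read from this
        # handle sees rows committed by other connections in the meantime.
        $dbh->commit;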
Re: Mysql query cache and DBI
by bichonfrise74 (Vicar) on Nov 07, 2009 at 00:20 UTC
    When you use your code to insert the data and you do not see the data when you navigate to another part of your site, do you actually see the data when you access your database from the command line?

    If yes, then this has nothing to do with the query cache at all; in fact, the MySQL query cache is only used for SELECT queries. Either your data was inserted or it was not.

    If no, then there is a problem with the way you are inserting the data into your database.
      The data is inserted; I can see the values on the mysql command line (or via query browser). The problem is the subsequent list_all, which returns the entire set minus the newly created data. This is what led me to think it was a caching issue.

      Also, the list_all is doing a select query, so again this could very well be a caching issue:
      Below code comes from list_all
      my $objects = $dbh->selectall_arrayref(
          "SELECT $columns FROM $table ORDER BY $order",
          { Slice => {} }, @column_values);
        I realized that I should add more information to this post. The scope is different for each request. In the list_all request I'm querying a gearman (gearman.org) database worker daemon, which is a long-running process that keeps the $dbh in memory. The create request goes through a different method and a new $dbh, which means it must have something to do with the cache of the long-running process. How can I refresh/reset the cache?
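          What looks like a stale cache here is more likely InnoDB's consistent-read snapshot: with AutoCommit => 0, the worker's handle opens a transaction on its first SELECT and, under the default REPEATABLE READ isolation, keeps serving reads from that snapshot until the transaction ends. A hedged sketch of two ways to refresh it in a long-running worker (connection details are placeholders, not the Jnms or gearman code):

          use strict;
          use warnings;
          use DBI;

          # Hypothetical handle for the long-running worker.
          my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'pass',
              { AutoCommit => 0, RaiseError => 1 });

          # Option 1: end the read-only transaction after each job so the
          # next SELECT opens a fresh snapshot.
          # ... run selects for one job ...
          $dbh->commit;    # or $dbh->rollback;

          # Option 2: relax isolation so each consistent read sees the
          # latest committed data instead of the transaction-start snapshot.
          $dbh->do('SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED');

          Option 1 matches the fix reported above (commit after the selects); option 2 trades repeatable reads within a transaction for always-current data, so use it only if the worker does not rely on snapshot consistency.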