in reply to InterBase MEMO/BLOB fields, data with size > 1MB

It looks like you have a bug to work around at least.

Some DBD::* modules define their own ways to handle large BLOBs. (Well, DBD::Sybase does.) It looks like DBD::InterBase does not.

If you're willing to compile from locally-modified source, with some poking around you're pretty likely to be able to find the 1 MB limit and change it to something more reasonable. That would move the problem, but you'd have to remember to patch it every time you install. With more work you might be able to remove the arbitrary restriction. In which case you could submit the patch back to the DBI (and possibly also the DBD::InterBase) folks.

If what you need is only a little bigger than 1 MB then you could always use Compress::Zlib to deflate data before storing it, and inflate afterwards.
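A minimal sketch of that approach (the $raw and $fetched variables are just placeholders for whatever you store and fetch):

    use Compress::Zlib;

    # before the INSERT: deflate the value you were about to store
    my $deflated = compress($raw)
        or die "compress failed";

    # after the SELECT: inflate what came back from the BLOB column
    my $inflated = uncompress($fetched)
        or die "uncompress failed";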

Failing that, one way to work around the issue is to create a table where you store the data across several rows. You could then define a function call to take a large field and divide it into several pieces that you store under some ID. And a reverse function to fetch those rows back and reassemble them.
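A rough sketch of such a helper table (all of the names here are invented for illustration, and the InterBase column types may need adjusting):

    # hypothetical table for storing one large value as numbered pieces
    $dbh->do(q{
        CREATE TABLE DOCUMENT_CHUNKS (
            DOC_ID  INTEGER NOT NULL,  /* external ID, joins back to your main table */
            SEQ     INTEGER NOT NULL,  /* order in which to reassemble the pieces */
            CHUNK   BLOB               /* one piece, kept under the 1 MB limit */
        )
    });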

Re: Re: InterBase MEMO/BLOB fields, data with size > 1MB
by pet (Novice) on May 31, 2004 at 12:04 UTC

    Thanks for the suggestion/comment; it is obvious that the DBD::InterBase module has a limit of exactly 1 MB when fetching data from the database; unfortunately, I just can't find where that limit "lives" so that I can change or even remove it! The other thing is that I would prefer a limit of 10 MB instead!

    As I said, in my script(s) I use a MEMO as well as a BLOB field, and in this case the data (in the BLOB field) is already zipped (so there I fetch binary data):

    # . . . . . - rest of the code
    # here, NAME is CHAR(16) - not so important, while DATA is the BLOB field!
    $sql = "SELECT NAME, DATA FROM DOCUMENTS WHERE DOCNAME='$something'";  # $something was previously defined!
    $sth = $dbh->prepare($sql) or die "Preparing: ", $dbh->errstr;
    $sth->execute or die "Executing: ", $sth->errstr;
    $ii   = 1;
    $name = "";
    # fetching the content
    while (@row = $sth->fetchrow_array()) {
        foreach (@row) {
            if ($ii % 2 == 1) {
                # odd column, so it is the DOKUMENT's NAME!
                $name = $_;
            }
            else {
                # even column, so this is DATA!
                # open a file handle for writing the content of the BLOB field
                open(F, ">./$name") or die "Opening ./$name: $!";
                binmode F;
                # saving into the file "$name"!
                print F $_;
                close F;
                $name = "";
            }
            $ii++;
        }
    }
    $sth->finish;
    $dbh->commit     or warn $dbh->errstr;
    $dbh->disconnect or warn $dbh->errstr;
    # . . . . . - rest of the code

    So, I think that deflating and inflating is not a good idea. The same reason (binary data) is why I don't want to "break a large field into several rows"; I would rather change the limit from the default 1 MB to 10 MB.

    But, where and how can I fix that?

    Regards, Pet.

      I can't tell what you need to change without staring at the source code myself in detail and trying it.

      You're right that I'd missed the important detail that you were already compressing the data. In that case compressing again won't help significantly; compression is not magic. If it could have done better the first time around, then it should have...

      I don't understand your objection to splitting one large field across rows. The fact that it is binary data is not a problem: you use substr to turn one string into several pieces, and you insert them into a table with three fields, an external ID so that you can join to other tables, a sequence number so that you know what order to put the pieces back in, and a data field. Then use join to put the pieces back together.

      This strategy will work with arbitrary binary data to arbitrary size as long as your database handles binary data and your memory handles the string manipulations.
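      A rough sketch, reusing the hypothetical DOCUMENT_CHUNKS layout suggested above (DBD::InterBase may want explicit BLOB parameter binding, so treat this as an outline rather than tested code):

          use constant CHUNK_SIZE => 500_000;   # comfortably under the 1 MB limit

          # store one large string as numbered pieces
          sub store_blob {
              my ($dbh, $doc_id, $data) = @_;
              my $ins = $dbh->prepare(
                  "INSERT INTO DOCUMENT_CHUNKS (DOC_ID, SEQ, CHUNK) VALUES (?, ?, ?)");
              my $seq = 0;
              for (my $pos = 0; $pos < length $data; $pos += CHUNK_SIZE) {
                  $ins->execute($doc_id, $seq++, substr($data, $pos, CHUNK_SIZE));
              }
          }

          # fetch the pieces back in order and join them into one string
          sub fetch_blob {
              my ($dbh, $doc_id) = @_;
              my $sel = $dbh->prepare(
                  "SELECT CHUNK FROM DOCUMENT_CHUNKS WHERE DOC_ID = ? ORDER BY SEQ");
              $sel->execute($doc_id);
              my @pieces;
              while (my $row = $sel->fetchrow_arrayref) {
                  push @pieces, $row->[0];
              }
              return join '', @pieces;
          }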

        Thanks for the effort, but as I said, I would rather change the buffer size than deflate/inflate the data into a new table.

        Btw, I got info from the author of DBD::InterBase (Edwin Pratomo): the limit is hardcoded, so just change the buffer size in dbdimp.h (MAX_SAFE_BLOB_LENGTH) from 1000000 to the desired value and re-compile:

        . . . #define MAX_SAFE_BLOB_LENGTH (1000000) . . .
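        For a 10 MB limit, that line would presumably become something like:

        . . . #define MAX_SAFE_BLOB_LENGTH (10000000) /* assumed value, taking 10 MB as ten million bytes */ . . .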
        So I did, and it worked!

        Regards, - Pet.