in reply to Re: InterBase MEMO/BLOB fields, data with size > 1MB
in thread InterBase MEMO/BLOB fields, data with size > 1MB

Thanks for the suggestion/comment; it is clear that the DBD::InterBase module has a limit of exactly 1 MB when fetching data from the database. Unfortunately, I just can't find where that limit "lies" so that I can change it or even remove it! In any case, I would prefer a limit of 10 MB instead.

As I said, in my script(s) I use a MEMO field as well as a BLOB field, and in this case the data in the BLOB field is already zip-ed (so I am fetching binary data):

# . . . . . - rest of the code
# NAME is CHAR(16) (not so important), while DATA is the BLOB field!
# A placeholder is safer than interpolating $something into the SQL.
$sql = "SELECT NAME, DATA FROM DOCUMENTS WHERE DOCNAME = ?";
$sth = $dbh->prepare($sql) or die "Preparing: ", $dbh->errstr;
$sth->execute($something) or die "Executing: ", $sth->errstr; # $something was previously defined!

# fetch each row as a (NAME, DATA) pair
while (my ($name, $data) = $sth->fetchrow_array()) {
    # open a file handle and save the content of the BLOB field into the file "$name"
    open(F, ">./$name") or die "Opening ./$name: $!";
    binmode F;          # the BLOB holds binary (zip-ed) data
    print F $data;
    close F or warn "Closing ./$name: $!";
}
$sth->finish;
$dbh->commit     or warn $dbh->errstr;
$dbh->disconnect or warn $dbh->errstr;
# . . . . . - rest of the code

So I think that deflating and inflating the data again is not a good idea. The same reason (binary data) is why I don't want to "break a large field into several rows"; I would rather change the limit from the default 1 MB to 10 MB.

But where and how can I fix that?

Regards, Pet.

Re: Re: Re: InterBase MEMO/BLOB fields, data with size > 1MB
by tilly (Archbishop) on Jun 01, 2004 at 01:32 UTC
    I can't tell what you need to change without staring at the source code myself in detail and trying it.

    You're right that I'd missed the important detail that you were already compressing the data. In that case, compressing again won't help significantly; compression is not magic. If it could have done better, it would have done so the first time around...
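
    If you want to convince yourself, here is a quick experiment with Compress::Zlib (assuming it is installed; the sizes are illustrative). Random bytes stand in for already-zipped, incompressible data:

        use strict;
        use warnings;
        use Compress::Zlib;

        # 100 KB of random bytes behaves like already-compressed data
        my $data  = join "", map { chr int rand 256 } 1 .. 100_000;
        my $once  = compress($data);
        my $twice = compress($once);

        printf "raw: %d  once: %d  twice: %d\n",
               length $data, length $once, length $twice;
        # "once" comes out about as big as "raw", and "twice" is
        # typically slightly *larger* than "once" (zlib header overhead)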

    I don't understand your objection to splitting one large field across rows. Binary data is not a problem: you use substr to turn the one string into several pieces, then insert them into a table with three fields: an external ID so that you can join to other tables, a sequence number so that you know what order to put the pieces back in, and a data field. To read the value back, fetch the pieces in sequence order and use join to put them back together (a sketch follows below).

    This strategy works with arbitrary binary data of arbitrary size, as long as your database handles binary data and your memory can handle the string manipulations.
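
    For what it's worth, here is a minimal sketch of that strategy in Perl. The table name (DOCUMENT_CHUNKS), column names, connection string, and chunk size are all made up for illustration; adjust them to your schema:

        use strict;
        use warnings;
        use DBI;

        # Hypothetical chunk table:
        #   CREATE TABLE DOCUMENT_CHUNKS (
        #       DOC_ID INTEGER,   -- external ID, joins to other tables
        #       SEQ    INTEGER,   -- order of the pieces
        #       DATA   BLOB
        #   );

        my $dbh = DBI->connect("dbi:InterBase:db=mydb.gdb", "SYSDBA", "masterkey",
                               { RaiseError => 1, AutoCommit => 0 });

        my $chunk_size = 500_000;   # stays well under the 1 MB per-field limit

        # store: carve one large binary string into fixed-size pieces
        sub store_blob {
            my ($doc_id, $data) = @_;
            my $ins = $dbh->prepare(
                "INSERT INTO DOCUMENT_CHUNKS (DOC_ID, SEQ, DATA) VALUES (?, ?, ?)");
            my $seq = 0;
            for (my $off = 0; $off < length $data; $off += $chunk_size) {
                $ins->execute($doc_id, $seq++, substr($data, $off, $chunk_size));
            }
            $dbh->commit;
        }

        # fetch: read the pieces back in SEQ order and join them
        sub fetch_blob {
            my ($doc_id) = @_;
            my $sel = $dbh->prepare(
                "SELECT DATA FROM DOCUMENT_CHUNKS WHERE DOC_ID = ? ORDER BY SEQ");
            $sel->execute($doc_id);
            my @pieces;
            while (my ($piece) = $sel->fetchrow_array) {
                push @pieces, $piece;
            }
            return join "", @pieces;
        }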

      Thanks for the effort, but as I said, I would rather change the buffer size than split and reassemble the data across a new table.

      Btw, I got info from the author of DBD::InterBase (Edwin Pratomo): the limit is hardcoded, so you just change the buffer size MAX_SAFE_BLOB_LENGTH in dbdimp.h from 1000000 to the desired value and re-compile:

      . . .
      #define MAX_SAFE_BLOB_LENGTH (1000000)
      . . .
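
      For example, to raise the limit to 10 MB (in the same decimal convention the module uses), that line would become:

          #define MAX_SAFE_BLOB_LENGTH (10000000)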
      I did exactly that, and it worked!

      Regards, - Pet.