in reply to Re: Re: InterBase MEMO/BLOB fields, data with size > 1MB
in thread InterBase MEMO/BLOB fields, data with size > 1MB

I can't tell what you need to change without staring at the source code myself in detail and trying it.

You're right that I'd missed the important detail that you were already compressing the data. In that case, compressing again won't help significantly; compression is not magic. If it could have done better the first time around, it would have...

I don't understand your objection to splitting one large field across rows. The fact that it is binary data is fine. Use substr to turn one string into several pieces and insert them into a table with three fields: an external ID so that you can join to other tables, a sequence number so that you know what order to put the pieces back in, and a data field. Then join the pieces back together when you read them.

This strategy will work with binary data of arbitrary size, as long as your database handles binary data and your memory handles the string manipulations.
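
Here is a minimal sketch of that strategy using DBI. The table name blob_chunks, its columns, and the 900,000-byte chunk size are assumptions made up for illustration; adjust them to your schema and to whatever limit your driver actually enforces.

    use strict;
    use warnings;
    use DBI;

    # Assumed chunk table (names are illustrative only):
    #   CREATE TABLE blob_chunks (
    #       doc_id  INTEGER NOT NULL,
    #       seq     INTEGER NOT NULL,
    #       data    BLOB,
    #       PRIMARY KEY (doc_id, seq)
    #   );

    my $CHUNK_SIZE = 900_000;    # stay under the driver's 1 MB limit

    # Store: carve one large binary string into ordered pieces.
    sub store_blob {
        my ($dbh, $doc_id, $blob) = @_;
        my $sth = $dbh->prepare(
            'INSERT INTO blob_chunks (doc_id, seq, data) VALUES (?, ?, ?)');
        my $seq = 0;
        for (my $pos = 0; $pos < length $blob; $pos += $CHUNK_SIZE) {
            $sth->execute($doc_id, $seq++, substr($blob, $pos, $CHUNK_SIZE));
        }
    }

    # Fetch: read the pieces back in sequence order and join them.
    sub fetch_blob {
        my ($dbh, $doc_id) = @_;
        $dbh->{LongReadLen} = $CHUNK_SIZE + 1;   # allow each full chunk back
        my $sth = $dbh->prepare(
            'SELECT data FROM blob_chunks WHERE doc_id = ? ORDER BY seq');
        $sth->execute($doc_id);
        my @pieces;
        while (my ($piece) = $sth->fetchrow_array) {
            push @pieces, $piece;
        }
        return join '', @pieces;
    }

Usage would be store_blob($dbh, 42, $big_string) on the way in and $big_string = fetch_blob($dbh, 42) on the way out; the seq column guarantees the pieces are reassembled in order regardless of physical row order.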


Re: Re: Re: Re: InterBase MEMO/BLOB fields, data with size > 1MB
by pet (Novice) on Jun 01, 2004 at 11:31 UTC

    Thanks for the effort, but as I said, I prefer changing the buffer size to deflating/inflating the data into a new table.

    Btw, I got info from the author of DBD-InterBase (Edwin Pratomo): the limit is hardcoded, so just change the buffer size MAX_SAFE_BLOB_LENGTH (in dbdimp.h) from 1000000 to the desired value and re-compile:

        ...
        #define MAX_SAFE_BLOB_LENGTH (1000000)
        ...
    So I did, and it worked!
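
    A quick way to confirm the rebuilt driver accepts larger BLOBs is to round-trip something bigger than the old 1 MB limit. This is only a sketch; the DSN, credentials, and the blob_test table are assumptions, not part of the original post.

        use strict;
        use warnings;
        use DBI;

        # Assumed DSN, credentials, and table -- adjust for your setup.
        my $dbh = DBI->connect('dbi:InterBase:db=/data/test.gdb',
                               'sysdba', 'masterkey',
                               { RaiseError => 1 });

        my $payload = 'x' x 2_000_000;              # ~2 MB, over the old limit
        $dbh->{LongReadLen} = length($payload) + 1; # let the full BLOB come back

        $dbh->do('INSERT INTO blob_test (id, data) VALUES (?, ?)',
                 undef, 1, $payload);

        my ($back) = $dbh->selectrow_array(
            'SELECT data FROM blob_test WHERE id = ?', undef, 1);
        print length($back) == length($payload) ? "ok\n" : "size mismatch\n";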

    Regards, - Pet.