in reply to SQLite in memory database to scalar

In theory it should be possible to copy from one in-memory database to another (via the SQLite Online Backup API), but it doesn't seem to be implemented in DBD::SQLite. Also, since the Backup function apparently works by doing a live copy from one SQLite database into another, you'd essentially end up with a second in-memory database. However, I suspect that's not actually what you want when you say you want a "scalar" - do you want a binary representation of the database in a scalar? In that case, I suspect you'll have to write the database out to a file. (BTW, Perl supports in-memory files, see open, although it doesn't look like you can pass a filehandle to sqlite_backup_to_file.)
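For reference, an in-memory file in Perl is just a filehandle opened on a reference to a scalar - a minimal sketch (though, as noted, this doesn't help with sqlite_backup_to_file, which insists on a filename):

```perl
use strict;
use warnings;

# Open a write filehandle whose backing "file" is a scalar in memory
my $buffer = '';
open my $fh, '>', \$buffer or die "Cannot open in-memory file: $!";
print $fh "hello, world\n";
close $fh;

# Everything written to the filehandle is now in $buffer
print $buffer;
```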

Do you actually know that it is significantly slower to just read the file back in from disk? Have you tried it? And what else do you want to do with the copied database?

Replies are listed 'Best First'.
Re^2: SQLite in memory database to scalar
by Rodster001 (Pilgrim) on Mar 19, 2014 at 22:02 UTC
    Once the database is created and populated in memory, I want to compress it, and then print it to stdout (it's a response to an http/rest request). So, it never needs to touch the file system.

    As it is now, I have to write it out from SQLite, compress it, read it back in, and then print it to stdout. So, yes: if I could eliminate the write-to-disk step, it would really speed things up (especially under heavy load).

      Well, aside from extending DBD::SQLite, how about writing the file to an in-memory filesystem, and using something like IO::Compress::Gzip to compress as you output the file instead of calling an external tool (as it sounds like you're doing now)?
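      Something along these lines - a sketch, with a scalar standing in for the dumped database file - shows IO::Compress::Gzip compressing entirely in memory, no external tool needed:

```perl
use strict;
use warnings;
use IO::Compress::Gzip     qw(gzip   $GzipError);
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);

# A scalar stands in for the file written by sqlite_backup_to_file
my $data = "pretend this is the SQLite database image\n";

# Compress from one scalar into another, all in memory
gzip \$data => \my $compressed
    or die "gzip failed: $GzipError\n";

# For the HTTP response you would instead binmode STDOUT and
# print $compressed - no external gzip process involved

# Round trip, just to show the data survives
gunzip \$compressed => \my $roundtrip
    or die "gunzip failed: $GunzipError\n";
print length($compressed), " compressed bytes\n";
```

      gzip() also accepts a filename as input, so the dump file written by sqlite_backup_to_file could be streamed through it directly.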

      Also, in terms of speed, benchmarks speak much louder than words ;-)

        That is what I am trying to do.

        Can you provide an example of how that could be done (an in-memory filesystem)? I tried passing it a filehandle (which didn't work); then I looked at the code and saw that sqlite_backup_to_file is part of the SQLite C API: it takes a filename as an argument, opens it, writes to it, and closes it.

        True enough about benchmarks :) but I do know that file operations are slower than in-memory ones.
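        By "in-memory filesystem" I mean something like a tmpfs mount: on most Linux systems /dev/shm is RAM-backed, so giving sqlite_backup_to_file a path under it satisfies the C API's filename requirement while the dump never touches the physical disk. A rough sketch (the schema is made up, and the /dev/shm path is a Linux-specific assumption):

```perl
use strict;
use warnings;
use DBI;
use File::Temp qw(tempfile);

# On most Linux systems /dev/shm is a RAM-backed tmpfs, so this
# "file" never touches the physical disk (Linux-specific assumption)
my ($tmp_fh, $dumpfile) =
    tempfile(DIR => '/dev/shm', SUFFIX => '.sqlite');
close $tmp_fh;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });
$dbh->do('CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)');  # made-up schema
$dbh->do(q{INSERT INTO t (v) VALUES ('example')});

# sqlite_backup_to_file gets the filename it insists on,
# but the target file lives in RAM
$dbh->sqlite_backup_to_file($dumpfile);
$dbh->disconnect;

my $size = -s $dumpfile;    # non-zero: the dump really exists
# ... compress $dumpfile here, then clean up
unlink $dumpfile;
print "dump was $size bytes\n";
```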
      Once the database is created and populated in memory, I want to compress it, and then print it to stdout (it's a response to an http/rest request).

      I don't think SQLite is a good transport format, partly because it is binary, and partly because the file format may change (so your client would need to understand different SQLite formats). Why don't you use a directly readable format like JSON, XML or YAML? None of those formats will change, all of them can simply be stored in a Perl scalar, and all of them compress very well.
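      For example, a sketch with an invented result set: serializing to JSON (JSON::PP is in core) and compressing it happens entirely in Perl scalars, with no temporary file:

```perl
use strict;
use warnings;
use JSON::PP;                               # in core since Perl 5.14
use IO::Compress::Gzip qw(gzip $GzipError);

# Hypothetical result set, shaped like selectall_arrayref(..., { Slice => {} })
my $rows = [
    { id => 1, name => 'alpha' },
    { id => 2, name => 'beta'  },
];

# Serialize to a plain scalar, then compress that scalar in memory
my $json = JSON::PP->new->canonical->encode($rows);
gzip \$json => \my $compressed
    or die "gzip failed: $GzipError\n";

# $compressed is ready to be printed as the HTTP response body
printf "%d bytes of JSON, %d bytes compressed\n",
       length $json, length $compressed;
```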

      Alexander

      --
      Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)