in reply to Re: Index a file with pack for fast access
in thread Index a file with pack for fast access

I would very sincerely like to know, BrowserUK, why you advocate constructing an index, as it were, “manually,” instead of either (a) putting the data into (say...) a SQLite database file,

Firstly, I didn't advocate it. I simply supplied an answer to the OP's question.

But there are several circumstances under which I might use it (and have used it).

SQLite is obviously more powerful, but if you do not need that power, it is of no benefit. And it comes at the cost of flexibility unless you drive it from Perl, at which point you're populating it via the DBI interface, with the inherent slowdown that entails.
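For concreteness, here is a minimal sketch of the kind of hand-rolled offset index under discussion, assuming a line-oriented data file; the file names and the fixed 8-byte 'Q' record format are my own choices, not from the thread, and pack 'Q' needs a 64-bit Perl:

#!/usr/bin/perl
# Sketch: build a packed index of byte offsets for data.txt, then use it
# for constant-time random access to any record. Names are placeholders.
use strict;
use warnings;

my $datafile  = 'data.txt';
my $indexfile = 'data.idx';

# Pass 1: write the byte offset of every line as a packed 64-bit unsigned.
open my $in,  '<', $datafile  or die "open $datafile: $!";
open my $idx, '>', $indexfile or die "open $indexfile: $!";
binmode $idx;
until ( eof $in ) {
    print {$idx} pack 'Q', tell $in;   # offset of the line about to be read
    my $line = <$in>;                  # consume the line
}
close $idx;

# Random access: record N costs one seek into the fixed-width index and
# one seek into the data file -- no database required.
open my $ix, '<', $indexfile or die "open $indexfile: $!";
binmode $ix;

sub fetch_record {
    my ($n) = @_;                      # 0-based record number
    seek $ix, $n * 8, 0 or die "seek index: $!";
    read( $ix, my $buf, 8 ) == 8 or die "record $n out of range";
    seek $in, unpack( 'Q', $buf ), 0 or die "seek data: $!";
    return scalar <$in>;
}

print fetch_record(12_345);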


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

The start of some sanity?

Replies are listed 'Best First'.
Re^3: Index a file with pack for fast access
by sundialsvc4 (Abbot) on Dec 21, 2011 at 12:48 UTC

    Interesting. (Upvoted.) Of course it is understood that you are answering, not advocating, but I found the answer interesting and informative.

    As an aside, one characteristic of SQLite that “bit me bad” at first is the way that this system handles transactions. Basically, you must have one, because if you don’t, SQLite will physically verify every single disk write by reading the information again. Which certainly can result in the “hours or days” concern, and then rather dramatically relieve that concern. I admit that I tend towards the use of SQL-based systems mainly so that I can subsequently run queries against them. Perhaps I do not use hand-built searching techniques enough. Thanks for your example.
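    A minimal sketch of that transaction point, assuming DBD::SQLite; the database file, table, and row count are placeholders of mine. Without an explicit transaction every INSERT is its own autocommitted write, so batching them into one transaction removes the per-row overhead:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=test.db', '', '',
        { RaiseError => 1, AutoCommit => 1 } );

    $dbh->do('CREATE TABLE IF NOT EXISTS t ( k INTEGER, v TEXT )');

    my $sth = $dbh->prepare('INSERT INTO t ( k, v ) VALUES ( ?, ? )');

    $dbh->begin_work;                     # one transaction for the whole batch
    $sth->execute( $_, "value $_" ) for 1 .. 100_000;
    $dbh->commit;                         # one sync instead of 100,000

    $dbh->disconnect;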

      As an aside, one characteristic of SQLite that “bit me bad” at first is the way that this system handles transactions. Basically, you must have one, because if you don’t, SQLite will physically verify every single disk write by reading the information again. Which certainly can result in the “hours or days” concern, and then rather dramatically relieve that concern.

      Transactions aren't involved when using SQLite's bulk loader. The syntax is simply:

      CREATE TABLE onegb ( alpha varchar, no varchar, hex varchar, bin varchar );
      .separator ","
      .import file table

      But if you do that alone on a CSV file containing 16 million records, you'll wait days. Try it for yourself.
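      For anyone who does want to try it, a throwaway generator along these lines would produce a suitable test file; the row format is my guess at the onegb schema above (a label plus decimal, hex, and binary renderings of the same number):

      use strict;
      use warnings;

      # Writes 16 million CSV records; '.import file table' above reads 'file'.
      open my $out, '>', 'file' or die "open: $!";
      for my $n ( 1 .. 16_000_000 ) {
          printf {$out} "rec%d,%d,%x,%b\n", $n, $n, $n, $n;
      }
      close $out;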

      And doing it via SQL inserts, even with transactions, will take even longer. Again, try it for yourself.

      However, if you precede the .import with the appropriate set of seven PRAGMA commands, the entire import takes just over 2 minutes. But finding/working out/remembering those seven pragmas is non-trivial.


      With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

      The start of some sanity?

        http://erictheturtle.blogspot.com/2009/05/fastest-bulk-import-into-sqlite.html

        commands.txt

        .echo ON
        .read create_table_without_pk.sql
        PRAGMA cache_size = 400000;
        PRAGMA synchronous = OFF;
        PRAGMA journal_mode = OFF;
        PRAGMA locking_mode = EXCLUSIVE;
        PRAGMA count_changes = OFF;
        PRAGMA temp_store = MEMORY;
        PRAGMA auto_vacuum = NONE;
        .separator "\t"
        .import a_tab_seprated_table.txt mytable
        BEGIN;
        .read add_indexes.sql
        COMMIT;
        .exit

        sqlite3 mydb.db < commands.txt