Speed is going to depend heavily on the disk(s).
Using SQLite, what do you call slow? Could you please mention your hard disk type along with the inserts per second you achieved?
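If it helps to calibrate the numbers, here is a rough DBI/DBD::SQLite sketch for measuring insert rate with a single transaction around the whole batch. The database file, table and column names are invented here purely for illustration, not taken from your program:

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use DBI;
    use Time::HiRes qw(time);

    # invented db/table/column names, just for timing inserts
    my $dbh = DBI->connect("dbi:SQLite:dbname=test.db", "", "",
                           { RaiseError => 1, AutoCommit => 0 });
    $dbh->do("create table if not exists words (word text)");

    my $sth = $dbh->prepare("insert into words (word) values (?)");

    my $n     = 1_000_000;
    my $start = time;
    for my $i (1 .. $n) {
        $sth->execute("word$i");
    }
    $dbh->commit;    # one transaction around the whole batch
    printf "%.0f inserts/sec\n", $n / (time() - $start);
    $dbh->disconnect;

Wrapping the inserts in one transaction (AutoCommit off) matters a lot for SQLite; per-statement commits are usually what makes it look slow.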
PostgreSQL might be an option, even if it isn't as delightfully simple as SQLite.
For bulk loading, Postgres has COPY, and I used it with a version of your program:
    $ cat ./generate_words.pl
    #!/bin/env perl
    use strict;

    my $numofwords = 50_000_000;

    for (my $i = 0; $i < $numofwords; ++$i) {
        my @chars = ( "A" .. "Z", "a" .. "z", 0 .. 9, qw(! @ $ % ^ & * +) );
        my $rin = join("", @chars[ map { rand @chars } ( 1 .. 5 ) ]);
        print $rin, "\n";
    }
    $ time perl ./generate_words.pl > words.txt

    real    7m54.686s

    $ time ( < words.txt psql -d test -c "
        drop table if exists words;
        create table words (word text, id serial);
        copy words(word) from stdin csv;
    " )

    real    1m16.223s
So, generating the data took 8 minutes, loading the 50M rows just over 1 minute, and adding an index another 9 minutes (not shown). This was on a RAID 10 of 8 disks, I think; just SATA.
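For completeness, the index step was nothing special, roughly along these lines (the index name here is my own choice, not from the original run):

    # hypothetical index name, shown only to illustrate the step
    $ time psql -d test -c "create index words_word_idx on words (word);"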