in reply to SQLite database (or any) file in main memory!

As others have said, there's :memory: (though it has the drawbacks of not allowing more than one connection and of making persistence a pain), or you can create a virtual disk in memory. Refer to your operating system's documentation for how to do this; on Linux you could start with something like:

mount -t tmpfs -o size=<size>m none <mountpoint>
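(Substitute a size in megabytes and a mount point of your choosing, then point your dbname at a file under that mount point.) If you go the :memory: route instead and the pain of persisting is what worries you, note that DBD::SQLite exposes SQLite's online backup API, which can copy the in-memory database to an ordinary file and back without reopening the connection. A minimal sketch, assuming a reasonably recent DBD::SQLite (the backup methods are not in very old versions) and a made-up snapshot path:

#!/usr/bin/perl
use warnings;
use strict;
use DBI;

# Work entirely in memory...
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
    { RaiseError => 1 });
$dbh->do(q{create table test (id integer, text varchar)});
$dbh->do(q{insert into test values (1, 'hello')});

# ...then persist a snapshot to disk when you're done
# (the file name here is just an example).
$dbh->sqlite_backup_to_file('/tmp/snapshot.sqlite');

# Later, a fresh in-memory handle can be seeded from that snapshot.
my $dbh2 = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
    { RaiseError => 1 });
$dbh2->sqlite_backup_from_file('/tmp/snapshot.sqlite');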

In practice, though, I'd suggest you thoroughly benchmark any such solution (and benchmark it on your production system, or on a system that resembles it as closely as possible). You may find that you can achieve almost the same speed by designing your queries intelligently. The main speed hit SQLite takes is the file work it does at the beginning and end of every transaction (creating the journal and syncing to disk). So if you can perform all your work in a single transaction, rather than paying for a transaction per query, a disk-based db will be almost as fast as a RAM-based one. For example, the following quick-and-dirty benchmark:

#!/usr/bin/perl
use warnings;
use strict;

use DBI;
use Benchmark 'cmpthese';

# Three backing stores: an ordinary file, SQLite's built-in :memory:
# database, and a file on a tmpfs ramdisk.
my %db = (
    "on disk" => "/tmp/sqlitetest",
    "in mem"  => ":memory:",
    "ramdisk" => "/mnt/ramfs/sqlitetest",
);
my $dbh;

for my $current_db (keys %db) {
    print "## Testing backing store $current_db: ##\n";
    $dbh = DBI->connect("dbi:SQLite:dbname=$db{$current_db}", "", "")
        or die "Couldn't create $current_db db";
    $dbh->do(qq{create table test (id integer, text varchar, nuthertext varchar)})
        or die $dbh->errstr();

    for my $queries (10, 100, 300) {
        print "$queries actions per connection:\n";
        cmpthese(-5, {
            # One implicit transaction per statement.
            "no transact." => sub {
                my $sth = $dbh->prepare(q(insert into test values(?,?,?)));
                for my $i (1 .. $queries) {
                    $sth->execute($i, 'No business', 'like perl business');
                    my @row_ary = $dbh->selectrow_array('select count(*) from test');
                }
                $sth->finish();
                $dbh->do(qq{delete from test});
            },
            # The same work wrapped in a single transaction.
            "transact." => sub {
                $dbh->begin_work();
                my $sth = $dbh->prepare(q(insert into test values(?,?,?)));
                for my $i (1 .. $queries) {
                    $sth->execute($i, 'No business', 'like perl business');
                    my @row_ary = $dbh->selectrow_array('select count(*) from test');
                }
                $sth->finish();
                $dbh->do(qq{delete from test});
                $dbh->commit();
            },
        });
    }
    $dbh->disconnect();
    unlink $db{$current_db};
}
On my system this gives:
## Testing backing store ramdisk: ##
10 actions per connection:
               Rate no transact.    transact.
no transact. 372/s           --         -37%
transact.    586/s          58%           --
100 actions per connection:
                Rate no transact.    transact.
no transact. 37.1/s           --         -44%
transact.    66.5/s          79%           --
300 actions per connection:
                Rate no transact.    transact.
no transact. 11.4/s           --         -44%
transact.    20.3/s          78%           --
## Testing backing store in mem: ##
10 actions per connection:
               Rate    transact. no transact.
transact.    621/s           --          -1%
no transact. 630/s           1%           --
100 actions per connection:
                Rate no transact.    transact.
no transact. 66.1/s           --          -0%
transact.    66.4/s           0%           --
300 actions per connection:
                Rate no transact.    transact.
no transact. 20.2/s           --          -0%
transact.    20.2/s           0%           --
## Testing backing store on disk: ##
10 actions per connection:
               Rate no transact.    transact.
no transact. 42.5/s          --          -84%
transact.     271/s        538%            --
100 actions per connection:
                Rate no transact.    transact.
no transact. 4.65/s           --         -92%
transact.    59.4/s        1177%           --
300 actions per connection:
                Rate no transact.    transact.
no transact. 1.58/s           --         -92%
transact.    19.6/s        1141%           --
As you can see, the more actions you cram into a single transaction, the smaller the performance difference between memory-based and disk-based storage becomes, to the point of being negligible (and I'd expect the effect to become even more pronounced as the database grows). Other caveats apply as well, e.g. if the database consumes too much memory your machine will start swapping. That's why it's important to benchmark.
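To make the batching concrete: in application code the pattern is simply to open one transaction around the whole batch of statements instead of letting AutoCommit wrap each one individually. A minimal sketch, with a made-up table and database path:

#!/usr/bin/perl
use warnings;
use strict;
use DBI;

# Hypothetical on-disk database and table, just to illustrate the pattern.
my $dbh = DBI->connect("dbi:SQLite:dbname=/tmp/app.sqlite", "", "",
    { RaiseError => 1, AutoCommit => 1 });
$dbh->do(q{create table if not exists log (id integer, msg varchar)});

my $sth = $dbh->prepare(q{insert into log values (?, ?)});

# One transaction around the whole batch instead of one per insert.
$dbh->begin_work();
for my $i (1 .. 1000) {
    $sth->execute($i, "message $i");
}
$dbh->commit();

$sth->finish();
$dbh->disconnect();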