in reply to Best way to match a hash with large CSV file

A hash is an O(1) lookup data structure. Using SQL to look up each of its keys in a large list is ludicrous.

Forget DBI & SQL.

Read the lines from the file one at a time and look up the appropriate field(s) in the hash.
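
A minimal sketch of that approach (the file name, the %lookup contents, and which column holds the key are assumptions for illustration, not details from the original post):

use strict;
use warnings;

# %lookup stands in for the 5,000-key hash already held in memory.
my %lookup = map { $_ => 1 } qw( SITE_A SITE_B SITE_C );

open my $in, '<', 'big.csv' or die "Can't open big.csv: $!";
while ( my $line = <$in> ) {
    chomp $line;
    my @fields = split /,/, $line;

    # One O(1) hash lookup per line, on the assumed key column (the first field).
    print "$line\n" if exists $lookup{ $fields[0] };
}
close $in;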

"The problem with this approach is that the CSV file (40MB) is loaded into the DBI engine 5,000 times and takes hours to process."

Looking up 120,000 items in a hash containing 5,000 keys just took me 0.035816 seconds:

# Build a hash with up to 5,000 random integer keys in the range 0..119999,
# then time 120,000 exists() lookups against it.
$hash{ int rand( 120000 ) } = 1 for 1 .. 5000;;

$t = time;
exists $hash{ $_ } and ++$c for 1 .. 120000;
print $c;
printf "%.6f\n", time() - $t;;

4647 0.035816

You'd be lucky to get an error message from DBI in that time.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^2: Best way to match a hash with large CSV file
by alphavax (Initiate) on Nov 03, 2011 at 23:48 UTC

    Thanks a lot BrowserUk!!!

    As you suggested, I read the CSV file line by line, looked each record up directly in the hash, and completed the task in 6 seconds!

    sub dataextractor {
        my $input  = shift;
        my $output = shift;
        my $order  = shift;    # index of the column used as the hash key
        my @data;

        open( OUT, '>', $output ) or die "Can't open the file $output\n";
        open( IN,  '<', $input )  or die "Can't open the file $input\n";

        while (<IN>) {
            # Pass the header line through unchanged.
            if ( $. == 1 ) {
                print OUT $_;
                next;
            }

            s/\r?\n$//;
            @data = split ',', $_;

            # Keep the record if its second field matches the %site_peakhr
            # entry (a global built elsewhere in the script) for the key
            # found in column $order.
            if ( $data[1] eq $site_peakhr{ $data[$order] } ) {
                print OUT $_ . "\n";
            }
        }    # while

        close(IN);
        close(OUT);
        return $output;
    }
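
    For context, the sub reads a global %site_peakhr hash that is built elsewhere in the script. A minimal sketch of how it might be populated and the sub called; the file names, the "site,peak_hour" layout, and $order = 0 are assumptions for illustration, not details from the original post:

    # Hypothetical setup: build %site_peakhr from the smaller file,
    # one "site,peak_hour" pair per line.
    open( PH, '<', 'peak_hours.csv' ) or die "Can't open peak_hours.csv\n";
    while (<PH>) {
        chomp;
        my ( $site, $peak ) = split ',';
        $site_peakhr{$site} = $peak;
    }
    close(PH);

    # Assume column 0 of the large CSV holds the site key, column 1 the hour.
    dataextractor( 'large.csv', 'filtered.csv', 0 );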