in reply to Best way to match a hash with large CSV file
A hash is an O(1) lookup data structure. Using SQL to look up each of its keys in a large list is ludicrous.
Forget DBI & SQL.
Read the lines from the file one at a time and look up the appropriate field(s) in the hash.
> The problem with this approach is that the CSV file (40MB) is loaded to DBI engine 5,000 times and takes hours to process.
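The line-by-line lookup suggested above can be sketched as follows. This is a hypothetical example, not the poster's code: the three-key `%hash` stands in for the 5,000-key hash from the question, and the in-memory `$csv` string stands in for the 40MB file (in real use you would `open` the file itself and read it the same way, one line at a time).

```perl
use strict;
use warnings;

# Hypothetical stand-in for the 5,000-key hash from the question.
my %hash = map { $_ => 1 } qw( alpha gamma );

# Stand-in for the 40MB CSV file; an in-memory filehandle reads it
# exactly the way a real file would be read.
my $csv = "alpha,1,foo\nbeta,2,bar\ngamma,3,baz\n";
open my $fh, '<', \$csv or die $!;

my $matches = 0;
while ( my $line = <$fh> ) {
    chomp $line;
    # Naive split on commas; use Text::CSV_XS instead if fields can
    # contain quoted commas or embedded newlines.
    my ( $key ) = split /,/, $line;
    if ( exists $hash{ $key } ) {
        ++$matches;
        print "matched: $line\n";
    }
}
close $fh;
print "$matches rows matched\n";
```

Each row costs one `split` and one O(1) `exists` test, so the whole file is read exactly once, rather than being reloaded into a database engine for every key.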
Looking up 120,000 items in a hash containing 5000 keys just took me 0.035816 seconds.
```perl
# Run at an interactive prompt; time() here had sub-second resolution
# (i.e. Time::HiRes was loaded in the session).
$hash{ int rand( 120000 ) } = 1 for 1 .. 5000;

$t = time;
exists $hash{ $_ } and ++$c for 1 .. 120000;
print $c;
printf "%.6f\n", time() - $t;

# Output: 4647 0.035816
```
You'd be lucky to get an error message from DBI in that time.
Re^2: Best way to match a hash with large CSV file
by alphavax (Initiate) on Nov 03, 2011 at 23:48 UTC