I'd have to hear more about your data structure to know the best way to optimize things, but here are some possibilities based on my secret-decoder-ring knowledge of DBD::CSV (I am its maintainer, and although what's below is in the docs, it may not be evident):
- If you have enough memory to hold two of the tables in memory at once, use DBD::CSV's explicit joins to search the tables. The joins are based on hashes, so using them will gain some of the speed mentioned in the other hash-based responses to your question. Something like SELECT baz.foo,qux.bar FROM baz NATURAL INNER JOIN qux ... will automatically create hashes of the two tables and avoid multiple lookups (there is a sketch after this list).
- Or, if you are going to be searching on only a single unique id field, try using DBI::SQL::Nano as DBD::CSV's SQL engine. Nano comes with recent versions of DBI and, although much more limited in the SQL it supports, can be *much* faster for single-field queries. See the Nano docs for how to use it with DBD::CSV.
- As others have pointed out, you definitely want to avoid looping through the files more than once. That can be accomplished with DBD::CSV even without either of the two tricks above. To know what to recommend though, I'd need to see your actual table structure and have a better idea of what you're trying to do (for example, does the list of clients come from a query on the first table, or do you have the list of clients in advance?). Feel free to post more details if you want more specific suggestions.
- Even if you go with a hand-rolled hash solution, you may want to use DBD::CSV to build the hashes using DBI's selectall_hashref($sql, $key_field) or selectall_arrayref($sql, { Slice => {} }) methods (also shown in the sketch below).
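For what it's worth, here is a rough sketch pulling the join and selectall ideas together. The f_dir path, the table names (clients, transactions) and the column names are invented placeholders; by default DBD::CSV maps a table name to a file of the same name in f_dir. The DBI_SQL_NANO line is commented out because Nano only handles simple single-table queries, not the join shown.

use strict;
use warnings;
use DBI;

# To try DBI::SQL::Nano instead of SQL::Statement, set this *before*
# DBD::CSV is loaded (only worthwhile for simple single-table queries):
# BEGIN { $ENV{DBI_SQL_NANO} = 1 }

my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
    f_dir      => '/path/to/csv/files',   # directory holding the CSV files
    RaiseError => 1,
}) or die $DBI::errstr;

# Explicit join: the two tables are hashed internally, so each file
# is read only once.
my $joined = $dbh->selectall_arrayref(
    'SELECT clients.lastname, transactions.amount
       FROM clients NATURAL INNER JOIN transactions',
    { Slice => {} },                      # return an array of hashrefs
);

# Or build your own lookup hash from a single query:
my $by_phone = $dbh->selectall_hashref('SELECT * FROM clients', 'phone');
print $by_phone->{'555-1234'}{lastname}, "\n"
    if exists $by_phone->{'555-1234'};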
This runs like a dog.
Any ideas on how I can speed things up?
A couple.
- Buy more memory/faster hardware :)
- Put the CSV files into a real database (MySQL, SQLite, ... not DBD::CSV)
Is there an easy/faster alternative to DBD::CSV?
I've never seen anything like it.
Can you build a hash of the clients in the search files and the rows they occur in? Use this to pull the data for each client you have to look up. This way there is less sorting, but the memory requirement would be high if the 30-60K client list is mostly unique clients (it could be very efficient, though, if there are a lot of repeat clients).
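Something along these lines, perhaps; a rough sketch assuming Text::CSV_XS is available and using made-up column positions for the client key (lastname plus phone):

use strict;
use warnings;
use Text::CSV_XS;

my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });

# One pass over a big file: remember which row numbers each client
# key appears on (a key can repeat, so keep a list per key).
my %rows_for;
open my $fh, '<', 'big_file.csv' or die "big_file.csv: $!";
my $row_num = 0;
while (my $row = $csv->getline($fh)) {
    $row_num++;
    my ($lastname, $phone) = @{$row}[0, 3];   # made-up column positions
    push @{ $rows_for{ $lastname . $phone } }, $row_num;
}
close $fh;

# Then, for each of the 5000 clients, pull only the rows noted above,
# e.g. my @hits = @{ $rows_for{$key} || [] };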
Cheers, R.
# Load the 5000-record file into a hash keyed on some unique
# field (like lastname + phone number).
my %client;
foreach my $rec (@$client_recs) {
    $client{ $rec->{lastname} . $rec->{phone} } = $rec;
}

# Loop through each big file once
# and check the client hash for a match.
foreach my $rec (@$big_file) {
    my $key = $rec->{lastname} . $rec->{phone};
    if ( exists $client{$key} ) {
        # We've got a potential match!
        # Now compare each individual field.
        if ( $client{$key}->{firstname} eq $rec->{firstname} && ... ) {
            # blah blah blah
        }
    }
}

# This way we loop through the big files only once
# and do a keyed in-memory search of the small file.
#
# The other way around would loop through each big file
# 5000 times.
A good idea. I will let you know how I get on in a week's time.
I would suggest DBD::SQLite2. It is very fast and lightweight and can do your sort ...
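For example, something like the sketch below: load the CSV file into an SQLite database once, index it, and then let SQL do the matching and sorting. This uses DBD::SQLite (the successor to DBD::SQLite2) plus Text::CSV_XS, and the table and column names are made up.

use strict;
use warnings;
use DBI;
use Text::CSV_XS;

my $dbh = DBI->connect('dbi:SQLite:dbname=clients.db', '', '',
    { RaiseError => 1, AutoCommit => 0 });

$dbh->do('CREATE TABLE IF NOT EXISTS big (lastname TEXT, phone TEXT, amount REAL)');

# Load the CSV once (assumes three columns matching the table above).
my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });
open my $fh, '<', 'big_file.csv' or die "big_file.csv: $!";
my $ins = $dbh->prepare('INSERT INTO big VALUES (?, ?, ?)');
while (my $row = $csv->getline($fh)) {
    $ins->execute(@$row);
}
close $fh;
$dbh->do('CREATE INDEX IF NOT EXISTS big_key ON big (lastname, phone)');
$dbh->commit;

# Indexed lookups and sorting are then cheap:
my $hits = $dbh->selectall_arrayref(
    'SELECT * FROM big WHERE lastname = ? AND phone = ? ORDER BY amount',
    { Slice => {} }, 'Smith', '555-1234',
);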