in reply to large file and processing efficiency
Populate a hash with the distinct existing IDs rather than building a regex. Then use some variant of exists $existing{$id} to decide whether to ignore the line.
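A minimal sketch of that approach, assuming a comma-separated input where the ID is the first field (the sample IDs and data here are made up for illustration; real code would read the large file directly):

```perl
use strict;
use warnings;

# Distinct existing IDs (in practice, pulled once from the RDBMS).
my %existing = map { $_ => 1 } qw(1001 1002 1005);

# Sample input standing in for the large file.
my $input = <<'END';
1001,already in the database
1003,new record
1005,already in the database
1004,new record
END

open my $fh, '<', \$input or die $!;
my @to_insert;
while (my $line = <$fh>) {
    chomp $line;
    my ($id) = split /,/, $line;       # assume the ID is the first field
    next if exists $existing{$id};     # O(1) hash lookup, no regex scan
    push @to_insert, $line;            # only genuinely new records
}
close $fh;

print "$_\n" for @to_insert;
```

The hash lookup stays constant-time no matter how many IDs you have, where a single giant alternation regex degrades as the ID list grows.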
Addendum: if you have a really large number of IDs, you might want to populate a hash on disk (using, say, DB_File or the like) rather than slurping them all into RAM. That would also let you keep the existing IDs around between runs (insert new keys into the hash as you insert rows into the database) and save you from extracting the current active IDs from your RDBMS on every run. The trade-off is the risk of getting out of sync with the RDBMS's contents, since the data is now duplicated in more than one place, so keep that in mind.
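A sketch of the on-disk variant with a tied DB_File hash; the exists() test is unchanged, only the storage moves to disk (the seeded ID and the temp-file handling are illustrative; a real script would tie a fixed path and keep the file between runs rather than deleting it):

```perl
use strict;
use warnings;
use Fcntl;                     # O_RDWR, O_CREAT
use DB_File;                   # ties a hash to a Berkeley DB file
use File::Temp qw(tempfile);

# Use a throwaway file here; persistence comes from reusing a fixed path.
my ($tmp_fh, $dbfile) = tempfile();
close $tmp_fh;
unlink $dbfile;

my %existing;
tie %existing, 'DB_File', $dbfile, O_RDWR | O_CREAT, 0666, $DB_HASH
    or die "Cannot tie $dbfile: $!";

$existing{1001} = 1;                  # seed as if left over from a prior run

my @to_insert;
for my $id (qw(1001 1003)) {
    next if exists $existing{$id};    # same exists() test as the in-RAM hash
    push @to_insert, $id;
    $existing{$id} = 1;               # record it as you insert into the RDBMS
}

untie %existing;
unlink $dbfile;

print "@to_insert\n";
```

Only 1003 survives the filter; 1001 is skipped because it was already in the tied hash.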
Replies are listed 'Best First'.

- Re^2: large file and processing efficiency by chiragshukla (Initiate) on Dec 07, 2005 at 20:32 UTC
- Re^2: large file and processing efficiency by chiragshukla (Initiate) on Dec 07, 2005 at 19:30 UTC