in reply to large file and processing efficiency

Populate a hash with the distinct existing IDs instead of building a regex. Then use something like exists $existing{ $id } to decide whether or not to ignore the line.
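A minimal sketch of that approach, assuming tab-delimited input with the ID in the first field (the sample IDs and data here are made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical list of IDs already in the database.
my @current_ids = qw(1001 1002 1003);

# Build the lookup hash once; each subsequent lookup is O(1),
# unlike matching every line against an ever-growing regex.
my %existing = map { $_ => 1 } @current_ids;

# Stand-in for the large input file: tab-delimited, ID first.
my @input = ( "1002\tfoo\n", "2001\tbar\n", "1003\tbaz\n" );

my @new_lines;
for my $line (@input) {
    my ($id) = split /\t/, $line, 2;
    next if exists $existing{$id};    # already in the DB; skip it
    push @new_lines, $line;
}

print @new_lines;    # only the record with ID 2001 survives
```

In a real run you'd read the lines from a filehandle instead of an array, but the hash lookup is the part that matters.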

Addendum: And if you've got a really large number of IDs you might want to populate a hash-on-disk (using, say, DB_File or the like) rather than slurping them all into RAM. That would also let you keep the existing IDs around between runs (you insert new keys into it as you insert into the database) and save you from having to extract the current active IDs from your RDBMS on every run. But then you run the risk of getting out of sync with the RDBMS' contents and of having the data duplicated in more than one place, so keep that in mind.
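A rough sketch of the hash-on-disk idea with DB_File (the cache filename and ID are hypothetical; DB_File is a core module but needs the Berkeley DB library on the system):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl;      # for the O_CREAT / O_RDWR flags
use DB_File;    # exports $DB_HASH

# Hypothetical filename for the persistent ID cache.
my $cache = 'seen_ids.db';

# Tie the hash to a Berkeley DB file: the keys live on disk rather
# than in RAM, and they persist between runs of the script.
tie my %existing, 'DB_File', $cache, O_CREAT | O_RDWR, 0644, $DB_HASH
    or die "Cannot tie $cache: $!";

my $id = '2001';    # hypothetical new ID from the input file
unless ( exists $existing{$id} ) {
    # ... insert the row into the RDBMS here ...
    $existing{$id} = 1;    # record it so future runs skip it
}

untie %existing;
```

The trade-off mentioned above applies: the tied file is a second copy of the ID set, so if rows are deleted from the RDBMS by some other path, the cache won't know about it.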

Replies are listed 'Best First'.
Re^2: large file and processing efficiency
by chiragshukla (Initiate) on Dec 07, 2005 at 20:32 UTC
    Hello Fletch,

    Using hashes cut the processing time very significantly. Hashes did the trick in 20 seconds!

    Thanks for the nice suggestion.

    Regards,
    Chirag Shukla.
Re^2: large file and processing efficiency
by chiragshukla (Initiate) on Dec 07, 2005 at 19:30 UTC
    Thanks Fletch, Roy,

    How did I forget hashes? Probably because I am new to Perl.

    Good point about hash-on-disk. Will try out different options and will let you know how things worked out. Roy's suggestion of the 'GrandFather' thread is very interesting too.

    Thanks for your suggestions.

    Regards,
    Chirag.