in reply to Re: Re: Merge Purge
in thread Merge Purge
I think the biggest problem with your DB_File approach is that it does one read/write for every record. I had a similar problem with a search index for 5,000,000 books: the thing took around 18 hours to finish. Taking advantage of sorting and working against the current record in memory cut it down to 17 minutes. The sketch below shows the pattern.
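To make the sorting point concrete, here is a minimal Perl sketch, not the code I actually used: records.txt, the tab-delimited layout, and the per-key count are all invented for illustration. Because the input is sorted on its key, all records for one key arrive together and the tied hash is touched once per key instead of once per record:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DB_File;
    use Fcntl;

    tie my %index, 'DB_File', 'index.db', O_RDWR|O_CREAT, 0644, $DB_HASH
        or die "Cannot tie index.db: $!";
    open my $fh, '<', 'records.txt' or die "Cannot open records.txt: $!";

    my ($current_key, $count) = ('', 0);
    while (my $line = <$fh>) {
        chomp $line;
        my ($key) = split /\t/, $line;    # assume the key is the first field
        next unless defined $key;
        if ($key ne $current_key) {
            # one DB write per distinct key, not one per record
            $index{$current_key} = $count if $current_key ne '';
            ($current_key, $count) = ($key, 0);
        }
        $count++;
    }
    $index{$current_key} = $count if $current_key ne '';   # flush the final run
    close $fh;
    untie %index;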
One thing I thought of (I don't know if someone else has done it; I couldn't find anything at the time) was to subclass the DB_File tie and make a hash that wouldn't read and write the disk on every access, but would go through an intermediate cache instead. Caching behavior like that would probably speed things up an order of magnitude when the data is fairly sorted, work about the same for the general case, and still be nice and generic. A rough sketch follows.
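Here is a rough, untested Perl sketch of that idea. The class name CachedDBHash, the batch size, and the flush policy are all made up; and rather than literally subclassing DB_File, it wraps a DB_File-tied hash behind its own tie interface, which gets the same effect (FIRSTKEY/NEXTKEY iteration is omitted to keep it short):

    package CachedDBHash;
    use strict;
    use warnings;
    use DB_File;
    use Fcntl;

    sub TIEHASH {
        my ($class, $file, $max) = @_;
        my %db;
        tie %db, 'DB_File', $file, O_RDWR|O_CREAT, 0644, $DB_HASH
            or die "Cannot tie $file: $!";
        return bless { db => \%db, cache => {}, dirty => {},
                       max => $max || 1024 }, $class;
    }

    sub FETCH {
        my ($self, $key) = @_;
        return $self->{cache}{$key} if exists $self->{cache}{$key};
        my $val = $self->{db}{$key};      # cache miss: go to disk once
        $self->{cache}{$key} = $val;
        return $val;
    }

    sub STORE {
        my ($self, $key, $val) = @_;
        $self->{cache}{$key} = $val;      # write to memory only
        $self->{dirty}{$key} = 1;
        $self->_flush if keys %{ $self->{cache} } >= $self->{max};
    }

    sub _flush {                          # push dirty entries to disk in a batch
        my $self = shift;
        $self->{db}{$_} = $self->{cache}{$_} for keys %{ $self->{dirty} };
        %{ $self->{cache} } = ();
        %{ $self->{dirty} } = ();
    }

    sub EXISTS {
        my ($self, $key) = @_;
        return exists $self->{cache}{$key} || exists $self->{db}{$key};
    }

    sub DELETE {
        my ($self, $key) = @_;
        delete $self->{cache}{$key};
        delete $self->{dirty}{$key};
        return delete $self->{db}{$key};
    }

    sub DESTROY { $_[0]->_flush }         # flush whatever is still dirty

    1;

Usage would look like any ordinary tied hash; when the data is fairly sorted, repeated accesses to nearby keys mostly never leave memory:

    tie my %index, 'CachedDBHash', 'index.db', 4096;
    $index{$_}++ for @keys;   # @keys: your record keys; mostly served from cache
    untie %index;             # DESTROY flushes anything left over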
-Lee
"To be civilized is to deny one's nature."