in reply to Speeding up large file processing

If you need to process the whole file, then slurping it (reading the whole thing in one hit) is likely to be faster than reading a line at a time. On the other hand, if you can bail out early after reading only a small portion of the file, that may save a heap of time.
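For instance, a minimal slurp might look like this (assuming the file fits comfortably in memory; the filename is made up):

    use strict;
    use warnings;

    my $file = 'large_file.dat';    # hypothetical filename
    my $data = do {
        open my $fh, '<', $file or die "Can't open $file: $!";
        local $/;                   # undef $/ so <$fh> reads everything at once
        <$fh>;
    };

    # now work on $data in one pass rather than line by line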

It may help to take a look at Memoize for some easy-to-implement caching.
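As a rough sketch of how little it takes (slow_square here is just a stand-in for whatever expensive routine dominates your run time):

    use strict;
    use warnings;
    use Memoize;

    sub slow_square {
        my ($n) = @_;
        sleep 1;                       # stand-in for expensive work
        return $n * $n;
    }

    memoize('slow_square');

    print slow_square(7), "\n";    # takes a second the first time
    print slow_square(7), "\n";    # returned instantly from the cache

Memoize only pays off if the same function gets called repeatedly with the same arguments, so check that first.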

Hardware upgrades may help a little, but improving the algorithm can help a lot.

It may also help to set up a small database alongside ImageFolio to cache some of the information, sidestepping the repeated file-processing overhead that way.
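As a sketch of the idea (the schema, filename and expensive_scan routine are all hypothetical, since I don't know what ImageFolio stores; this assumes DBD::SQLite is available):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=cache.db', '', '',
                           { RaiseError => 1 });

    # a real cache would also store mtime or a checksum to spot stale entries
    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS image_cache (
            path TEXT PRIMARY KEY,
            info TEXT
        )
    });

    sub expensive_scan {
        my ($path) = @_;
        # stand-in for whatever slow per-file work is being repeated
        return "info for $path";
    }

    sub cached_info {
        my ($path) = @_;
        my ($info) = $dbh->selectrow_array(
            'SELECT info FROM image_cache WHERE path = ?', undef, $path);
        return $info if defined $info;

        $info = expensive_scan($path);
        $dbh->do('INSERT OR REPLACE INTO image_cache (path, info) VALUES (?, ?)',
                 undef, $path, $info);
        return $info;
    }

    print cached_info('/images/foo.jpg'), "\n";    # computed and stored
    print cached_info('/images/foo.jpg'), "\n";    # served from the database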


Perl is Huffman encoded by design.