I am currently doing some contract programming for a printing company that receives large text files (anywhere from 15 MB to 150 MB) containing thousands of text statements. My scripts perform various operations on these statement files, the simplest of which pulls names/addresses (at specific line/column positions on each page) and writes them out to a different file.
The scripts are very straightforward (read until a page marker is found, take the name/addr from line #X, dump it to the other file, carry on)... I added a simple percentage counter to show the current progress (bytes read / total bytes in the file).
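For what it's worth, the counter is nothing fancier than bytes consumed so far divided by the file's size. A minimal sketch of just that part, with the filename invented and the real per-page processing left out:

    use strict;
    use warnings;

    my $file  = 'statements.txt';          # placeholder filename
    my $total = -s $file;                  # total bytes in the file
    open my $in, '<', $file or die "Can't open $file: $!";

    my $done = 0;
    while (defined(my $chunk = <$in>)) {
        $done += length $chunk;            # bytes read so far
        printf "\r%5.1f%% complete", 100 * $done / $total;
        # ... page-marker hunting and name/addr extraction go here ...
    }
    close $in;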
My boggle is this:
As the programme progresses through the file it slows down, until by the time it reaches the very end it appears (compared with when it started the file) to be really crawling. The script does complete its task correctly and nothing is freezing, but I'm still interested in why it slows down.
I use read() to pull the file in byte by byte (looking for the top-of-page hex marker, since the pages are all different lengths).
Nothing is stored in memory except the temporary name/addr pulled from the master file, which is then written out.
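To make the shape of the loop concrete, here is a cut-down sketch of the byte-by-byte approach; the form feed (\x0C) standing in for the top-of-page marker, "line 5" standing in for the name/addr line, and the filenames are all placeholders, not the real values:

    use strict;
    use warnings;

    my $marker = "\x0C";                        # assumed top-of-page marker
    open my $in,  '<:raw', 'statements.txt'  or die "Can't open input: $!";
    open my $out, '>',     'names_addrs.txt' or die "Can't open output: $!";

    my ($byte, $line_buf, $line_no) = ('', '', 0);
    while (read($in, $byte, 1)) {               # pull one byte at a time
        if ($byte eq $marker) {                 # top of a new page
            $line_no  = 0;                      # reset the per-page line count
            $line_buf = '';
            next;
        }
        $line_buf .= $byte;
        if ($byte eq "\n") {                    # end of a line on this page
            $line_no++;
            print {$out} $line_buf if $line_no == 5;   # "5" is a placeholder
            $line_buf = '';
        }
    }
    close $_ for $in, $out;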
Any ideas? -- Alexander Widdlemouse undid his bellybutton and his bum dropped off --