in reply to profiling help
If your files are all of a reasonable size (up to a few tens of megabytes, say), then rather than reading each file line by line, you could probably improve performance by slurping the whole file into memory and matching against it in one go (see $/ in perlvar).
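A minimal sketch of the slurp approach; the filename and pattern here are made up for illustration, and the demo file is created inline so the snippet runs as-is:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $file = 'sample.txt';

# Create a small demo file so the example is self-contained.
open my $out, '>', $file or die "Can't write $file: $!";
print $out "foo\nbar match\nbaz\nanother match here\n";
close $out;

# Localizing $/ to undef puts the filehandle in "slurp mode":
# a single read returns the entire file as one scalar.
my $content = do {
    local $/;
    open my $in, '<', $file or die "Can't read $file: $!";
    <$in>;
};

# One global match over the whole string instead of a per-line loop.
my $count = () = $content =~ /match/g;
print "$count matches\n";    # prints "2 matches"

unlink $file;
```

The win comes from replacing many small reads and per-line regex invocations with one big read and one `/g` match over the whole buffer.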
If some or all of your files are too big to read into memory in one go, then you could try the sliding-buffer technique I posted at Re: speed up one-line "sort|uniq -c" perl code. It reads the file in large chunks of a specifiable size and takes care to start each new search from a newline, so matches that would otherwise be split across reads are not missed. This gives a fairly substantial performance benefit over reading line by line, at the cost of a little extra complexity.
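A rough sketch of that sliding-buffer idea (not the code from the linked node; the chunk size, filename, and pattern are invented): read in large fixed-size chunks, process only up to the last complete line in the buffer, and carry the trailing partial line over to the next read so nothing is lost at chunk boundaries:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $file  = 'big.txt';
my $CHUNK = 64 * 1024;    # read size; tune to taste

# Build a small demo file so the example is self-contained.
open my $out, '>', $file or die "Can't write $file: $!";
print $out "line with match\n" x 3, "no hit\n" x 2;
close $out;

open my $fh, '<', $file or die "Can't open $file: $!";

my $count  = 0;
my $buffer = '';
# The 4-arg read appends new data at the given offset, i.e. after
# whatever partial line was carried over from the previous chunk.
while ( read( $fh, $buffer, $CHUNK, length $buffer ) ) {
    # Everything after the last newline may be a partial line.
    my $last_nl = rindex $buffer, "\n";
    next if $last_nl < 0;    # no complete line yet; read more

    my $complete = substr $buffer, 0, $last_nl + 1;
    $buffer = substr $buffer, $last_nl + 1;    # carry the partial line

    $count += () = $complete =~ /match/g;
}
# The final line may have no trailing newline, so sweep up the remainder.
$count += () = $buffer =~ /match/g;
close $fh;

print "$count matches\n";    # prints "3 matches"
unlink $file;
```

Memory use stays bounded by the chunk size plus one line, which is what makes this workable on files too large to slurp.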