in reply to Re^4: Command Line Hash to print things in common between two files
in thread Command Line Hash to print things in common between two files

First, the assumption that a "more compact" Perl program will execute faster simply isn't true. In fact, the opposite is often the case! The algorithm used will typically make far, far more difference.

Also, execution speed aside, Perl compiles at lightning speed, so whether you have a "one liner" or 1,000 lines usually makes no real difference at all.

graff's cmpcol utility looks to be pretty flexible. If that critter does all you need, then I think we're done.

I see that the content of the OP (original post) has been restored. A few general comments on it related to performance:

1) In general, reading a line at a time and processing it on the spot works out better than slurping all the data into an array that is then processed line by line anyway. You start out by essentially making a verbatim, memory-resident copy of both files. If they are big files, that alone will take noticeable time: besides the file I/O, allocating the array and copying the data into it both take time. A sketch of line-at-a-time processing follows.
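
To make that concrete, here is a minimal sketch of line-at-a-time processing (the file name and the "work" are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # read one line at a time and do the work right then, instead of
    # copying the whole file into an array first
    open my $fh, '<', 'file1.txt' or die "file1.txt: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        # ... do something significant with $line right here ...
    }
    close $fh;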

2) For every line of the first file, you cycle through all of the lines of the second file. This can be very expensive in execution time! It is a lines(file1) * lines(file2) situation: with 10,000 lines in each file, that is 100,000,000 trips through the inner loop.
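
Sketched in Perl, that nested pattern looks roughly like this (a hypothetical reduction, not the actual code from the OP):

    # every line of file1 forces a full scan of file2
    for my $l1 (@file1_lines) {
        for my $l2 (@file2_lines) {
            # ... split, compare, etc., on every single pass ...
        }
    }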

3) Going back to re-process the same data again and again is "expensive". Perl's split() is a nice critter, but it is not a "cheap" function, and every trip through the file2 data (of possibly many trips) pays that cost at every line. One fix, sketched below, is to pay the split() cost exactly once per line.
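
If repeated passes over file2 really are unavoidable, the parsed fields can be cached up front. A sketch, assuming tab-separated fields (the file name and delimiter are placeholders):

    my @file2_rows;
    open my $f2, '<', 'file2.txt' or die "file2.txt: $!";
    while ( my $line = <$f2> ) {
        chomp $line;
        push @file2_rows, [ split /\t/, $line ];   # split once, up front
    }
    close $f2;
    # later passes index into @file2_rows instead of calling split again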

4) To make your code faster, the general idea is to "do something very significant" with each line as it is read and, to the extent possible, never process the same data twice.

5) I would be thinking of building a data structure for the first file, an AoA or a hash table (not a simple "verbatim" copy of that file), which contains the "search or join term" and the complete line (for output). Then cycle through file2 just once, deciding at each line whether it matches some term in the file1 structure. That way file2 is processed only one time; see the sketch below.
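
A minimal sketch of that idea, assuming the join term is the first whitespace-separated column (the file names are placeholders):

    use strict;
    use warnings;

    # build a hash keyed on the join term, keeping the full line for output
    my %seen_in_file1;
    open my $f1, '<', 'file1.txt' or die "file1.txt: $!";
    while ( my $line = <$f1> ) {
        chomp $line;
        my ($term) = split ' ', $line, 2;          # split once per line
        push @{ $seen_in_file1{$term} }, $line;
    }
    close $f1;

    # file2 is read exactly once; each line costs one O(1) hash lookup
    open my $f2, '<', 'file2.txt' or die "file2.txt: $!";
    while ( my $line = <$f2> ) {
        chomp $line;
        my ($term) = split ' ', $line, 2;
        print "$line\n" if exists $seen_in_file1{$term};
    }
    close $f2;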

6) One technique that is sometimes overlooked is that with Perl you can build dynamic regexes on the fly! You could build a single regex that describes all of the terms in file1, compile it once with the "quote regex" qr// syntax, and run it against each sequential line of file2, e.g. my @terms_found = $line =~ /$big_regex/g;
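
A sketch of that, assuming the file1 terms have already been collected into @terms (the \b word boundaries are an assumption about the data):

    use strict;
    use warnings;

    my @terms = qw(apple banana cherry);                # stand-in terms
    my $alternation = join '|', map { quotemeta } @terms;
    my $rx = qr/\b(?:$alternation)\b/;                  # compile once

    open my $f2, '<', 'file2.txt' or die "file2.txt: $!";
    while ( my $line = <$f2> ) {
        my @terms_found = $line =~ /($rx)/g;   # all terms matched in this line
        print $line if @terms_found;
    }
    close $f2;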

7) Another technique that is sometimes overlooked is using the system sort to simplify the processing. If these really are big files, this idea may work out as well; see the sketch below.
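
A sketch of the system-sort idea, comparing whole lines (a real job would more likely sort on the join column instead):

    use strict;
    use warnings;

    # let the system sort order both files, then find common lines in
    # one parallel pass over the two sorted streams (a classic merge join)
    open my $fh1, '-|', 'sort', 'file1.txt' or die "sort file1: $!";
    open my $fh2, '-|', 'sort', 'file2.txt' or die "sort file2: $!";

    my ( $x, $y ) = ( scalar <$fh1>, scalar <$fh2> );
    while ( defined $x and defined $y ) {
        if    ( $x lt $y ) { $x = <$fh1> }
        elsif ( $x gt $y ) { $y = <$fh2> }
        else               { print $x; $x = <$fh1>; $y = <$fh2> }
    }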

The possibilities for fine-tuning the performance are not endless, but they are many. Some samples of your files, along with typical sizes, would be very helpful. I think that if you implement through step (5) above, the performance increase will be noticeable. Again, split() is great, but it is not a "cheap" function in terms of CPU. Just putting file2 into a better structure so that split() isn't run so often would, by itself, increase performance.
