http://qs1969.pair.com?node_id=1072715

Laurent_R has asked for the wisdom of the Perl Monks concerning the following question:

Dear fellow monks,

I have to process two large CSV files (about 6 GB each). I have no problem getting the individual fields. In each record, one of the fields is a long string of alphanumerical characters, typically 150 to 300 of them (the number of characters is always a multiple of 5). I need to split that string into groups of five characters, in order to then reorganize it. As far as I can tell, the split function is not appropriate for that, so I used a regular expression, something like this:

my @sub_fields = $field16 =~ /\w{5}/g;   # grab successive groups of 5 word characters
But the process is very slow, and profiling the program shows that the line above takes far too much time. I intend to run some benchmarks to try to find something faster. Maybe a faster regex can be found (for example, /.{5}/g might be better). I will also try to use the substr function in a loop to see if that goes faster, but I would be very happy if some nice monk could come up with some other idea likely to bring higher performance.
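For what it is worth, this is roughly the substr loop I have in mind (just a sketch, not benchmarked yet, and $field16 is the same field as above):

my @sub_fields;
for (my $pos = 0; $pos < length $field16; $pos += 5) {
    push @sub_fields, substr($field16, $pos, 5);   # copy the next 5-character chunk
}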

Another idea I had was to use the unpack function, but I do not use it often and I am not sure how to use it to produce an array from variable-length strings. Presumably, the template should be something like "A5A5A5...". Is there any way of saying something like "A5" repeated as many times as possible (until the end of the string)? Or do I have to use a different template for each possible string length?
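If the template mini-language supports some kind of repeating group, I imagine the call would look like this (a guess on my part, not verified):

my @sub_fields = unpack '(A5)*', $field16;   # '(A5)*' meaning: 5-character fields, repeated until the end of the string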

I was also thinking about the possibility of opening a filehandle on a reference to the string and using the read function in a loop to populate an array of five-character chunks, but I doubt that opening a filehandle for each record of my input will really improve performance.
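For reference, this is the kind of code I had in mind for that approach (sketch only, untested):

open my $fh, '<', \$field16 or die "Cannot open in-memory filehandle: $!";
my @sub_fields;
while (read $fh, my $chunk, 5) {   # read 5 characters at a time
    push @sub_fields, $chunk;
}
close $fh;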

Does anyone out there have another, better idea for improving performance, so that I could include it in my benchmark?
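In case it clarifies what I mean by the benchmark, here is a sketch of the comparison I plan to run with the core Benchmark module (the test string is just a placeholder, and the unpack template is the one I am asking about above):

use strict;
use warnings;
use Benchmark qw(cmpthese);

my $field16 = join '', map { sprintf '%05d', $_ } 1 .. 50;   # 250-character test string

cmpthese(-5, {
    regex_w   => sub { my @f = $field16 =~ /\w{5}/g },
    regex_dot => sub { my @f = $field16 =~ /.{5}/g },
    unpack    => sub { my @f = unpack '(A5)*', $field16 },
    substr    => sub {
        my @f;
        push @f, substr($field16, $_ * 5, 5) for 0 .. length($field16) / 5 - 1;
    },
});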

Thank you for your help.

Update: I of course meant to say A5 for the unpack template, not A4 as I originally typed by mistake. Thanks to those who pointed out this typo. I corrected it above to be more consistent with the text.