Craig720 has asked for the wisdom of the Perl Monks concerning the following question:
I read each text file into a scalar variable. These files contain several instances of delimited sections, and I loop through the scalar variable, searching each instance of the delimited section for a keyword.
I use a global search, if ($string =~ m/regex/gix), to test whether a regex matches the string (scalar variable). If I get a hit, I process that section.
My problem is that the text files are getting even larger. Not consistently, but often enough that I must have a procedure for handling them. If a file is too bloody large, I can no longer simply read it into a scalar variable.
I have to replace my wonderful global search with... well, I don't know what yet. Any ideas?
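One common way to avoid slurping the whole file (a sketch of a standard technique, not the poster's code) is to set Perl's input record separator, $/, to the section delimiter. Each read from the filehandle then returns exactly one delimited section, so only one section is in memory at a time, and the existing per-section keyword regex still applies unchanged. The delimiter string and keyword below are hypothetical stand-ins; the demo reads from an in-memory string, but opening a real file path works identically.

```perl
use strict;
use warnings;

my $delimiter = "---END---\n";   # hypothetical section delimiter
my $keyword   = qr/needle/i;     # hypothetical keyword pattern

# Demo data standing in for a large file on disk:
my $data = "first section\n---END---\n"
         . "has the needle here\n---END---\n"
         . "last section\n---END---\n";
open my $fh, '<', \$data or die $!;   # for a real file: open my $fh, '<', $path or die $!;

my @hits;
{
    local $/ = $delimiter;            # read one section per <$fh>, not the whole file
    while ( my $section = <$fh> ) {
        chomp $section;               # chomp strips $/, i.e. the trailing delimiter
        push @hits, $section if $section =~ $keyword;
    }
}
close $fh;

print scalar(@hits), " matching section(s)\n";   # prints "1 matching section(s)"
```

Because $/ is changed with local inside a block, the normal line-at-a-time behavior is restored as soon as the block exits, which keeps the change from leaking into unrelated reads.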
Replies are listed 'Best First'.
- Re: Processing LARGE text files by CountOrlok (Friar) on Mar 07, 2006 at 18:03 UTC
  - by Craig720 (Initiate) on Mar 07, 2006 at 20:39 UTC
- Re: Processing LARGE text files by zentara (Cardinal) on Mar 07, 2006 at 18:16 UTC
  - by Craig720 (Initiate) on Mar 07, 2006 at 19:58 UTC
  - by zentara (Cardinal) on Mar 07, 2006 at 20:59 UTC
  - by Craig720 (Initiate) on Mar 08, 2006 at 14:40 UTC
- Re: Processing LARGE text files by QM (Parson) on Mar 07, 2006 at 18:20 UTC
  - by Craig720 (Initiate) on Mar 07, 2006 at 19:46 UTC
  - by thedoe (Monk) on Mar 07, 2006 at 21:19 UTC
  - by Craig720 (Initiate) on Mar 08, 2006 at 14:55 UTC