cedance has asked for the wisdom of the Perl Monks concerning the following question:
I have quite a large file, approx. 4 GB. I am running on a cluster, so memory is not an issue, so I went ahead and buffered the whole file into a variable. I hope this is much better than reading line by line?
My task is this: starting from the second line of the file ($i = 1), I read every 4th line from then on ($i = 1, 5, 9, 13, etc.), check for some patterns, and do some operations depending on whether the pattern was present or not, replacing strings, etc.
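For what it's worth, a minimal sketch of that selection logic, streaming the file instead of slurping all 4 GB (the names `PATTERN`, `REPLACEMENT`, and `big_file.txt` are placeholders for whatever the real checks and file are):

```perl
use strict;
use warnings;

# Read line by line, acting only on line 2 and every 4th line after
# it (lines 2, 6, 10, 14, ...); all other lines pass through untouched.
sub process_fh {
    my ($in, $out) = @_;
    my $n = 0;
    while (my $line = <$in>) {
        $n++;
        if (($n - 2) % 4 == 0) {              # lines 2, 6, 10, 14, ...
            $line =~ s/PATTERN/REPLACEMENT/g; # placeholder substitution
        }
        print {$out} $line;
    }
}

# Typical use on the real file:
# open my $in, '<', 'big_file.txt' or die "open: $!";
# process_fh($in, \*STDOUT);
```

Streaming this way means memory use stays constant regardless of file size, and the OS read-ahead makes sequential reads fast even for a 4 GB file.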
Now, since these operations are mostly independent (that is, the pattern check on line 2, $i = 1, can be done independently of the one on line 6, $i = 5, and so on), is it possible to create something like threads, or to do multiple checks at the same time? If it is possible, then reading the data in small chunks and assigning each chunk to a thread would seem a good idea; the number of threads would of course depend on the total memory available.
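The chunk-per-worker idea could be sketched with `fork` rather than threads; this only shows the partitioning, and `work()` is a placeholder for the real per-line operation. Note that getting results back to the parent would need some form of IPC (pipes, temp files), which is omitted here:

```perl
use strict;
use warnings;

sub work { my ($line) = @_; return uc $line }   # placeholder operation

# Split an array of lines into $nworkers contiguous chunks and fork
# one child per chunk; each child processes its slice independently.
sub run_workers {
    my ($lines, $nworkers) = @_;
    my $per = int((@$lines + $nworkers - 1) / $nworkers);  # ceiling division
    my @pids;
    for my $w (0 .. $nworkers - 1) {
        my $lo = $w * $per;
        last if $lo > $#$lines;                 # no lines left for this worker
        my $hi = $lo + $per - 1;
        $hi = $#$lines if $hi > $#$lines;
        my $pid = fork();
        die "fork: $!" unless defined $pid;
        if ($pid == 0) {                        # child: work on its slice only
            work($lines->[$_]) for $lo .. $hi;
            exit 0;
        }
        push @pids, $pid;
    }
    waitpid($_, 0) for @pids;                   # parent waits for all children
    return scalar @pids;
}
```

Whether this actually speeds things up depends on whether the job is CPU-bound (pattern matching on many lines) or I/O-bound (reading 4 GB from disk); in the I/O-bound case, parallel workers contend for the same disk and may gain little.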
I hope my question is clear; if not, please point it out and I'll try to clarify. It's just that I have the resources, and I wonder whether this couldn't be run in something other than a totally sequential manner.
Thank you!
Replies are listed 'Best First'.
Re: buffering from a large file
by BrowserUk (Patriarch) on Mar 17, 2011 at 11:00 UTC

Re: buffering from a large file
by moritz (Cardinal) on Mar 17, 2011 at 11:01 UTC

Re: buffering from a large file
by chrestomanci (Priest) on Mar 17, 2011 at 11:34 UTC
  by cedance (Novice) on Mar 17, 2011 at 11:45 UTC
  by chrestomanci (Priest) on Mar 17, 2011 at 16:31 UTC
  by cedance (Novice) on Mar 17, 2011 at 23:18 UTC
  by chrestomanci (Priest) on Mar 18, 2011 at 09:55 UTC

Re: buffering from a large file
by JavaFan (Canon) on Mar 17, 2011 at 11:38 UTC

Re: buffering from a large file
by ikegami (Patriarch) on Mar 17, 2011 at 13:48 UTC
  by cedance (Novice) on Mar 17, 2011 at 23:16 UTC