iangibson has asked for the wisdom of the Perl Monks concerning the following question:
Hi Monks,
I want to process several huge text files (millions of lines each, with over 1,000 columns). For each file, I want to copy the first nine columns into each of four separate new files; then, starting from the tenth column, I want to copy each column as a new column into one of the four new files, chosen by looking up the column header in a separate ID file.
So I don't want to process the files line by line, but rather column by column. I imagined that a problem like this would be a fairly common task, but I have been searching for an appropriate method in vain. I've also looked at Tie::Handle::CSV and Text::CSV, but these modules seem to process a file only line-wise, not column-wise, which given the size of my files seems inefficient and needlessly complex (once a column's header is read, that is all the information needed to decide where the entire column should be copied).
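To make the task concrete, here is a rough sketch of the split I have in mind, written line by line only for lack of a better idea. The tab-separated layout, the ID-file format of one `header<TAB>group-number` pair per line, and the file names (ids.txt, huge.txt, group1.txt .. group4.txt) are just placeholder assumptions, not my real data layout:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder assumptions: tab-separated input, an ID file with one
# "header<TAB>group-number" pair (group 1..4) per line, and output
# files named group1.txt .. group4.txt.

my %group_of;                                  # column header => group (1..4)
open my $ids, '<', 'ids.txt' or die "ids.txt: $!";
while (<$ids>) {
    chomp;
    my ($header, $group) = split /\t/;
    $group_of{$header} = $group;
}
close $ids;

my @out;                                       # output filehandles, indexed by group
open $out[$_], '>', "group$_.txt" or die "group$_.txt: $!" for 1 .. 4;

open my $in, '<', 'huge.txt' or die "huge.txt: $!";

# Read the header line once and work out which column indices belong
# to which output file; after this the headers are never needed again.
chomp( my $header_line = <$in> );
my @headers  = split /\t/, $header_line;
my @cols_for = map { [] } 0 .. 4;              # group => [ column indices ]
for my $i ( 9 .. $#headers ) {
    my $g = $group_of{ $headers[$i] } or next; # ignore headers not in the ID file
    push @{ $cols_for[$g] }, $i;
}

# Write each output file's header, then stream the data: every line is
# split once, and columns 0..8 plus that file's own columns are written.
for my $g ( 1 .. 4 ) {
    print { $out[$g] } join( "\t", @headers[ 0 .. 8, @{ $cols_for[$g] } ] ), "\n";
}
while (<$in>) {
    chomp;
    my @f = split /\t/;
    for my $g ( 1 .. 4 ) {
        print { $out[$g] } join( "\t", @f[ 0 .. 8, @{ $cols_for[$g] } ] ), "\n";
    }
}
close $_ for $in, @out[ 1 .. 4 ];
```

This still reads every line and splits it in full, which is exactly what I was hoping to avoid; hence my question about a genuinely column-wise approach.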
Any pointers as to where to look to get started, or working examples of a similar nature would be most appreciated.
Replies are listed 'Best First'.
Re: Processing files column-wise
by BrowserUk (Patriarch) on Feb 22, 2012 at 22:49 UTC

Re: Processing files column-wise
by aaron_baugher (Curate) on Feb 22, 2012 at 21:16 UTC

Re: Processing files column-wise
by JavaFan (Canon) on Feb 22, 2012 at 21:35 UTC