in reply to Vertical split (ala cut -d:) of a file

As friedo said, Text::xSV will most definitely do the job, but if the separator character never appears inside the data itself (i.e. it is used only as a separator), you're unlikely to beat a plain line-by-line split for speed. That shouldn't be very time-consuming even on a large file, unless you're running on ancient hardware.
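For illustration, here is a minimal line-by-line sketch; the file name, the colon separator, and the choice to keep columns as array refs are just placeholder assumptions:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Collect each colon-separated column of data.txt into its own array.
    my @columns;
    open my $fh, '<', 'data.txt' or die "Can't open data.txt: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my @fields = split /:/, $line;
        # Autovivification creates each column's array ref on first use.
        push @{ $columns[$_] }, $fields[$_] for 0 .. $#fields;
    }
    close $fh;
    # $columns[0] now holds the first column, $columns[1] the second, etc.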

-b


Re^2: Vertical split (ala cut -d:) of a file
by qhayaal (Beadle) on Jan 30, 2005 at 09:41 UTC
    Thanks friedo and bgreenlee for the replies. I will check out Text::xSV. I was hoping there would be something analogous to split itself, like:
    (@col_1, @col_2, @col_3) = quasi_split /:/, $foo;
    *sigh* The problem is I have *lots* of files, each with hundreds of lines. If there is no such feature, maybe I can risk asking for a feature request? I don't know the internals, so I am not sure whether I would be blasted for such a request...
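    (A function like that is easy enough to hand-roll, though. Here is a hypothetical sketch, where quasi_split returns one array ref per column and @lines is assumed to hold the file's lines:)

        # Hypothetical quasi_split: returns one array ref per column.
        sub quasi_split {
            my ($re, @lines) = @_;
            my @cols;
            for my $line (@lines) {
                chomp $line;
                my @fields = split $re, $line;
                push @{ $cols[$_] }, $fields[$_] for 0 .. $#fields;
            }
            return @cols;
        }

        my ($col_1, $col_2, $col_3) = quasi_split(qr/:/, @lines);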

      Text::xSV will almost certainly be slower than a straight split. “Hundreds of lines” alone doesn't come close to stressing Perl, though. How many files do you have? Do you need only specific fields? Adjacent or disparate ones?

      Makeshifts last the longest.

        A few thousand files (typically 3000-10000) need to be processed, each containing 100-1000 lines. I need only specific fields, and they are disparate too. :(
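        For disparate fields, a split plus a list slice keeps each pass cheap. A sketch, with the field indices 0, 3, and 7 as placeholder assumptions:

            use strict;
            use warnings;

            # Print only fields 0, 3, and 7 (placeholders) from each line
            # of the files named on the command line.
            while (my $line = <>) {
                chomp $line;
                print join("\t", (split /:/, $line)[0, 3, 7]), "\n";
            }

        Since <> reads every file in @ARGV in turn, you can run it over many files at once (perl extract.pl *.txt) and pay Perl's startup cost only once.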