Update: I got a negative vote for questioning the purpose of the OP's question. My bad. However, if the question were modified to reflect a more realistic scenario, there could be some interesting answers applicable in the real world.

I guess this is a Golf question?
I was working on a solution until I got to step 3 and realized that this requirement makes no sense.
It is so silly that I can't imagine a real-world use for it!

A more real-world version might be having to write a huge amount of data to a multi-CD data set where no single file can span a CD boundary. I saw this sort of requirement in the olden floppy-disk days. When loading, say, 20 diskettes in a data set, you want to keep going even if diskette #5 has a fatal error. At the end, say 19 diskettes loaded and one didn't. Now we can regenerate that one diskette and patch the system with that single diskette in a straightforward way.
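Just to illustrate the kind of logic that scenario calls for, here is a rough, untested sketch of first-fit packing of whole files onto fixed-size volumes so that no file spans a volume boundary. The pack_volumes() name and the 700 MB figure are my own inventions for the example, not anything from the OP's spec:

#!/usr/bin/perl
use strict;
use warnings;

# Pack whole files onto fixed-size volumes, never splitting a file
# across a volume boundary.  First-fit, largest files first.
sub pack_volumes {
    my ($volume_bytes, @files) = @_;
    my @volumes;    # each element: { free => bytes_left, files => [...] }
    for my $file (sort { -s $b <=> -s $a } @files) {
        my $size = -s $file;
        die "$file ($size bytes) is larger than a volume\n"
            if $size > $volume_bytes;
        my $placed = 0;
        for my $vol (@volumes) {
            if ($vol->{free} >= $size) {
                push @{ $vol->{files} }, $file;
                $vol->{free} -= $size;
                $placed = 1;
                last;
            }
        }
        push @volumes, { free => $volume_bytes - $size, files => [$file] }
            unless $placed;
    }
    return @volumes;
}

# Example: 700 MB CD-sized volumes
my @volumes = pack_volumes( 700 * 1024 * 1024, glob("*.csv") );
printf "volume %d: %s\n", $_ + 1, join( ", ", @{ $volumes[$_]{files} } )
    for 0 .. $#volumes;

First-fit isn't optimal packing, but it is good enough for the "load 20 diskettes, patch the one that failed" situation.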

I have no idea of a practical use for this requirement.
Here is where I stopped:
BTW, I see no need to parse the .csv files. My gosh, I am unaware of any CSV file that is \n field-delimited - what that would even mean boggles my mind, and it would result in a very confused display in a text editor.

#!/usr/bin/perl
use strict;
use warnings;
# node: 11104399

use constant DIR     => ".";    # set these as needed...
use constant N_FILES => 3;

# step 1: List all .csv files in a directory by increasing order of file size
my @files_by_size = sort { -s $a <=> -s $b } glob( DIR . "/*.csv" );
print join( "\n", @files_by_size ), "\n";

# step 2: Drop the first line of each file and concat the rest into a single output file
open my $out, '>', "BigFile" or die "...blah.. $!";
foreach my $infile (@files_by_size) {
    open my $in, '<', $infile or die "unable to open $infile $!";
    <$in>;                      # throw away first line of file
    print $out $_ while <$in>;
    # $in is closed automatically when it goes out of scope
}
close $out;

# step 3: Split the above output file into "n" smaller files
#         without breaking up the lines in the input files
#
# This is a strange requirement! A VERY strange requirement!
# The obvious thing to do is to make n-1 "big" files and throw
# whatever is left over into the nth file (which will be very small).
#
# The tricky part here is to make sure that at least one line
# winds up in the nth file. Now that I think about it...
#
# geez, if n==3 and outsize == total bytes:
#   create file1 and write lines to it until >= total_bytes/2 are written,
#   write one line to file 2,
#   write the rest of the lines to file 3.

my $big_files = N_FILES - 1;

# stopped at this point because this sounds like a Golf situation
# with a very contrived scenario, and I'm not good at that.

# step 4: this is easy
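For completeness, here is a rough, untested sketch of one way step 3 could go. It deviates from the "n-1 big files" idea in my comments above and instead aims for roughly equal-sized pieces: it slurps BigFile, splits on line boundaries, and forces a split when only enough lines remain to give each later piece at least one line. The BigFile.partN output names are just made up for the example:

#!/usr/bin/perl
use strict;
use warnings;

use constant N_FILES => 3;

# step 3 (one possible completion): split BigFile into N_FILES pieces
# on line boundaries, aiming for roughly equal sizes, and making sure
# every piece gets at least one line.
my $total  = -s "BigFile" or die "BigFile is missing or empty\n";
my $target = $total / N_FILES;    # rough byte target per piece

open my $in, '<', "BigFile" or die "cannot read BigFile: $!";
my @lines = <$in>;
close $in;

die "need at least one line per output file\n" if @lines < N_FILES;

my $piece   = 1;
my $written = 0;
open my $out, '>', "BigFile.part$piece" or die "cannot write part $piece: $!";
while (@lines) {
    my $line = shift @lines;
    print $out $line;
    $written += length $line;
    # start the next piece once the byte target is reached, or when only
    # enough lines remain to give every later piece exactly one line
    if (   $piece < N_FILES
        && @lines >= N_FILES - $piece
        && ( $written >= $target || @lines == N_FILES - $piece ) )
    {
        close $out;
        $piece++;
        $written = 0;
        open $out, '>', "BigFile.part$piece" or die "cannot write part $piece: $!";
    }
}
close $out;

Slurping the whole file is fine for a sketch; for a genuinely huge BigFile you would read line by line and track the byte count as you go instead.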

In reply to Re: Complex file manipulation challenge by Marshall
in thread Complex file manipulation challenge by jdporter
