Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,

I notice that I run into the same problem over and over again.

Repeating the same code isn't good practice IMHO, but how could it be eliminated in this example?

my @arr = '';
open(F, "x.txt") or die $!;
while (<F>) {
    chomp;
    my $Field = (split /,/)[1];
    push(@arr, $Field);
}
close(F);
print "Listing data for:" . (join '/', @arr) . "\n";
open(F, "x.txt") or die $!;
while (<F>) {
    chomp;
    print join(' ', split /,/) . "\n";
}
close(F);

What's going on there:

I thought about storing the entire data file in an array and using foreach, to avoid opening/closing the file twice ... but would it be more efficient? What if I were using a database such as MySQL? Can I eliminate the duplicated query and return the data once for both the title and the data table?
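For the database case, one common pattern is to run the query exactly once, keep the rows in a Perl data structure, and derive both the title line and the table from that in-memory copy. A hedged sketch using DBI (the DSN, credentials, and the table/column names are placeholders, not anything from the original post):

    use strict;
    use warnings;
    use DBI;

    # Connect once; DSN and credentials are made up for illustration.
    my $dbh = DBI->connect( 'dbi:mysql:database=test', 'user', 'pass',
                            { RaiseError => 1 } );

    # Run the query a single time and hold every row in memory.
    my $rows = $dbh->selectall_arrayref('SELECT id, name, value FROM items');

    # First use: the title line, built from column 2 of every row.
    print "Listing data for:", join( '/', map { $_->[1] } @$rows ), "\n";

    # Second use: the data table, from the same in-memory rows.
    print join( ' ', @$_ ), "\n" for @$rows;

    $dbh->disconnect;

Whether this is a win depends on the result-set size: for a few thousand rows the second query costs more than the memory; for millions of rows you may prefer streaming each pass instead.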

Your help/advice is greatly appreciated.

Replies are listed 'Best First'.
Re: How to eliminate duplicated code?
by graff (Chancellor) on May 22, 2009 at 00:52 UTC
    If the file is small, it makes no difference what you do. Don't worry about it. And it does seem like the file must be pretty small, since you are taking field 2 from every line, and concatenating all these fields on your first line of output.

    That said, given that the file is so small, it does make sense to read it only once and hold it all in memory for both uses:

    my @arr;   # no need to initialize (and in any case, "= ''" makes little sense)
    my @hdr;
    open( F, "x.txt" ) or die $!;
    while (<F>) {
        chomp;
        my @flds = split /,/;
        push @arr, join( ' ', @flds );
        push @hdr, $flds[1];
    }
    close F;
    print join( '/', @hdr ), "\n";
    print join( "\n", @arr ), "\n";
    (not tested, but should be close, if the OP code is really what you intended)
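    For what it's worth, the same single-pass idea with a lexical filehandle and three-argument open. To keep the sketch self-contained, the sample below reads from an in-memory string filehandle standing in for x.txt (the CSV contents are invented for the demo):

        use strict;
        use warnings;

        # Sample data standing in for x.txt, opened as a string
        # filehandle so the sketch runs without an external file.
        my $data = "a,one,x\nb,two,y\n";
        open my $fh, '<', \$data or die $!;

        my ( @hdr, @lines );
        while (<$fh>) {
            chomp;
            my @flds = split /,/;
            push @hdr,   $flds[1];           # field 2, for the title line
            push @lines, join ' ', @flds;    # whole row, for the table
        }
        close $fh;

        print "Listing data for:", join( '/', @hdr ), "\n";
        print "$_\n" for @lines;

    One pass over the data, both outputs, and lexical filehandles close themselves when they go out of scope.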