in reply to Re: More efficient munging if infile very large
in thread More efficient munging if infile very large

I would implement the algorithm in nearly the same way, but I would write it as a filter-like command which you can easily use in pipe sequences. So read input with the magic <> operator (this allows reading from STDIN or from infiles) and print to STDOUT (which can easily be redirected into an outfile from the command line). This has the additional advantage of shortening the program a bit:

#!/usr/bin/perl -w
use strict;

my $uclc = shift or Usage();   # fixed small bug
Usage() unless ($uclc eq 'lc' or $uclc eq 'uc');

# to speed up the loop (no string compare on every line)
my $lc = $uclc eq 'lc';

while (<>) {
    print $lc ? lc : uc;
}

sub Usage {
    die <<EO_USE;
Usage: uclc.pl (lc|uc) [infile]
       reads from STDIN if no infile given
EO_USE
}

Update: Oops, thanks tilly, fixed: while instead of foreach.

-- Hofmator

Re (tilly) 3: More efficient munging if infile very large
by tilly (Archbishop) on Jul 26, 2001 at 16:16 UTC
    Change the foreach to a while.

    The difference is that foreach evaluates the file read in list context, so the entire file has to be held in memory. The while loop reads in scalar context, printing one line at a time as it reads, and so will work much better on large files.

    Otherwise I like your changes.