I'm trying to understand why both of these variations take the same time to execute/complete...
The first piece of code runs a shell pipeline (cut | sort | uniq) and writes the output into a file. The contents of the file are then read into an array using Perl.
qx{cut -d"," -f17 $file | sort | uniq > $tmpfile}; open my $fh1, "<:encoding(utf-8)","$tmpfile" or die "$tmpfile: $!"; while (<$fh1>) { chomp; push @names, split (/\n/); } $fh1->close;
The second piece of code is much easier to write and feels more direct (i.e., pass the output of the shell pipeline straight into an array, job done!). However, it takes the same amount of time as the previous piece (time measured in seconds). I'd have thought that writing to a file and reading it back would have slowed me down, but it didn't. I'm currently running over a fairly smallish set of data ($file = approx. 100 MB, and $tmpfile = 50 KB). Is there a bias towards either of these approaches (driven by performance) should the dataset get significantly larger?
my @names = qx{cut -d"," -f17 $file | sort | uniq};
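For reference, this is roughly how I'm comparing the two variations side by side; it's just a sketch using the core Benchmark module, and the $file/$tmpfile paths below are placeholders rather than my real data:

    use strict;
    use warnings;
    use Benchmark qw(timethese);

    # Placeholder paths -- substitute the real CSV and temp file locations.
    my $file    = 'data.csv';
    my $tmpfile = '/tmp/field17.txt';

    timethese( 10, {
        # Variation 1: shell pipeline writes to a temp file, Perl reads it back.
        tmpfile_then_read => sub {
            my @names;
            qx{cut -d"," -f17 $file | sort | uniq > $tmpfile};
            open my $fh, '<:encoding(utf-8)', $tmpfile or die "$tmpfile: $!";
            while (<$fh>) {
                chomp;
                push @names, $_;
            }
            close $fh;
        },
        # Variation 2: capture the pipeline's output directly into an array.
        direct_capture => sub {
            my @names = qx{cut -d"," -f17 $file | sort | uniq};
            chomp @names;
        },
    } );

Both subs end up with the same @names either way; the timings I see from runs like this are what prompted the question.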