in reply to search/grep perl/*nix
I'd have thought that writing to a file and reading it back would have slowed me down, but it didn't!
There's only a difference in the filehandle types involved. In the first case, the shell opens and closes $tmpfile; in the second, it opens and closes a pipe attached to the Perl-side pipe filehandle that qx creates anyway. So it's no surprise there is no measurable difference, especially if you are working on an SSD instead of an old washing-machine type of spinning disk (and modern drives may hold the entire file in the controller cache, so perl can read it back before it has even hit the platters physically).
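As a minimal sketch (the pipeline and file names here are placeholders, not taken from the original post), both variants boil down to reading the same bytes through an ordinary filehandle, once attached to a file and once to a pipe:

# variant 1: let the shell write to a temp file, then read it back
my $tmpfile = "/tmp/fields.$$";
system(qq{cut -d, -f17 sample.csv | sort | uniq > $tmpfile}) == 0
    or die "pipeline failed: $?";
open my $fh, '<', $tmpfile or die "open $tmpfile: $!";
my @from_file = <$fh>;
close $fh;

# variant 2: read straight from a pipe filehandle (what qx// sets up internally)
open my $pipe, '-|', 'cut -d, -f17 sample.csv | sort | uniq' or die "pipe: $!";
my @from_pipe = <$pipe>;
close $pipe;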
It would be more interesting to benchmark the shell chain against a pure perl solution, and in that case perl loses. Why? Because allocating the necessary data structures in perl means some overhead, whereas cut, sort and uniq deal only with char arrays[1] and are seasoned tools, optimized for their specific tasks.
Here's a file of ~132MB, one million records, created with
$ perl -E 'say join",",map{int rand 1000000} 1..20 for 1..1000000' > s +ample.csv
and a quick shot at timing:
$ time cut -d"," -f 17 sample.csv | sort | uniq > out real 0m4.391s user 0m4.788s sys 0m0.060s $ time perl -F, -E '$s{$F[16]}++ }{ say for sort keys %s' sample.csv > + out real 0m6.716s user 0m6.668s sys 0m0.048s
This could make a difference with huge files. I haven't looked at the memory footprint, which might be another factor when deciding for or against a (dogmatic) "pure perl solution".
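If someone wants to check that footprint, a quick Linux-only sketch (reading VmRSS/VmHWM from /proc; not something I measured here) could be bolted onto the pure perl version:

use strict;
use warnings;

my %s;
while (<>) {
    $s{ (split /,/)[16] }++;
}

# Linux-specific: report current and peak resident set size of this process
if (open my $status, '<', "/proc/$$/status") {
    while (<$status>) {
        print STDERR $_ if /^Vm(?:RSS|HWM):/;
    }
    close $status;
}
print "$_\n" for sort keys %s;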
The bias is always qw(laziness impatience hubris) in an order that fits best.
[1] afaik those utilities are UTF-8 agnostic