in reply to Quickest way of reading in large files?
#!/usr/bin/perl
use strict;
use Benchmark;

sub line_by_line {
    open (IN, "/usr/share/dict/words");
    while (<IN>) {
        if ($_ =~ /hello world/) {
            # do whatever
        }
    }
    close IN;
}

sub block {
    open (IN, "/usr/share/dict/words");
    my @lines = <IN>;
    close (IN);
    foreach (@lines) {
        if ($_ =~ /hello world/) {
            # do whatever
        }
    }
}

timethese(100, {
    'line'  => \&line_by_line,
    'block' => \&block,
});

Which yields:
Benchmark: timing 100 iterations of block, line...
     block: 32 wallclock secs (32.52 usr + 0.36 sys = 32.88 CPU) @ 3.04/s (n=100)
      line: 19 wallclock secs (17.92 usr + 0.16 sys = 18.08 CPU) @ 5.53/s (n=100)
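So reading line by line beats slurping the whole file into an array here. For completeness, a third variant reads the file in fixed-size chunks with read(). This is a sketch of my own, not from the original post: the sub name, the 64 KB buffer size, and the match-counting are my assumptions, and a pattern that spans a chunk boundary would be missed.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: scan a file in fixed-size chunks instead of line by line.
# NOTE: a match straddling two chunks is not found by this approach.
sub chunked {
    my ($file, $size) = @_;          # $size is an arbitrary buffer size
    open my $in, '<', $file or die "open $file: $!";
    my $hits = 0;
    while (read($in, my $buf, $size)) {
        $hits++ while $buf =~ /hello world/g;   # count matches in this chunk
    }
    close $in;
    return $hits;
}
```

For a pure "does the pattern occur anywhere" test this avoids per-line overhead, at the cost of the boundary caveat above.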
/\/\averick
perl -l -e "eval pack('h*','072796e6470272f2c5f2c5166756279636b672');"
Updated: Fixed the regexp to be the same in both subs; used the question's code verbatim. Thanks, lemming.