File::Slurp says in its description: Efficient Reading / Writing of Complete Files.
I found that it is less efficient than simply opening / closing a file the normal way:
use Benchmark qw(:all);
use File::Slurp;

my $file1 = 'file1.txt';
my $file2 = 'file2.txt';
my $count = 1000;

cmpthese( $count, {
    'File1' => \&file1,
    'File2' => \&file2,
});

sub file1 {
    my @array = ( 1 .. 100 );
    foreach ( @array ) {
        open ( FILE1, ">>$file1" );
        print FILE1 "$_\n";
        close FILE1;
    }
}

sub file2 {
    my @array = ( 1 .. 100 );
    foreach ( @array ) {
        append_file( $file2, "$_\n" );
    }
}
        Rate File2 File1
File2 3.24/s    --   -8%
File1 3.51/s    8%    --
I noticed this while writing a log file with File::Slurp; it seemed to take longer than when I didn't use the module.

Re: File::Slurp Not As Efficient As OPEN / CLOSE
by kyle (Abbot) on Oct 27, 2008 at 16:27 UTC

    As I recall, 8% is around the margin of error for Benchmark. (See also No More Meaningless Benchmarks!)

    Also, testing the performance of this kind of operation can be heavily influenced by outside factors. Writing files is something the OS does. Are the files fragmented? Does it have to seek a long way from some file that some other program is accessing? How fast is the disk? How much of the operation is cached?

    If one of them is working better for you than the other, then go that route. I can't argue with that. That said, I don't put much faith in this particular test to show what it's meant to show.

    On my machine, I get more dramatic results:

              Rate File2 File1
    File2    233/s    --  -64%
    File1    645/s  177%    --

    I wouldn't be surprised if some other monk comes along and shows results in the other direction.

Re: File::Slurp Not As Efficient As OPEN / CLOSE
by chromatic (Archbishop) on Oct 27, 2008 at 17:14 UTC

    You're doing IO. You're not going to get sane benchmark results until you have control of the size and contents of disk caches, when flushing occurs, when syncing occurs, and the position of write heads, not to mention other IO-related processes on your machine.
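
    To illustrate the point, here is one possible mitigation (a sketch only, not chromatic's code; it assumes $count, file1() and file2() from the root post's script): time the two subs separately instead of letting cmpthese() interleave them against a shared, warm cache, with a sync and a pause in between.

        use Benchmark qw(timethese);

        # Drop-in replacement for the cmpthese() call in the root post's
        # script; $count, file1() and file2() come from there.
        system('sync');       # ask the OS to flush dirty pages (POSIX systems only)
        sleep 2;              # let the disk settle before timing starts
        my $r1 = timethese( $count, { 'File1' => \&file1 } );

        system('sync');
        sleep 2;
        my $r2 = timethese( $count, { 'File2' => \&file2 } );

    This doesn't control write-head position or other processes doing IO, but it removes at least one source of interaction between the two measurements.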

Re: File::Slurp Not As Efficient As OPEN / CLOSE
by perrin (Chancellor) on Oct 27, 2008 at 16:22 UTC
    There's not much it can do to improve such a small write. Try writing (and reading) 30K or so at a time and you should see an improvement.
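
    A rough sketch of that suggestion (the file names and the 30K payload are my assumptions, not perrin's code): write a single ~30K chunk per call in both variants and compare again.

        use Benchmark qw(cmpthese);
        use File::Slurp qw(write_file);

        my $chunk = 'x' x 30_000;    # a single ~30K payload

        cmpthese( 1000, {
            'slurp_30k' => sub {
                write_file( 'slurp_big.txt', $chunk );
            },
            'plain_30k' => sub {
                open my $fh, '>', 'plain_big.txt' or die "open: $!";
                print {$fh} $chunk;
                close $fh or die "close: $!";
            },
        });
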
Re: File::Slurp Not As Efficient As OPEN / CLOSE
by JavaFan (Canon) on Oct 27, 2008 at 16:47 UTC
    Instead of worrying about the small difference between the two methods, I'd worry about the tiny throughput you have. Writing the numbers 1 to 100, each on their own line, takes 292 bytes. You do that 3.51 times a second, which means you're writing about 1 KB/sec.
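
    The arithmetic checks out; a throwaway snippet (mine, not JavaFan's) that reproduces it:

        # 9 one-digit numbers * 2 chars + 90 two-digit * 3 + "100\n" * 4 = 292 bytes
        my $bytes = length join '', map "$_\n", 1 .. 100;
        printf "%d bytes per pass, ~%.0f bytes/sec at 3.51 passes/sec\n",
               $bytes, $bytes * 3.51;    # 292 bytes, ~1025 bytes/sec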

    I'd focus on finding out why you can write only 1 KB/sec instead of worrying about the 8% difference in speed. For the record, on my box 'File1' is much faster than 'File2', but my 'File2' is more than 25 times faster than your 'File1':

              Rate File2 File1
    File2   93.0/s    --  -68%
    File1    294/s  216%    --
    BTW, if I repeat the benchmark, the variation in results is more than the 8% difference you're measuring.
Re: File::Slurp Not As Efficient As OPEN / CLOSE
by zentara (Cardinal) on Oct 27, 2008 at 16:35 UTC
    I wonder if using File::Slurp's binmode option would speed it up? Also of interest is slurping styles.

    I always use this when I want speed:

    #!/usr/bin/perl
    open( FH, "< slurp1" );
    read( FH, $buf, -s FH );
    close FH;
    print "$buf\n";
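
    On the binmode question above: if I'm reading the File::Slurp docs right, read_file() takes a binmode option, so a raw slurp of the same file would look something like this (file name carried over from the snippet above):

        use File::Slurp qw(read_file);

        # Slurp the whole file with no PerlIO layer translation, comparable
        # to the read( FH, $buf, -s FH ) idiom above.
        my $buf = read_file( 'slurp1', binmode => ':raw' );
        print "$buf\n";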

    I'm not really a human, but I play one on earth.
    Remember How Lucky You Are
Re: File::Slurp Not As Efficient As OPEN / CLOSE
by jeffa (Bishop) on Oct 27, 2008 at 18:45 UTC

    The bottom line for me is that one should have the foresight to know how to wrap such calls so that, when the time comes, they can opt to optimize the performance of said call without having to administer shotgun maintenance. That is -- how efficient will it be for you to make the change in the code?

    Consider this -- you can use File::Slurp in the early stages of your development and get on with "real work." If you use modules with the foresight that their internals will change, you should not have a hard time replacing the internals with a more efficient algorithm, as long as you don't couple method calls too tightly.
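
    One way to picture that decoupling (just a sketch; the package and sub names are made up):

        package My::AppLog;
        use strict;
        use warnings;
        use File::Slurp qw(append_file);

        # All logging goes through this one sub. If File::Slurp ever turns
        # out to be the bottleneck, swap its body for plain open/print/close
        # and no caller has to change.
        sub append_line {
            my ( $file, $line ) = @_;
            append_file( $file, "$line\n" );
        }

        1;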

    jeffa

    L-LL-L--L-LL-L--L-LL-L--
    -R--R-RR-R--R-RR-R--R-RR
    B--B--B--B--B--B--B--B--
    H---H---H---H---H---H---
    (the triplet paradiddle with high-hat)
    
Re: File::Slurp Not As Efficient As OPEN / CLOSE
by salva (Canon) on Oct 27, 2008 at 17:23 UTC
    File::Slurp says in its description: Efficient Reading / Writing of Complete Files

    Well, that affirmation could be just some propaganda from the module author.

    Anyway, by making lots of very small writes you are benchmarking an extreme, non-representative case. Looking at the module's source code, it seems that it introduces some overhead in order to support all its features, but that overhead would probably become insignificant when your files reach a more usual length.

      Well, that affirmation could be just some propaganda from the module author.

      <ot>
      Funnily enough, I believe that if you check his past posts there, you'll find that Uri, who's a clpmisc regular, is actually often "accused" of advertising his own module all the time (and of being rude to n00bz, but that's a clpmisc thing as a whole)...
      </ot>

      --
      If you can't understand the incipit, then please check the IPB Campaign.
      Howdy!

      The description is cleverly vague on how that efficiency is to be measured.

      It could just as well mean programmer efficiency as runtime efficiency.

      yours,
      Michael
Re: File::Slurp Not As Efficient As OPEN / CLOSE
by ikegami (Patriarch) on Oct 27, 2008 at 18:24 UTC

    File::Slurp says in its description: Efficient Reading / Writing of Complete Files.

    And based on what you posted, it's true. It's barely slower than open/print/close with no error checking.
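
    For reference, here is what the plain-Perl sub looks like once that missing error checking is added (a sketch against the root post's $file1, not ikegami's code):

        sub file1_checked {
            my @array = ( 1 .. 100 );
            foreach (@array) {
                open my $fh, '>>', $file1 or die "Can't open $file1: $!";
                print {$fh} "$_\n"        or die "Can't write to $file1: $!";
                close $fh                 or die "Can't close $file1: $!";
            }
        }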

Re: File::Slurp Not As Efficient As OPEN / CLOSE
by ysth (Canon) on Oct 28, 2008 at 19:20 UTC