in reply to Re: Re: Re: Substituting Newline Characters
in thread Substituting Newline Characters

I too am confused about what the point might be, especially since your function does more than the CGI.pm one ... but do correct me if I am wrong here -- I was under the assumption that memory and speed were so cheap these days that using a mere 5000 lines is not really that bad after all. Back when I was a Comp Sci undergrad, a peer who majored in Industrial Engineering explained to me that there was no need to worry about optimization, since hardware was making leaps and bounds. I, of course, scoffed at that, and I still believe that somebody had damn well better keep the optimization torch burning, because hardware does have its limits ... but the truth is that we are only talking about a few extra seconds at best by using CGI.pm. I just ran Devel::Profile on two scripts, one that imported CGI.pm's escapeHTML and one that used yours. Here are the results:

Mileage will vary, but unless I am missing something, that's a whopping extra .05 of a second. But at this point, I would use your subroutine because ... well, there it is now, isn't it? What was that saying ... yes, don't look a gift horse in the mouth. That, or don't complain when a saint posts code that works, is fast, and is free. ;)
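For anyone who wants to reproduce the comparison, here is a minimal sketch using Benchmark.pm rather than Devel::Profile; my_escape below is a made-up stand-in for the posted subroutine, escaping only the four critical characters.

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);
    use CGI qw(escapeHTML);

    # made-up stand-in for the posted sub: escape only & < > "
    sub my_escape {
        my $s = shift;
        $s =~ s/&/&amp;/g;    # ampersand first, so we don't re-escape
        $s =~ s/</&lt;/g;
        $s =~ s/>/&gt;/g;
        $s =~ s/"/&quot;/g;
        return $s;
    }

    my $text = q{<a href="foo">bar & baz</a>} x 100;

    # run each for at least 2 CPU seconds and print a comparison table
    cmpthese( -2, {
        'CGI.pm'      => sub { escapeHTML($text) },
        'hand-rolled' => sub { my_escape($text) },
    } );

cmpthese prints a rate table, so the relative speeds show up directly instead of raw wall-clock numbers.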

jeffa

L-LL-L--L-LL-L--L-LL-L--
-R--R-RR-R--R-RR-R--R-RR
B--B--B--B--B--B--B--B--
H---H---H---H---H---H---
(the triplet paradiddle with high-hat)

Re: 4Re: Substituting Newline Characters
by tachyon (Chancellor) on Mar 16, 2004 at 03:09 UTC

    We do stuff dealing with billions of records, so those little bits add up. Anyway, it benches 4 times faster, as you might expect; it is a tribute to p5p that CGI.pm is only 4 times slower. I have actually defactored some code recently, for example. We have a merge and an unmerge function with very similar code, so I added a flag and a couple of if clauses so that the unmerge was just another call to merge with the flag set. You know, the usual stuff. But those two extra ifs every loop added 30% to the runtime of both functions. So I had less code, although it was more complex, but it killed the runtime.

    In my version of the real world, my fixed costs are servers and bandwidth. The more efficient I can make my code in terms of memory use and throughput, the more clients we can shoehorn onto a single server, which directly hits the bottom line. Compact functions are also easier to unit test, which helps stability. As always, YMMV. Whatever works for you is all you need.

    cheers

    tachyon

      Cases like your merge/unmerge are really what a macro system is for. Failing that (which Perl does, up until Perl 6), you can use either eval (in simple cases) or a templating engine (in not so simple cases) to generate code for similar things from a common source and still do it Once And Only Once, as in the sketch below.
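      Here is a rough sketch of the eval route, assuming a merge-style loop like the one discussed below; the sub bodies and names are invented for illustration. The point is that the merge/unmerge difference is resolved once, at generation time, so neither generated sub tests a flag inside the loop:

      use strict;
      use warnings;

      # generate merge() and unmerge() from one template,
      # baking the operator in at generation time
      for my $spec ( [ merge => '-' ], [ unmerge => '+' ] ) {
          my ( $name, $op ) = @$spec;
          eval qq{
              sub $name {
                  my (\$data) = \@_;
                  my \$count = 0;
                  for my \$rec ( \@\$data ) {
                      # no per-iteration flag test: '$op' was
                      # chosen when this sub was generated
                      \$count = \$count $op \$rec;
                  }
                  return \$count;
              }
          };
          die $@ if $@;
      }

      Both merge() and unmerge() end up as ordinary named subs, so callers see no difference, and the shared logic still lives in exactly one place.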

      Makeshifts last the longest.

        I was surprised by the performance hit. In essence, the only change was the two if/else cases added:

        sub merge {
            my ( $data, $unmerge ) = @_;
            while ( $some_condition ) {
                # big loop, no embedded loops
                $count = $unmerge ? $count1 + $count2 : $count1 - $count2;
            }
        }

        As you say, a preprocessor of some description could optimise the flag away, depending on the call. eval might well have allowed perl to optimise the code, getting rid of the repeated if checks that will either always be true or always be false. But by the time I had added an AUTOLOAD, the code would have been a lot less transparent and physically almost as long. I had not thought about using eval to delay compilation and effectively allow you to stub the functions and get them to compile more efficiently. I may give it a benchmark when I have a chance.
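        If I do get around to it, the benchmark might look something like this untested sketch; the loop bodies and data are invented, but the shape matches the flagged-versus-specialized comparison above:

        use strict;
        use warnings;
        use Benchmark qw(cmpthese);

        my @data = ( 1 .. 10_000 );

        # flag tested on every iteration, as in the defactored version
        sub merge_flagged {
            my ( $data, $unmerge ) = @_;
            my $count = 0;
            for my $rec ( @$data ) {
                $count = $unmerge ? $count + $rec : $count - $rec;
            }
            return $count;
        }

        # flag resolved away, as in the original separate functions
        sub merge_plain {
            my ( $data ) = @_;
            my $count = 0;
            for my $rec ( @$data ) {
                $count = $count - $rec;
            }
            return $count;
        }

        cmpthese( -2, {
            flagged     => sub { merge_flagged( \@data, 0 ) },
            specialized => sub { merge_plain( \@data ) },
        } );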

        cheers

        tachyon