I have seen how people write their code without thinking much about how Perl will process it. Sometimes I don't think about it either ;-P, but here are some good tips that I want to share.

One interesting thing is how if/elsif/else conditions are evaluated. Take a look at this code:

if ( &testA || &testB ) { print "IF\n" ; }

sub testA { print "AAA\n" ; return( 1 ) ; }
sub testB { print "BBB\n" ; return( 1 ) ; }
You can see that only the first sub (testA) is executed, because || short-circuits (try changing || to &&). This shows that in if/elsif conditions it is good to put the cheapest tests first. For example, this:

if ( -s $file || $file =~ /\.new$/ || $file eq 'foo' ) { ; }

was not written in the best way! The -s should come last, because then we can often avoid the query to the disk, and the eq should come first, since it is cheaper to run than a regexp.
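To see the effect of test ordering, here is a small Benchmark sketch. The filename 'foo' is a made-up example, chosen deliberately so that the cheap eq test succeeds and can short-circuit the condition:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Made-up filename, chosen so the cheap eq test succeeds.
my $file = 'foo';

cmpthese(-1, {
    # -s first: every call pays for a stat() before the eq gets a chance
    stat_first => sub { my $r = ( -s $file || $file =~ /\.new$/ || $file eq 'foo' ) },
    # eq first: the string compare succeeds at once and skips the other tests
    eq_first   => sub { my $r = ( $file eq 'foo' || $file =~ /\.new$/ || -s $file ) },
});
```

On success cases like this one, putting the cheap test first wins; when every test fails, all of them run regardless of order, a point discussed in the replies below.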

We also need to know how our script is executed. First Perl reads and parses your file, then it optimizes the code and builds the bytecode (the op tree), and then the bytecode is executed.

The optimization stage is very interesting. At this point constant expressions are folded: 1024*8 becomes 8192 at compile time, so writing 1024*8 or 8192 inside a loop that runs 1000 times makes no difference to the speed of the script.
Dead blocks, like if (0) {}, are cut out of the code entirely. A good way to see what Perl does with your code is to deparse it:

use B::Deparse;
my $deparse = B::Deparse->new("-s");
my $body = $deparse->coderef2text(\&func);
print "$body\n" ;

sub func {
  if (0) { print "null\n" ; }
  print "aaa \" $var bbb\n" ;
  return( 'foo' ) ;
}

Another thing is variables. Always prefer them in this order: SCALAR, ARRAY and then HASH; you will save memory and CPU. Also think about how you work with them, for example:

$var = "foo" ;
$var = "$var,bar" ;

This is wrong, because you rewrite the whole variable just to append ",bar" at the end! The right way is:

$var = "foo" ;
$var .= ",bar" ;

This just appends the new data at the end. You can do the same thing with other operators:

$var += 10 ;  # for $var = $var + 10 ;
$var -= 10 ;  # for $var = $var - 10 ;
$var /= 10 ;  # ...
$var *= 10 ;  # ...

Now, how to insert data at the beginning? Most people do:

$var = "foo$var" ;

But another way is:

substr($var,0,0) = "foo" ;

This does not rewrite the whole variable! (Benchmark it for your perl version, though; see the replies below.)
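Before relying on the substr() trick, it is worth measuring it yourself. This quick Benchmark sketch compares the two prepend styles; results vary between perl versions, and a benchmark in the replies below actually found substr() slower:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

cmpthese(-1, {
    # rebuild the string with interpolation
    interpolate => sub { my $var = 'String'; $var = "foo$var"; },
    # write into the string via an lvalue substr()
    substr_lv   => sub { my $var = 'String'; substr($var, 0, 0) = 'foo'; },
});
```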

Another thing is "my". Use it! (And use strict too, at least during development.) "my" is very good: first, you avoid conflicts between variables with the same name, and the memory is cleaned up when the variable goes out of scope at the end of its block. But if you have a loop that will run many times, don't write:

while(1) {
  my $var = &foo() ;
  ...
}

if you can instead write:

my $var ;
while(1) {
  $var = &foo() ;
  ...
}
Why? If the my is inside, each iteration declares the variable, writes to it, runs the code, and then cleans the variable up so it can be recreated on the next pass. If it is outside, the variable is created only once and then just written to, which uses less CPU. And don't forget, we generally use a lot of variables inside a loop, and that is exactly where it makes a difference.
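Here is a sketch to measure the my placement yourself. foo() is just a stand-in sub; the difference is usually tiny, and a reply below explains that the lexical's slot is reused anyway, so always benchmark before relying on this:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

sub foo { return 42 }   # stand-in for real per-iteration work

cmpthese(-1, {
    my_inside => sub {
        for (1 .. 1_000) { my $var = foo(); }
    },
    my_outside => sub {
        my $var;
        for (1 .. 1_000) { $var = foo(); }
    },
});
```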

Now, how to pass variables. If you have a big variable, say the contents of a file, and you need to pass it to a sub, pass a reference, or you will use more memory duplicating the variable inside the sub:

$data = 'foo' x (1024*1024) ;
&test(\$data) ;

sub test {
  my ( $dataref ) = @_ ;
  if ( $$dataref =~ /foo/s ) { print "bar\n" ; }
}

But to use references you need to know how to work with them! To take a reference:

$rf_s = \$scalar ;
$rf_a = \@array ;
$rf_h = \%hash ;
To access:
$$rf_s .= 'foo' ;

push(@$rf_a , "BAR") ;
$$rf_a[0] = 'bar' ;

$$rf_h{b} = 'foo' ;
foreach my $Key ( keys %{$rf_h} ) {
  my $Value = $$rf_h{$Key} ;
  print "$Key = $Value\n" ;
}

About files: first, if you are using files like a database, see DBD::CSV and DBD::SQLite; they are very good for db files. But if you want to slurp a whole file, don't use <FOO>, especially for big files:

open (FLHD,"$0") ;
my $fl_data = join('' , <FLHD>) ;

<FOO> will read the file, but it also scans for \n inside it, and that is slow! Use this instead:

open (FLHD,"$0") ;
my $fl_data ;
1 while( sysread(FLHD, $fl_data , 1024*8 , length($fl_data) ) ) ;

Another thing: if you want to process the lines, don't load the whole file into an @array just to process it afterwards! It is better to read and process as you go; this is faster and uses less memory:

open (FLHD,"$0") ;
while (my $line = <FLHD>) {
  my ($x , $y , $z) = split("::" , $line) ;
}

And to finish, a funny way to clone (alias) access to variables:

$foo = '123' ;
$bar = undef ;
*bar = \$foo ;
print "bar: $bar\n" ;
$bar = 102030 ;
print "foo: $foo\n" ;

And you can do this for any type:

# To have $bar , @bar and %bar from *foo:
*bar = \$foo ;
*bar = \@foo ;
*bar = \%foo ;

# Glob (but doesn't work for tied handles!):
*bar = *foo ;

# For packages:
*bar:: = *foo:: ; # now main::bar is linked to main::foo

But always remember: only spend time optimizing your code in the areas that are executed many times, especially inside loops.

And don't make code optimization a jail. Perl is wonderful because we are free to write our code the way we want, with OO or not, in packages or not, with my, strict... And this is very good, because it leaves us free to be creative! Don't fall for the idea that code buried under rules to make it understandable by others is automatically good. Well, don't write trash, but languages like Java (with all due respect) make us spend so much time following the rules that we lose the time and dynamism to be creative and write code with good ideas! And don't forget, the final objective is what matters, not just the theory or the illusion that many people create! Java has a very good idea, portable code and good OO, but if it is very slow, we lose the objective: making a program that works!

We can see a lot of work now to make rules for code, to get companies certified, to follow the herd of buffalo. Well, code rules are good, but we can't cut off creativity! Making rules only so that code can be reused by others, and treating people like machines that can be replaced and will always do things the same way, is not right! Don't buy that idea! I don't do this to the people who work for me, because I don't want it for myself!

The last tip: every situation is different from the others; don't think the same recipe works every time. You always need to see where you are, what you have, and who is with you. When you make the rules of a project, let the programmers do it! Only they know what they know, and what resources they can or like to use, and the group will work very well!

Like a famous song in Bra(s|z)il ("Lulu Santos - Como Uma Onda.mp3"):

ENGLISH:
Nothing that was will ever be
Again the way it once was.
Everything passes,
Everything will always pass.
Life comes in waves
Like a sea,
In an infinite coming and going.
Nothing you see is
Equal to what we saw a second ago,
Everything changes all the time
In the world.
There is no use running away
Or lying to yourself, now,
There is so much life out there,
In here, always,
Like a wave in the sea.

PORTUGUESE:
Nada do que foi será
De novo do jeito que já foi um dia.
Tudo passa,
Tudo sempre passará.
A vida vem em ondas
Como um mar,
Num indo e vindo infinito.
Tudo que se vê não é
Igual ao que a gente viu há um segundo,
Tudo muda o tempo todo
No mundo.
Não adianta fugir
Nem mentir pra si mesmo, agora,
Há tanta vida lá fora,
Aqui dentro, sempre,
Como uma onda no mar.

Enjoy. Send your feedback, and other tips! ;-P

Graciliano M. P.
"The creativity is the expression of the liberty".

Replies are listed 'Best First'.
Re: How Perl Optimize your code & some code TIPS ;-P
by gjb (Vicar) on Jan 24, 2003 at 10:43 UTC

    Just a caveat to your tip on the order of tests in conditional statements.

    While what you say is true for an if that is executed only once, things get a bit more subtle if the conditional statement is part of the body of a loop.

    In that case, the most expensive test is not so obvious to determine: it could be that the cheaper test almost always fails, so the more expensive one is evaluated almost all the time anyway. The cheap test then just slows execution, since it wasn't the determining factor but got executed anyway.

    In such a situation only careful benchmarking will yield the correct order, since that is a balance between how often the tests fail or succeed and how long they take to compute.

    The same holds true for tests in iterations such as the while () and the for ().

    By way of example, consider the following loop:

    foreach my $file (@dir) {
        if ($file eq '.' || $file eq '..' || $file =~ /\.bak$/) {
            ...
        }
    }
    Obviously the first two tests are cheaper than the last one, but on the other hand they will each only succeed once, while the last test may succeed many times. Or, conversely, the last test will be executed very often, regardless of the first two tests.
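To put numbers on this, here is a sketch that benchmarks both orderings against a synthetic listing where almost every file ends in .bak. The data is made up; flip the proportions and the winner can change:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Synthetic listing: two dot entries plus 100 .bak files,
# so the "expensive" regexp succeeds almost every time.
my @dir = ('.', '..', map { "file$_.bak" } 1 .. 100);

cmpthese(-1, {
    cheap_first => sub {
        for my $file (@dir) {
            my $hit = ($file eq '.' || $file eq '..' || $file =~ /\.bak$/);
        }
    },
    regexp_first => sub {
        for my $file (@dir) {
            my $hit = ($file =~ /\.bak$/ || $file eq '.' || $file eq '..');
        }
    },
});
```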

    Just my 2 cents, -gjb-

      An extremely good but often forgotten point. On this topic, it should be pointed out that the results will vary wildly with the input data in nearly every case. To benchmark meaningfully, then, you have to be very careful about the characteristics of your sample set; subtle properties you may not even be aware of can easily cause significant variation.

      Makeshifts last the longest.

Re: How Perl Optimize your code & some code TIPS ;-P
by Aristotle (Chancellor) on Jan 24, 2003 at 10:12 UTC
    If the my is inside, each iteration declares the variable, writes to it, runs the code, and then cleans the variable up so it can be recreated on the next pass. If it is outside, the variable is created only once and then just written to, which uses less CPU.
    Depends — oftentimes I need the variable to be undefined at the start of the loop. In that case, I would have to do
    my $var;
    while( $foo ) {
        # ...
        undef $var;
    }
    Also, I try to scope very tightly for reasons beyond memory reuse — if you declare a variable inside a block, it's obvious that it's meant strictly for use inside that block. That would be
    {
        my $var;
        while( $foo ) {
            # ...
        }
    }

    Both workarounds are kludgy and make the code harder to read — and easy to read code is my #2 priority. Efficiency is only my #3 or #4 priority — unless I really need the speed, and have profiled and benchmarked my code and identified the piece of code in question as the hotspot. (#1 is correctly working code, if you were wondering.)

    This does not of course affect advice such as smart ordering of conditions as that only rarely makes a difference with readability.

    Makeshifts last the longest.

Re: How Perl Optimize your code & some code TIPS ;-P
by IlyaM (Parson) on Jan 24, 2003 at 11:20 UTC
    Now, how to pass variables. If you have a big variable, say the contents of a file, and you need to pass it to a sub, pass a reference, or you will use more memory duplicating the variable inside the sub

    Another approach is using variable aliasing to @_ array in subroutine calls:

    $data = 'foo' x (1024*1024) ;
    test($data) ;

    sub test {
      if ( $_[0] =~ /foo/s ) { print "bar\n" ; }
    }
    This is memory efficient too.

    --
    Ilya Martynov, ilya@iponweb.net
    CTO IPonWEB (UK) Ltd
    Quality Perl Programming and Unix Support UK managed @ offshore prices - http://www.iponweb.net
    Personal website - http://martynov.org

Re: How Perl Optimize your code & some code TIPS ;-P
by Aragorn (Curate) on Jan 24, 2003 at 13:40 UTC
    You give 3 examples of concatenating a string:
    $var = "foo" ; $var .= "bar" ;  # 1
    $var = "foo$var" ;              # 2
    substr($var,0,0) = "foo" ;      # 3
    Of these ways to do it, I find number 2 the most readable: you can see immediately what is going on. But you claim that method 3 is the fastest, because you don't have to "rewrite" $var.

    For fun, I did a little benchmark:

    #!/usr/bin/perl -w

    use Benchmark;

    timethese(1_500_000, {
        Interpolate => q/$var = "String"; $var = "foo$var";/,
        Concatenate => q/$var = "String"; $var = "foo" . $var;/,
        Substr      => q/$var = "String"; substr($var, 0, 0) = 'foo'/,
    });
    On a 1.6GHz P4 machine, I got the following results (Perl 5.8.0):
    Benchmark: timing 1500000 iterations of Concatenate, Interpolate, Substr...
    Concatenate: 1 wallclock secs ( 1.56 usr + 0.00 sys = 1.56 CPU) @ 960000.00/s (n=1500000)
    Interpolate: 1 wallclock secs ( 1.59 usr + 0.00 sys = 1.59 CPU) @ 945812.81/s (n=1500000)
         Substr: 2 wallclock secs ( 2.07 usr + 0.01 sys = 2.08 CPU) @ 721804.51/s (n=1500000)
    Perl 5.6.1 and 5.005 give similar results; the substr trick is consistently the slowest.

    So, optimizations can be very useful, but I doubt that it is in this case. I value readability over performance, within reasonable limits. Most of the time, use of another algorithm can yield far better results than micro-optimizations.

    Arjen

    Update: I made a little mistake in showing the examples. Now, #1 and #2 are IMO equally readable, depending on what you're doing.

      In fact, they're identical. Perl rewrites this:

      my $bar = "hello, $foo";
      into this:
      my $bar = "hello, " . $foo;

        In fact, they're identical. Perl rewrites this:

        my $bar = "hello, $foo";

        into this:

        my $bar = "hello, " . $foo;

        $ perl -MO=Terse -e '$var = "String";  $var = "foo$var"'

        $ perl -MO=Terse -e '$var = "String";  $var = "foo" . $var'

        Strip out the addresses (0x*) and do a diff. Almost the same :-)

        Arjen

Re: How Perl Optimize your code & some code TIPS ;-P
by kodo (Hermit) on Jan 24, 2003 at 08:26 UTC
    Nice post! Nothing new to me, but nice to read and easy to understand...

    But I'm not sure about the:
    Other thing, if you want to process the lines, don't cat all the file inside a @array, to after this process them! Is better to read and process, this is faster and use less memory:

    open (FLHD,"$0") ;
    while (my $line = <FLHD>) {
      my ($x , $y , $z) = split("::" , $line) ;
    }

    part. I'll have to benchmark this again if I find some time. I think I did this before, and my result was that if you have lots of things to modify, an array/hash is better...

    giant
Re: How Perl Optimize your code & some code TIPS ;-P
by autarch (Hermit) on Jan 24, 2003 at 23:05 UTC

    There are a number of problems with this node's suggestions. First of all, as at least one other person has pointed out, code readability is really important, and generally more important than "writing fast code".

    When speed is important, the two most important tools in a developer's arsenal are profiling (Devel::DProf) and benchmarking (Benchmark.pm). Profiling lets you find the biggest bottlenecks and focus on them. Benchmarking tells you if you're making any noticeable difference.

    Simply "writing faster code" is pointless. Some code paths are travelled very infrequently, so optimizing them is silly. The same holds true with optimizing things like failure reporting (at least for most applications).

    Moreover, experience tells me that what ends up being slow is often unpredictable, and also often localized. So again, there's no point in trying to micro-optimize everything. If 50% of your app is spent in 5% of its code, then making every part of your app 10% faster is time wasted if you could double the speed of that 5%.

      Did you read my final notes in the node?

      "But always remember: only spend time optimizing your code in the areas that are executed many times, especially inside loops."

      "And don't make code optimization a jail. Perl is wonderful because we are free to do our code how we want..."

      And yes, Devel::DProf and Benchmark are very good! ;-P

      Graciliano M. P.
      "The creativity is the expression of the liberty".

Re: How Perl Optimize your code & some code TIPS ;-P
by MarkM (Curate) on Jan 25, 2003 at 19:25 UTC

    One point that has not been addressed yet is your suggestion that simple operations should always be performed first:

    if ( -s $file || $file =~ /\.new$/ || $file eq 'foo' ) { ; }

    was not written in the best way! The -s should come last, because then we can often avoid the query to the disk, and the eq should come first, since it is cheaper to run than a regexp.

    If all of the component expressions fail, all component expressions will be executed, therefore, for the purpose of the failure case, the order of the component expressions does not matter.

    The only way of optimizing the above code fragment is to optimize for success. Given an input of 20 file names, what percentage of them will equal 'foo', what percentage of them will end with '.new', and what percentage of them will be empty?

    If 50% of them end with '.new', 0% or 5% of them equal 'foo' (0 or 1 in 20), and 10% of them are empty, then the "$file =~ /\.new$/" expression should be first, "$file eq 'foo'" second, and "-s $file" third. The idea is that 50% of the success cases are handled by the first component of the expression, so the later components do not need to be evaluated. Then, since we expect "-s $file" to take more than twice as long as "$file eq 'foo'" to complete, "$file eq 'foo'" is the next component, leaving "-s $file" for last.

    To prove this to yourself, consider that in the case of a '.new' file, the "$file eq 'foo'" would have to be evaluated each time, failing, in the case that you present. Overall, although "eq" is cheap, it is executed many more times than is necessary.

    If we assume that "-s $file" is 200X slower than "$file =~ /\.new$/", then the point at which we would put "-s $file" before "$file =~ /\.new$/" is the point where we expect the input to contain 200X more empty files than files ending with '.new'.

    My only point being: optimization is more complicated than sorting operations by their cost. The expected data set needs to be analyzed. This is why, for example, compilers can often benefit from real-life profile data when re-compiling. The profiler counts how many times basic blocks are entered, and the compiler uses this data to decide which branch paths should be flattened and which should be moved out of the way.
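That trade-off can be turned into a back-of-the-envelope calculation. The per-test costs and success probabilities below are made-up illustration numbers matching the scenario above, not measurements:

```perl
use strict;
use warnings;

# Made-up relative costs and per-test success probabilities.
my @tests = (
    { name => '$file =~ /\.new$/', cost => 1,   p => 0.50 },
    { name => "\$file eq 'foo'",   cost => 0.5, p => 0.05 },
    { name => '-s $file',          cost => 200, p => 0.10 },
);

# Expected cost of an ||-chain: each test runs only if all earlier ones failed.
sub expected_cost {
    my ($cost, $p_reach) = (0, 1);
    for my $t (@_) {
        $cost    += $p_reach * $t->{cost};
        $p_reach *= 1 - $t->{p};
    }
    return $cost;
}

printf "%-50s expected cost %7.2f\n",
       join(' || ', map { $_->{name} } @$_),
       expected_cost(@$_)
    for [ @tests ], [ reverse @tests ];
```

With these numbers, regexp-first costs about 96 cost units per call versus about 201 for stat-first, even though the regexp is individually more expensive than the eq.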

Re: How Perl Optimize your code & some code TIPS ;-P
by slim (Novice) on Jan 25, 2003 at 17:07 UTC
    It seems to me that  if (condition) { statement }  is sometimes faster than  statement if condition . With perl 5.6.0 I get:
    use Benchmark ;timethese (200, {
        'if_cond_then_statement'=> sub {
            ;my $ris=0
            ;for my $i (0..100000) { if (int ($i/2)>7) { $ris+=1 } }
        }
        ,'statement_if_cond'=> sub {
            ;my $ris=0
            ;for my $i (0..100000) { $ris+=1 if int ($i/7)>0 }
        }
    })

    Benchmark: timing 200 iterations of if_cond_then_statement, statement_if_cond...
    if_cond_then_statement: 61 wallclock secs (58.33 usr + 0.03 sys = 58.36 CPU) @ 3.43/s (n=200)
    statement_if_cond: 60 wallclock secs (59.15 usr + 0.01 sys = 59.16 CPU) @ 3.38/s (n=200)
    Am I missing something?

      For one thing, you have different conditions. In the first you use if (int ($i/2)>7) and in the second you use if int ($i/7)>0. In this case, it won't make too much difference (the latter will execute the conditional code a couple more times) but if you want meaningful results from a benchmark, make sure all implementations under consideration do the same thing.

      On my system, if_cond_then_statement repeatedly beats out statement_if_cond with your original code and with it fixed using either of the if conditions above. The results are closer when both routines use the same if condition.

      Also, it is convention to put the semicolon (;) at the end of the line containing the instruction it terminates, not at the beginning of the next line. It makes for much more readable code.

      --- print map { my ($m)=1<<hex($_)&11?' ':''; $m.=substr('AHJPacehklnorstu',hex($_),1) } split //,'2fde0abe76c36c914586c';
Re: How Perl Optimize your code & some code TIPS ;-P
by ihb (Deacon) on Jan 30, 2003 at 23:58 UTC

    I would like to ++ this node, but I won't. My reason is that I find the node sloppy, it occasionally gives bad advice, and it doesn't follow its own advice.

    Besides the good points made by Aristotle and gjb I have a couple of my own:
    • Documentation references
      This being a very dense tutorial, the perhaps most important missing thing is references to further reading. I would also like to see some sort of reference to documents, nodes, benchmarks, or something else that backs up certain statements. If this was done, the author would've found out what aragorn pointed out.
    • &-style subroutine calls
      The code always has the ampersand in subroutine calls. This is definitely not recommended. The first line of code in the node almost made me stop reading, since it uses &testA without parentheses. But then I remembered that it was about optimizations, and since this form of subroutine call was explicitly made available as an optimization feature, I thought it might be intentional. But I doubt it, since it was done without any comment, and the ampersand is used throughout the whole node.
    • Shows but doesn't explain
      It tries to teach references in five seconds. This is bad, imho. References are something that should be understood before being used, but this thread doesn't explain them at all; it just shows how to use them. This can (and will) encourage people to use techniques they don't understand, and that will undoubtedly lead to unholy code. A simple reference to perlreftut would have been very nice to see here, so people know where to read up on the subject after getting a little demonstration of how and why you might want to use them.
    • foreach my $Key ( keys %{$rf_h} ) {
      Wasn't the point to not have to copy the whole (large) hash? Here the keys are copied anyway. For large hashes, this has a memory impact. A simple test on w2k gave me a memory difference of about 4.5 MB for a hash with 100000 five char long keys. In this case it would be preferable to do
      my $h; while ($h = each %$rf_h) { ... }
      which doesn't have any significant efficiency loss speed-wise (about 3% according to my preliminary benchmark).
    • open (FLHD,"$0") ;
      Where is the check? What are the quotes doing there?
    • my $fl_data ; 1 while( sysread(FLHD, $fl_data , 1024*8 , length($fl_data) ) ) ;
      On first iteration $fl_data will be undef, and length(undef) emits warning. A side-note: perhaps you want to make sure that sysread() failed due to eof and nothing else? This is checked through the return value of sysread().
    • while (my $line = <FLHD>) {
      Wasn't lexicals like this one supposed to be outside the loop, per own suggestion?
    ihb
      while ($h = each %$rf_h) { ... }
      Careful: that will bail out of the loop early if it encounters a empty '' string key. You need to parenthesize the left side to signify list context:
      while (($h) = each %$rf_h) { ... }
      Update: oh wow, ya learn something new every day.

      Makeshifts last the longest.

        Nope. This is yet another particularity in Perl. This construct is given the same cure as while ($line = <FH>) { ... }.
        > perl -MO=Deparse -we'while ($foo = each %bar) { 1 }'
        while (defined($foo = each %bar)) {
            '???';
        }
        Cheers,
        ihb
      while (my $line = <FLHD>) {
      You don't need the my outside, because Perl only does the my once (it is always the same scalar)! You can test this by taking a reference to $line outside the loop, and you will see that the next iteration alters it.

      foreach my $Key ( keys %{$rf_h} ) {
      This loads only the keys, but using ($key, $value) = each %hash is better. (Perhaps the only interesting thing that you wrote in your node!)

      Shows but doesn't explain
      Well, of course, this isn't a tutorial on references! As I told you, it is only a tip! And I think we can understand the basics of references very well from it!

      ...it uses &testA without parentheses...
      Well, perhaps you don't know the difference between &foo and foo(). If you write foo(), the parser only knows foo is a sub because of the (), and passes the contents of (...) as the arguments. If you use &foo, it is known to be a sub, and the arguments will be the current @_, if defined, or the contents of (...). And if you don't want arguments, you don't need the (). But anyway, I was showing how if() evaluates its conditions, in this case the subs, not how to call a sub!

      Well, I didn't like your point of view! Man, be more positive! And in case you don't know yet, the world is not perfect! If you look for perfection in everything, or wait for things to always be your way, you will just get frustration.

      With all due respect, you could have made a good node with other ideas, or better ways to do things, instead of trying to "create" silly errors in my node!

      Graciliano M. P.
      "The creativity is the expression of the liberty".

        Doesn't need to use the my outside, because Perl only do the my 1 time (is always the same scalar reference)!

        Actually, the my() is always done. What happens though is that if old $foo's refcount is zero then it's safe to reuse that slot, so it is reused. But my() is always executed. Afaik, it's the same for

        while(1){ my $var = &foo() ; ... }
        which you do advocate against. That my() has a run-time effect can easily be proven by the following one-liner:

        perl -wle'my $last; while (my $line = <STDIN>) { last if $line =~ /^$/; print \$line; $last = \$line; }'

        It would be insane if my() didn't create a new variable each time it's executed. That is what my() does per the idea; the reuse you have noticed is just an optimization.

        This will load only the Key

        No, first a list is created out of the expression keys %{$rf_h}, which is then passed to foreach. So an unnecessary list is built. $Key isn't a copy of the element in the list; it's an alias for it. And here the my() works the same as described above.

        Well, of course, this isn't a tut for reference! How I told, is only a tip!

        ... and I don't like the way you tipped. You did something between telling people that the feature exists and telling them how to use it. You wrote it so that people can cut-n-paste and use this without understanding it. I would have nothing against it, in fact I'd like it, if there was something like "Another good thing to use when you have large variables is to pass them as references", followed by a couple of scenarios where references would be a great benefit, and then a pointer to perlreftut so people can go and learn this wonderful feature.

        Well, perhaps you don't know the difference of &foo and foo().

        Heh. A couple of my previous posts pointing out exactly this issue: Re: Hash values and constants, Re: Re: Forward-referenceing subs, Re: Hash of Hash of Listed subroutines.

        If you use &foo, this is set as a sub, and the arguments will be the previus @_, if defined, or the content of (...).

        This is an ambiguous statement. There won't be any arguments, so the arguments won't be the previous @_. The @_ seen in the called subroutine is the same, I repeat, the same, as the caller's.

        sub foo { print shift }
        sub bar { &foo; print @_ }   # BAD!
        Here the &foo; call will steal the first element in bar's @_.

        The other issue with & is that it makes perl ignore prototypes.
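A tiny demonstration of the prototype point (count() is a made-up example sub):

```perl
use strict;
use warnings;

# The ($) prototype restricts count() to exactly one scalar argument.
sub count ($) { return scalar @_ }

# count(1, 2) would fail at compile time with "Too many arguments".
# The &-form skips the prototype check, so both arguments get through:
print &count(1, 2), "\n";   # prints 2
```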

        But anyway, I was showing how if() evaluate the conditions, in this case the subs, not how to call a sub!

        Now that's a lame excuse! Don't you think that demonstrative code should be sane? If I want to demonstrate how a keyboard works, I don't type upside down. I use it as it should be used. And I have it connected to my computer, not my toaster.

        And if you don't know yet, the world is not perfect!

        Yeah, you're right. Why do I bother? Let's just give it up! The world is such a bad place anyway... eh...

        Not trying to "create" fool erros in my node!

        And my reason to ""create" fool erros" would be...?

        I liked the initiative, though.

        ihb