In the last couple of days I have been wondering (again) what people write in UNIX shell scripts, how those scripts could be implemented in Perl, and in which cases shell is preferable to Perl.

As I have not written many shell scripts in my life, I tried to look for some examples and bumped into Wicked Cool Shell Scripts. I thought it would be cool (and wicked) to implement them in Perl.

So I started, but while each script is relatively simple, doing all 101 of them is quite a lot of work. Dear Monks, please help me translate them. Pick one of the scripts, implement it in Perl, and post it here.

Let me start with the first script....

Update: strangely, I had not earlier found a book with the title Wicked Cool Perl Scripts, which planetscape pointed out, so I retitled the node. Unfortunately I cannot seem to find the scripts from that book, but that would not be related to my question anyway.


Replies are listed 'Best First'.
Re: Wicked Cool Shell Scripts implemented in Perl
by merlyn (Sage) on Mar 27, 2006 at 12:25 UTC
Re: Wicked Cool Shell Scripts implemented in Perl
by dragonchild (Archbishop) on Mar 27, 2006 at 11:40 UTC
    In general, my shell scripts are those which would take longer for me to write in Perl. I tend to write shell scripts to
    • restart apache doing a complete takedown (requires a sleep)
    • do post-installation permissions changes
    • other similar tasks that don't require any special handling

    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
001-inpath verify if command is in PATH
by szabgab (Priest) on Mar 27, 2006 at 06:56 UTC
    Original: 001-inpath
    #!/usr/bin/perl -w
    use strict;

    # inpath - verify that a specified program is either valid as-is,
    # or can be found in the PATH directory list.

    sub in_path {
        my ($cmd, $path) = @_;
        my $found = 0;
        foreach my $directory (split /:/, $path) {
            return "exe" if -x "$directory/$cmd";
            $found = 1 if -e "$directory/$cmd";
        }
        return $found ? "plain" : "not found";
    }

    sub check_for_command_in_path {
        my ($command) = @_;
        if ($command =~ m{^/}) {
            return 'exe' if -x $command;
            return -e $command ? 'plain' : 'not found';
        }
        return in_path($command, $ENV{PATH});
    }

    die "Usage: $0 command\n" if @ARGV != 1;
    my $ret = check_for_command_in_path($ARGV[0]);
    print "Executable $ARGV[0] found in PATH\n" if $ret eq "exe";
    print "$ARGV[0] found but not executable\n" if $ret eq "plain";
    print "$ARGV[0] not found in PATH\n"        if $ret eq "not found";

      Note: I'm not directing these comments against you (the OP) since you only ported a tool written by somebody else.

The inpath gizmo does basically the same thing as the shell utility which, with a couple of additions of very questionable usefulness:

1. Search in the current directory. Why treat the cwd differently? The cwd is not put in the PATH in most standard shell setups, mostly for security reasons. If you want it in your PATH, damn ADD IT! This is not MS-DOS, people! Searching for an executable in locations other than the directories in PATH defies the semantics of PATH. The shell will search for an unqualified executable ONLY in directories from PATH; again, do I have to say, unlike MS-DOS and its derivatives.
2. Search for non-executable files in PATH. This is moronic, plain and simple. It serves no purpose other than to complicate the interface. Yeah, anybody can come up with any excuse, but the truth is, the cost of feature creep outweighs the benefit, even for so small a program. Especially when the feature is in fact a non-feature.

      The first addition almost surely betrays the author's roots and nostalgia... The second addition, I'm beginning to think he felt a little "creative", going for a little filler, a little embellishment, particularly when the core "algorithm" was already in place and it cost only a few keystrokes...

      I'm writing this because I'm sick of these "creative" people. Is it just me or does anybody else feel the same? This little crappy program is just a symptom of this "creativity" that IMO contributes to much of the sloppiness, lameness and feature creep we find in programs we use everyday. Watch out for an idiot with a drive.

yep, you are right in this case; now I wonder how well I can really use that book to learn what people use shell scripts for.
      total rewrite using File::Find::Rule:
      use File::Find::Rule;

      sub in_path {
          my ($cmd, $path) = @_;
          my @files = File::Find::Rule->file()->name($cmd)->maxdepth(1)
                                      ->in( split /:/, $path )
              or return 'not found';
          return grep( -x $_, @files ) ? 'exe' : 'plain';
      }
ditto on calin's comments ... especially that this function should just return a boolean, in which case it becomes one line:
      return File::Find::Rule->file()->executable
                             ->name($cmd)->maxdepth(1)
                             ->in( split /:/, $path ) ? 1 : undef;
Re: Wicked Cool Shell Scripts implemented in Perl
by Crackers2 (Parson) on Mar 28, 2006 at 01:26 UTC

    To answer the "when is shell preferable" question:

Perl usually resides under /usr, which makes it hard to use for things like boot scripts (one of which will probably be the script that actually mounts /usr).

    Also, it has a size (after doing a quick rpm -i perl) of 30+ MB. In contrast, the network boot image I use at work to recover machines has a total unzipped size of about 16MB, which includes a regular glibc, bash and all the utilities you'd ever need (well, close).

    I'd say those are the two main reasons for me to use shell scripts: unavailability of Perl, and the size of a standard Perl installation.

'Wicked Cool Shell Scripts' is a _don't_buy_
by TomDLux (Vicar) on Mar 28, 2006 at 15:48 UTC

I cannot recommend "Wicked Cool Perl Scripts", and this brief thread tells me "Wicked Cool Shell Scripts" is no better.

The title of the book appealed to me. I was hoping to find something along the lines of Randal Schwartz's columns: a pithy page or two explaining some language features, accompanied by a compact and efficient block of code achieving a useful result. Instead, I found something more along the lines of "Matt's Script Archive".

    Not only are the scripts of dubious value, but the code is mediocre, rather than presenting an ideal to which we should aspire. Some code is actually wrong.

For example, in the first chapter, while discussing File::Find, Steve Oualline returns undef to indicate there are no values to return. It would be absolutely correct if he used a return statement on its own:

    return;

    Instead, he returns undef explicitly:

    return undef;

That's a minor mistake; you can get away with it so long as the subroutine is not called in list context. Unfortunately, that's exactly how the routine is being called. It is expected to return a list of values it has found, the return value being stored in an array. Later, a loop iterates over the contents of the array, performing some final processing and generating output.

The problem with returning undef, rather than merely returning, is that instead of producing an empty list, the code produces an array with one element, undef. So the final loop iterates once and produces an empty line of output, even though no values were found.
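The difference can be demonstrated in a few lines (illustrative subroutine names, not the book's code):

```perl
use strict;
use warnings;

# Two ways a search routine might report "nothing found":
sub find_matches_wrong { return undef }   # in list context: a one-element list, (undef)
sub find_matches_right { return }         # in list context: an empty list

my @bad  = find_matches_wrong();
my @good = find_matches_right();

print scalar(@bad),  "\n";   # 1 - a loop over @bad runs once on undef
print scalar(@good), "\n";   # 0 - a loop over @good does not run at all
```

This is exactly why `perlsub` recommends a bare `return;` when a subroutine may be called in either scalar or list context.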

    Now that's not a mortal sin, and not something I would flame someone for, if I saw it in a minor script from someone who is trying to improve his Perl. However, in a book claiming to have achieved COOLness, even if not the pinnacle of COOLness, I expect correctness, at the very least.

    Other problems include the use of prototypes, the use of C-style loops rather than Perlistic iteration, and more.

Prototypes were a brilliant idea when they came onto the scene, 5 or 6 years ago; unfortunately, they create more problems than they solve. Worse, they are totally irrelevant to object-oriented code. Prototypes are best left on the scroll of Perl history as a noble effort that didn't achieve what it set out to. They certainly do not belong in a book, three or more years after the idea was deprecated, presented as the preferred way to write code.
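A small sketch of the kind of surprise prototypes cause (illustrative name, not from the book):

```perl
use strict;
use warnings;

# The ($$) prototype demands exactly two scalar arguments at
# compile time, silently changing normal call semantics.
sub sum_pair ($$) {
    my ($x, $y) = @_;
    return $x + $y;
}

print sum_pair(2, 3), "\n";   # prints 5

# But passing an array would NOT flatten as Perl programmers expect;
# this would be a compile-time error:
#   my @pair = (2, 3);
#   sum_pair(@pair);   # "Not enough arguments for main::sum_pair"
```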

There are a few situations where C-style loops are useful in Perl. Most of the time, though, they indicate a poor understanding of the Perl mindset. While incrementing an index is reasonably efficient in C (if not quite as efficient as incrementing a pointer), it makes poor use of Perl. Instead of using several operations inefficiently, it is far better to let the language handle the details of iterating over the loop. Not only is it faster, but index variables litter the code: they are part of HOW something is achieved, the implementation, rather than WHAT is achieved. Compare:

    for ( my $i = 0; $i <= $#myarray; $i++ ) {
        # do something with $myarray[$i];
    }

    for my $elem ( @myarray ) {
        # do something with $elem
    }

    If the goal is to do something with the index numbers, the first example is appropriate. But where the goal is to access or manipulate the elements, the second example is far better ... the implementation is closer to the way we think, making it more self-documenting.

    The unfortunate achievement of "Wicked Cool Perl Scripts" is to make me appreciate the few Perl Saints who DO provide an example I aspire to: Randal, Damian, MJD, and the others.

    TomDLux

    --
    TTTATCGGTCGTTATATAGATGTTTGCA

Re: Wicked Cool Shell Scripts implemented in Perl
by planetscape (Chancellor) on Mar 28, 2006 at 05:00 UTC

    Thanks for the retitle. I was quite surprised when I clicked on this node and discovered you were talking about shell scripts... ;-)

    FWIW, code for Wicked Cool Perl Scripts may be found here.

    HTH,

    planetscape