Another kind of information to pass on to those who are looking is a list of useful URL pointers. Normally, as I find them, I just drag and drop a link into a folder. The trouble with that is that while it's directly useful to me, it isn't much good in either email or in a node on PM. So I finally built up enough irritation to overcome coding inertia and wrote the following:
#!/usr/bin/perl
# url.pl -- Windows .url to html table converter.
use strict;
use warnings;
use File::Glob ':glob';

expand();
print "<ul>\n";
foreach (@ARGV) {
    filter($_);
}
print "</ul>\n";

sub filter {
    my $file = shift;
    open FILE, $file or die "Couldn't open: $file $!\n";
    while (<FILE>) {
        if (/BASEURL=/) {
            s/BASEURL=//;
            print "<li><a href=\"$_\">$_</a></li>\n";
            last;
        }
    }
    close(FILE);
}

sub expand {
    my @list;
    foreach (@ARGV) {
        foreach (bsd_glob($_)) {
            push @list, $_;
        }
    }
    @ARGV = @list;
}
Which produces things like:

Update: Removed 'use diagnostics' as suggested.

–hsm

"Never try to teach a pig to sing…it wastes your time and it annoys the pig."

Replies are listed 'Best First'.
Re: Pesky little URL files to HTML
by chromatic (Archbishop) on Dec 17, 2001 at 05:02 UTC
    A couple of nice idioms come to mind. I'd code filter() as:
    sub filter {
        my $file = shift;
        local *FILE;
        open FILE, $file or die "Couldn't open: $file $!\n";
        while (<FILE>) {
            s/BASEURL=// and return qq|<li><a href="$_">$_</a></li>\n|;
        }
    }
    Even if that's too concise for you (localizing a glob automagically closes the filehandle when the glob goes out of scope; some people hate returning from within a loop; some people need to see an explicit if, though it deparses to the same thing), there's no need to do a match and then a separate substitution. s/// has a return value for a reason. :) I much prefer returning values to printing from within a function.
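A minimal sketch of the point about s///'s return value (the sample URL is mine, not from the thread): in boolean context, s/// returns the number of substitutions made, so a single call can both test for the prefix and strip it.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $line = "BASEURL=http://www.perlmonks.org/";
# s/// returns the number of substitutions, so this is a
# combined match-and-strip; the block runs only on success.
if ($line =~ s/^BASEURL=//) {
    print qq|<li><a href="$line">$line</a></li>\n|;
}
```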

    expand() could also be shorter:

    sub expand { map { bsd_glob($_) } @_; }
    You'll need to call it as @ARGV = expand(@ARGV);, or just do it all in place.
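The "do it all in place" variant mentioned above reduces to a single line, a sketch of which might look like:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Glob ':glob';

# One line replaces the whole expand() sub: glob each
# argument and flatten the results back into @ARGV.
@ARGV = map { bsd_glob($_) } @ARGV;

print "$_\n" for @ARGV;
```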

    Of course, we could end up with:

    print "<ul>\n", (map { filter($_) } map { bsd_glob($_) } @ARGV), "</ul>\n";
    It gets a little ridiculous, optimizing beyond that point. Still, you *could* do it that way.
      This (your code, that is) is too cool for snappy comebacks! But to make a feeble attempt, I can only ask: how do you put up with all of those annoying keywords in your search for better optimization? All right, all right, I said it was feeble <g>

      –hsm

      "Never try to teach a pig to sing…it wastes your time and it annoys the pig."
        Does it help to say it comes with experience? Part of it is learning and becoming familiar with language features, and part of it is learning common programming patterns and techniques.

        With Perl, you'll never unlock the full power of the language until you see lists and how to put them together. If you realize that you're just manipulating elements of a list, map jumps out at you. Since print takes a list, it's easy to do the right thing and to decouple the transformation (HTML-izing URIs) from the action of printing them.

        You don't usually see this all in one step, but if you do a couple of little transformations, you can open the door for bigger gains. They start to add up.
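The decoupling described above can be sketched like this (to_link() is a stand-in name, not a function from the thread): the transformation builds a list of HTML fragments, and print, which takes a list, does the output in one step.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Pure transformation: URI in, HTML fragment out. No printing here.
sub to_link {
    my $url = shift;
    return qq|<li><a href="$url">$url</a></li>\n|;
}

my @urls = ('http://www.perl.org/', 'http://www.perlmonks.org/');

# The action of printing is separate: print takes the whole list at once.
print "<ul>\n", (map { to_link($_) } @urls), "</ul>\n";
```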

Re: Pesky little URL files to HTML
by drewbie (Chaplain) on Dec 17, 2001 at 03:49 UTC
    Nice little utility, but why the "use diagnostics" in production code? diagnostics just makes the error messages more verbose and suppresses duplicates, which seems like overkill here. You can also call perl with the -w switch to turn on warnings. It obviously works either way, but I've usually seen it done with the switch.
Re: Pesky little URL files to HTML
by Amoe (Friar) on Dec 23, 2001 at 16:03 UTC
    w00t. This gives me an excuse to post my Perl-IE-favorite-structure-dumper, snappily titled dumpfavs.pl.
#!/usr/bin/perl -w
use strict;
use Data::Dumper;
use File::Find;

my $favorites_path = shift || 'C:/Windows/Favorites'; # assumptions are amazing
my @favorites;

find(\&scan, $favorites_path);
print "done reading...\n";
print "outputting to dumped_favorites.txt...";
open OUT, ">dumped_favorites.txt" or die "couldn't dump favorites: $!";
print OUT Data::Dumper->Dump([\@favorites], ['favorites']);
close OUT;
print "done.\n";

sub scan {
    return if ($_ eq '.' || $_ eq '..' || !-f $File::Find::name);
    my $record = {};
    {
        local $_;
        open FAVORITE, "<$File::Find::name"
            or die "couldn't open $File::Find::name: $!";
        while (<FAVORITE>) {
            if (s/^URL=//) {
                chomp;
                $record->{url} = $_;
            }
        }
        close FAVORITE;
    }
    $record->{name} = substr($_, 0, -4); # can rely on this, as the extension is always .url
    push @favorites, $record;
    print '.';
}
    Fun. I've dumped mine with this many a time.

    --
    my one true love
(jeffa) Re: Pesky little URL files to HTML
by jeffa (Bishop) on Jan 07, 2002 at 22:49 UTC
    Nice!

    I have recently been using a similar construct to turn 'links' in my XML documents (that become an HTML file) into real links. However, using $1 for both the title and the URL was not flexible enough, so (inspired by this site) I decided to use the form [URL|title] and this regex:

    s/\[([^ |]+)\|([^\]]+)\]/<a href="$1">$2<\/a>/g;
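A quick check of the substitution above (the sample text is mine): $1 captures everything up to the pipe that isn't a space or a pipe, and $2 captures the rest up to the closing bracket.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $text = 'See [http://www.perlmonks.org|PerlMonks] for details.';
# $1 = the URL (no spaces or pipes), $2 = the title, globally.
$text =~ s/\[([^ |]+)\|([^\]]+)\]/<a href="$1">$2<\/a>/g;
print "$text\n";
# prints: See <a href="http://www.perlmonks.org">PerlMonks</a> for details.
```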

    jeffa

    L-LL-L--L-LL-L--L-LL-L--
    -R--R-RR-R--R-RR-R--R-RR
    F--F--F--F--F--F--F--F--
    (the triplet paradiddle)