journey has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,

The following code aims to extract all URLs from the HTML files located in the local directory (yes, bad practice... I promise to change that later). After figuring HTML::SimpleLinkExtor might help me, I realized I just didn't know what I was doing. Actually, I do know what it's called: "cargo cult(ing)"... Anyway, I am still labouring over chapter 12 of the Camel book, trying to understand how to use OO stuff. I would be grateful if anyone were kind enough to offer a few pointers on what is causing the script to return only the file names, apparently failing to open any of the files.
use strict;
use warnings;
use HTML::SimpleLinkExtor;

my $file;
my $FileOut = "output.txt";
open FileDOut, ">$FileOut" or die "Can't open output file $FileOut: $!";
print "Summary written in file $FileOut\n";

opendir(DIR, ".");
my @files = readdir(DIR);
closedir(DIR);

foreach $file (@files) {
    if ($file =~ /^(.*)htm/i) {
        print FileDOut "$file\n";
        open my ($fh), "<", $file or die "Couldnot do it: $!";
        my $extor = HTML::SimpleLinkExtor->new();
        $extor->parse_file(<$fh>);
        print FileDOut "$_\n";
    }
}
The messages I get are:
 Unsuccessful open on filename containing newline at C:/Perl/lib/HTML/Parser.pm line 95...
and Use of uninitialized value $_ in concatenation (.) or string at line 26.
Regards & thanks

UPDATE------------
Thanks for the valuable advice; I am progressing but not quite there yet. After modifying the script according to the following posts, I now have to figure out why the successive @links arrays appear to be the right size but are empty. I get as many "Use of uninitialized value $links[3] in join or string" warnings as I have links. But many thanks for clearing up the method invocation and the other advice!

UPDATE-2------------
I did not quite make it, but I finally resorted to HTML::LinkExtractor and recipe 20.3 of the Perl Cookbook.
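For anyone landing here later, a minimal sketch of what the HTML::LinkExtractor approach can look like (untested by the poster; the parse() and links() calls follow that module's documented interface, where each extracted link is a hashref whose keys are the tag's attributes, such as href or src):

```perl
use strict;
use warnings;
use HTML::LinkExtractor;

my $html = '<a href="http://example.com/">example</a> <img src="pic.png">';

my $lx = HTML::LinkExtractor->new();
$lx->parse(\$html);    # parse() takes a reference to the HTML string

# links() returns a reference to an array of hashrefs, one per
# link-carrying tag, with the tag's attributes (href, src, ...) as keys
for my $link (@{ $lx->links }) {
    print $link->{href}, "\n" if exists $link->{href};
    print $link->{src},  "\n" if exists $link->{src};
}
```

Unlike HTML::SimpleLinkExtor's flat list, you get one hashref per tag, so you can tell an anchor's href from an image's src.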

Re: Extracting URL from HTML - but unfamiliar with OO module syntax
by citromatik (Curate) on Jan 30, 2009 at 09:37 UTC

    Here are some comments on your code:

    • You don't need to open the input file before passing it to $extor->parse_file
    • Use the glob function (glob("./*.html")) instead of readdir, so you only pick up the files with an ".html" extension
    • You are not extracting the links (you never call the $extor->links method)
    • You are printing $_ inside the foreach loop, but that variable is never set (because you are using a loop variable, $file)
    • $file is declared outside the foreach loop; you should avoid this and restrict the scope of the variable to the minimum
    • You should get into the habit of closing your filehandles

    Here is a version of your code with all these issues corrected (untested):

    use strict;
    use warnings;
    use HTML::SimpleLinkExtor;

    my $FileOut = "output.txt";
    open FileDOut, ">$FileOut" or die "Can't open output file $FileOut: $!";
    print "Summary written in file $FileOut\n";

    while (my $file = glob("*.html")) {
        print FileDOut "$file\n";
        my $extor = HTML::SimpleLinkExtor->new();
        $extor->parse_file($file);
        my @links = $extor->links();
        print FileDOut "$_\n" for (@links);
        print FileDOut "\n";
    }
    close FileDOut;

    citromatik

Re: Extracting URL from HTML - but unfamiliar with OO module syntax
by libvenus (Sexton) on Jan 30, 2009 at 09:15 UTC

    I don't think you need to pass a filehandle to the parse_file method; passing a filename will do. Then you need to use the links accessor to get all the links in the htm file:

    foreach $file (@files) {
        if ($file =~ /^(.*)htm/i) {
            print FileDOut "$file\n";
            my $extor = HTML::SimpleLinkExtor->new();
            $extor->parse_file($file);
            my @all_links = $extor->links;
            print FileDOut join "\n", @all_links;
        }
    }
Re: Extracting URL from HTML - but unfamiliar with OO module syntax
by gone2015 (Deacon) on Jan 30, 2009 at 10:13 UTC

    As above... And I doubt that parse_file wants the entire file passed to it as an array of lines! Which is what:

    $extor->parse_file(<$fh>);
    will do. Which is probably the source of the first message you were getting:
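    That diagnosis is easy to demonstrate with core Perl alone. A `<$fh>` in list context reads every line of the file, so a sub that expects a single argument (as parse_file does with a filename) sees the first line, newline included, as that argument. A sketch, with a hypothetical one-argument sub standing in for parse_file:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# write a small two-line file to parse
my ($out, $path) = tempfile();
print $out "<html>\n<body>\n";
close $out;

open my $fh, '<', $path or die "can't open $path: $!";

# stand-in for parse_file: it only cares about its first argument
sub takes_one_arg {
    my ($first, @rest) = @_;
    return ($first, scalar @rest);
}

# <$fh> in list context reads ALL lines; the first line, newline and
# all, becomes the "filename" argument -- hence the Parser.pm error
my ($first, $extra) = takes_one_arg(<$fh>);
print "first arg: ", ($first =~ /\n\z/ ? "ends with newline" : "no newline"), "\n";
print "extra args: $extra\n";
```

    Passing the filename itself (or a scalar read of the handle) avoids both the stray newline and the discarded extra lines.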