Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
I am reading files from a directory /../test1. Each text file has a URL (http://...) on its first line. I want to parse the URL and save everything that follows the label "content" into an array, along with the filename.
Input:
a text file with the URL on its first line
http://www.yyy.com/store/application/meraqf?origin=rrr.jsp&event=link(goto)&content=/asp/administrative/catalog/products/Network/benefits.jsp
Output:
<Textfile>
filename: {some.txt}
Keys: {asp,administrative,catalog,products,Network,benefits}
{some.txt,asp,administrative,catalog,products,Network,benefits}
</Textfile>
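The core of the task above is isolating the value of the "content" parameter and splitting it on "/". A minimal sketch of just that step, using the sample URL from the question (www.yyy.com and the path segments are placeholders from the post, not a real site):

```perl
use strict;
use warnings;

# Sample URL from the question (hostname and path are placeholders).
my $url = 'http://www.yyy.com/store/application/meraqf?origin=rrr.jsp'
        . '&event=link(goto)'
        . '&content=/asp/administrative/catalog/products/Network/benefits.jsp';

# Capture everything after "content=" up to the next "&" (or end of string),
# split the path on "/", drop empty segments, and strip the file extension
# from the last segment ("benefits.jsp" -> "benefits").
my @keys;
if ($url =~ /content=([^&\s]+)/) {
    @keys = grep { length } split m{/}, $1;
    $keys[-1] =~ s/\.[^.]+$// if @keys;
}
print join(',', @keys), "\n";   # asp,administrative,catalog,products,Network,benefits
```

The `[^&\s]+` character class stops the capture at the next query-string parameter, which is safer than `.*?` for URLs with more parameters after "content".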
Reading the files from a directory and putting them into an array was not a problem:
my @textfiles;
while (<*>) {                       # glob every entry in the current directory
    push @textfiles, $_ if -f $_;   # keep plain files only
}
# Formatting question
my $label = "content=";
while (defined(my $textline = <IN>)) {
    next unless $textline =~ /\S/;          # ignore blank lines
    next if $textline =~ /^\s*".*"\s*$/;    # ignore message lines
    chomp $textline;
    # Extract the keys into an output line
    if ($textline =~ /$label([^&\s]+)/) {
        my @try = grep { length } split m{/}, $1;
        print "{", join(',', @try), "}\n";
    }
}
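Putting the two pieces together, here is one hedged sketch of the whole task. It assumes each *.txt file in the current directory (e.g. after changing into the poster's /../test1) carries the URL on its first line; the ".txt" suffix and the helper name url_to_keys are assumptions for illustration, not from the original post:

```perl
use strict;
use warnings;

# Turn a URL into the list of "content" path keys, or an empty list
# if the URL has no content= parameter. (Helper name is hypothetical.)
sub url_to_keys {
    my ($url) = @_;
    return unless $url =~ /content=([^&\s]+)/;
    my @keys = grep { length } split m{/}, $1;
    $keys[-1] =~ s/\.[^.]+$// if @keys;   # "benefits.jsp" -> "benefits"
    return @keys;
}

# Assumed layout: every *.txt file has the URL on its first line.
for my $file (glob '*.txt') {
    open my $in, '<', $file or do { warn "can't open $file: $!"; next };
    my $url = <$in>;                      # first line only
    close $in;
    my @keys = url_to_keys($url // '') or next;
    print "<Textfile>\n";
    print "filename: {$file}\n";
    print "Keys: {", join(',', @keys), "}\n";
    print "{", join(',', $file, @keys), "}\n";
    print "</Textfile>\n";
}
```

Factoring the URL parsing into a small sub keeps the file loop readable and makes the extraction easy to test on its own.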
Replies are listed 'Best First'.
(jeffa) Re: Extracting info from URL into an array
by jeffa (Bishop) on May 26, 2003 at 22:35 UTC
by Anonymous Monk on May 27, 2003 at 15:07 UTC