einerwitzen has asked for the wisdom of the Perl Monks concerning the following question:

I have a page holding sections of information separated as such:

<section1> blah blah </section1> <section2> blah blah </section2>
What I need is one Perl script that reads the file and puts each section into an array, e.g. @secs = ("section1", "section2"), then uses a foreach to print each one out as a link/variable pointing to a second script.

The second script would receive which section was clicked and read the same file, deleting the selected section but leaving the rest as is.

I don't know if this is clear or not, but I'm clueless about how to go about it. Thanks for any help!!

Re: remove section by section?
by tachyon (Chancellor) on May 08, 2002 at 05:38 UTC

    Here is some starter code. You essentially have an XML file to parse. You should also read How to RTFM and search the site, as this sort of task is very common.

    # get your data into a string
    my $data = join '', <DATA>;

    # declare a hash variable
    my %secs;

    # use a regex match to get the bits
    while ( $data =~ m|<([^>]+)>([^<]+)</\1>|g ) {
        $secs{$1} = $2;
    }

    print "Regex Method\n";
    print "\nSection: '$_'\n", $secs{$_} for keys %secs;

    # or use XML::Simple to parse it (generally better) ...
    use XML::Simple;
    my $hash = XMLin($data);

    print "\n\nXML Method\n";
    print "\nSection: '$_'\n", $hash->{$_} for keys %$hash;

    __DATA__
    <xml>
    <section1>
    blah1 blah1
    </section1>
    <section2>
    blah2 blah2
    </section2>
    <section3>
    blah3 blah3
    </section3>
    </xml>

    I would suggest a hash as the data structure because it is easy to index into and delete chunks from. XML::Simple will write your hash back to a file for you, as well as parse the file directly. Read the docs.
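
    For example, a minimal sketch of the delete-and-rewrite step might look like the following. The file name sections.xml and the clicked section name are assumptions for illustration; in the real second script you would pull the section name out of the CGI parameters.

    use XML::Simple;

    # hypothetical file name and clicked section -- adjust to your setup
    my $file    = 'sections.xml';
    my $clicked = 'section2';

    # parse the file straight into a hashref keyed by section name
    my $hash = XMLin($file);

    # drop the selected section, leave the rest untouched
    delete $hash->{$clicked};

    # write what remains back to the same file under the original root element
    XMLout(
        $hash,
        OutputFile => $file,
        RootName   => 'xml',
        NoAttr     => 1,
    );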

    cheers

    tachyon

    s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print