amvarma has asked for the wisdom of the Perl Monks concerning the following question:

I have a set of XML files in a folder, each of which contains the field <mm:sessionID>157.235.206.12900397BE4:A</mm:sessionID>. I need to update this field regularly with a new session ID, which I get from a login file. Can anyone tell me how to set a new value in the <mm:sessionID> field of the XML files under the folder "/usr/home/automation/Sanity/xml/" using a Perl script? Here is what I have, but the value is not getting updated.
opendir(DIR, "/usr/home/automation/Sanity/xml/");
my @ssl = grep /\.xml$/, readdir(DIR);
close(DIR);
foreach $ssl (@ssl) {
    open(SSL, "$ssl");
    my @in = <SSL>;
    close(SSL);
    foreach (@in) {
        s/sessionID/$SID/;
    }
}
Is anything wrong here? Thanks, amvarma

Re: update a value in xml file
by moritz (Cardinal) on Apr 12, 2011 at 12:22 UTC
    You only ever change the value in the string in your program, and never write it back to the file.

    Since in-place edits are quite error prone, I'd suggest writing a copy with the updated content, then deleting the original and renaming the copy.
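
    For instance, a minimal sketch of that approach, reusing the directory and sample value from the question and assuming $SID already holds the new session ID (the .new suffix and the substitution on the element's content are illustrative choices, not part of the original code):

        use strict;
        use warnings;

        my $dir = "/usr/home/automation/Sanity/xml";
        my $SID = "157.235.206.12900397BE4:A";   # new value read from the login file

        opendir my $dh, $dir or die "Cannot open $dir: $!";
        my @xml = grep { /\.xml$/ } readdir $dh;
        closedir $dh;

        for my $name (@xml) {
            my $file = "$dir/$name";
            open my $in,  '<', $file       or die "Cannot read $file: $!";
            open my $out, '>', "$file.new" or die "Cannot write $file.new: $!";
            while (<$in>) {
                # replace the element's content, not just the word "sessionID"
                s{<mm:sessionID>[^<]*</mm:sessionID>}{<mm:sessionID>$SID</mm:sessionID>};
                print {$out} $_;
            }
            close $in;
            close $out;
            # the copy replaces the original only after it has been written completely
            rename "$file.new", $file or die "Cannot rename $file.new: $!";
        }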

Re: update a value in xml file
by InfiniteSilence (Curate) on Apr 12, 2011 at 13:30 UTC

    I think you might want to take a closer look at your entire solution, rather than continuing with this approach or even making copies of the file.

    The whole idea behind a time stamp is that it signifies when something was done. If you are going in and changing a timestamp, then the timestamp is, essentially, useless. I would recommend changing your entire solution to some kind of persistent datastore, like DBM or a relational database, and then producing XML when, and if, it is actually necessary (a rough sketch follows below).

    BTW: Your current solution is prone to disk failures and poor performance due to disk access/seek time issues.
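
    A minimal sketch of the DBM idea, using the core SDBM_File module; the file name session.dbm and the key sessionID are illustrative, not anything from the original post:

        use strict;
        use warnings;
        use Fcntl;        # for O_RDWR, O_CREAT
        use SDBM_File;

        # keep the current session ID in a small tied DBM store instead of
        # rewriting a folder full of XML files every time it changes
        tie my %store, 'SDBM_File', 'session.dbm', O_RDWR | O_CREAT, 0666
            or die "Cannot tie session.dbm: $!";

        $store{sessionID} = shift @ARGV if @ARGV;   # update from the login step
        print "Current session ID: ", ($store{sessionID} // 'none'), "\n";

        untie %store;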

    Celebrate Intellectual Diversity

Re: update a value in xml file
by sundialsvc4 (Abbot) on Apr 12, 2011 at 15:59 UTC

    I strongly second the consensus that your workflow should move from “one so-called generation of input files,” which will emerge completely unchanged, to producing “one complete generation of new output files,” which completely replaces it. You design the workflow so that you are always producing new generations of the entire collection of files. Keep the old ones... disks are big and cheap, and pkzip works now just as well as it ever did. This gives you a completely repeatable process, with a history and with the capability to re-run at any point.

    For the actual XML processing, use a powerful package like XML::Twig, and get to know (and use well) “XPath expressions.” Beware: do not treat an XML file as “text,” and do not write an algorithm that cannot be shown to work correctly in the general case. (The mere fact that “it seems to work with the data that I tried” doesn’t cut it.)
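
    A minimal sketch of the XML::Twig route, assuming the element really is named mm:sessionID in each file and that $SID holds the new session ID taken from the login file; the temp-file-and-rename step is just one careful way to write the result back:

        use strict;
        use warnings;
        use XML::Twig;

        my $dir = "/usr/home/automation/Sanity/xml";
        my $SID = "157.235.206.12900397BE4:A";   # new value read from the login file

        for my $file (glob "$dir/*.xml") {
            my $twig = XML::Twig->new(
                twig_handlers => {
                    # fires once for every mm:sessionID element in the file
                    'mm:sessionID' => sub { $_->set_text($SID) },
                },
            );
            $twig->parsefile($file);

            # write a fresh copy, then rename it over the original
            open my $out, '>', "$file.new" or die "Cannot write $file.new: $!";
            $twig->print($out);
            close $out;
            rename "$file.new", $file or die "Cannot rename $file.new: $!";
        }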