in reply to Help with multiple line regex subsitution

The s modifier might help you.

$ perl -le '
> $str = qq{Keyword string\nBlah\nKeyword\nstring\nBlech\nKeyword "string"};
> print $str;
> print;
> $str =~ s{Keyword(\s+"?)string}{Keyword${1}1_string}sg;
> print $str;'
Keyword string
Blah
Keyword
string
Blech
Keyword "string"

Keyword 1_string
Blah
Keyword
1_string
Blech
Keyword "1_string"
$

I hope this is helpful.

Cheers,

JohnGG

Re^2: Help with multiple line regex subsitution
by AnomalousMonk (Archbishop) on Apr 14, 2009 at 20:04 UTC
    The s modifier might help you.
    The //s regex modifier would not seem helpful here: it only changes the behavior of the . (dot) regex metacharacter, and that metacharacter appears neither in the OP nor in your reply.
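    (For anyone following along, a tiny demonstration of what //s actually toggles, using a made-up two-line string:)

        $ perl -le '
        > $_ = qq{foo\nbar};
        > print /foo.bar/  ? "matched" : "no match";
        > print /foo.bar/s ? "matched with /s" : "no match with /s";'
        no match
        matched with /s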

    The OPer seems to be looking for a simple way to get the contents of the file as a single string. The idiomatic way to do this is the statement
        my $string = do { local $/; <$fh> };
    (where  $fh is an already-opened file handle). After modification, this string can be printed with a single  print statement. Caution: this approach does not scale well to files of more than several megabytes (depending on your system's memory).
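    (If it helps, a minimal sketch of the whole read-modify-write cycle along those lines; the file names and the 'Keyword ... string' substitution are just placeholders borrowed from the substitution upthread:)

        use strict;
        use warnings;

        # slurp the whole file into a single string (fine for small files)
        open my $in, '<', 'input.txt' or die "Cannot open input.txt: $!";
        my $string = do { local $/; <$in> };
        close $in;

        # operate on the single string ...
        $string =~ s{Keyword(\s+"?)string}{Keyword${1}1_string}g;

        # ... and write it back out with a single print
        open my $out, '>', 'output.txt' or die "Cannot open output.txt: $!";
        print $out $string;
        close $out or die "Cannot close output.txt: $!";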

    Also, the following regex accepts only balanced double-quotes on the "string" and requires the trailing '{' character.

    >perl -wMstrict -le
    "my $s = join '', @ARGV;
    $s = eval qq[qq[$s]];
    my $words = qr{ foo | bar }xms;
    $s =~ s{ (Kw \s+) (\"?) ($words) (\2 \s* \{) }{$1${2}1_$3$4}xmsg;
    print 'output:';
    print $s;
    "
    "Kw \n foo \n { \n Kw \n \"bar\"\n { \n Kw foo { Kw \"foo\" { "
    "Kw \"foo { Kw foo\" { Kw foo Kw {"
    output:
    Kw
     1_foo
     {
     Kw
     "1_bar"
     {
     Kw 1_foo { Kw "1_foo" { Kw "foo { Kw foo" { Kw foo Kw {
    Updates:
    1. The //g modifier, which you use in your reply, is, however, critical in a single-string solution!
    2. Added regex solution.
      Caution: this approach does not scale well to files of more than several megabytes (depending on your system's memory).

      In the unlikely case that this is a problem, it's fairly straightforward to switch to reading blocks rather than slurping the entire file in one go.

      You just have to remember to handle the case where a sequence you want to match is split across a block boundary. That comes down to checking whether the last part of the block could possibly be the prefix of a matching sequence and, if it could, prepending it to the next block you read. (Implementing this is left as an exercise for someone in a time zone where it's earlier in the day.)
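      (For the record, a rough sketch of that idea; the file names, the block size and the 'Keyword ... string' pattern borrowed from upthread are all just placeholders:)

        use strict;
        use warnings;

        my $block_size = 8 * 1024;    # arbitrary block size for the example

        open my $in,  '<', 'input.txt'  or die "Cannot open input.txt: $!";
        open my $out, '>', 'output.txt' or die "Cannot open output.txt: $!";

        my $buf = '';
        while ( read $in, my $block, $block_size ) {
            $buf .= $block;

            # Hold back any trailing text that could still be the start of a
            # 'Keyword ... string' match continuing into the next block:
            # a partial 'Keyword', or 'Keyword' plus whitespace, an optional
            # quote and a partial 'string'.
            my $carry = '';
            if ( $buf =~ s{ ( K | Ke | Key | Keyw | Keywo | Keywor
                              | Keyword (?: \s+ "? (?: s | st | str | stri | strin )? )?
                            ) \z }{}x )
            {
                $carry = $1;
            }

            # everything not held back is safe to substitute and print
            $buf =~ s{Keyword(\s+"?)string}{Keyword${1}1_string}g;
            print $out $buf;
            $buf = $carry;
        }

        # whatever remains can no longer be extended; process and flush it
        $buf =~ s{Keyword(\s+"?)string}{Keyword${1}1_string}g;
        print $out $buf;

        close $in;
        close $out or die "Cannot close output.txt: $!";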

      But it's really very unlikely that this will be a problem. A file with a format like that is unlikely to reach even one megabyte -- if it does, then either it should have been split up into multiple files long ago (if it's intended for human reading and writing), or it should be stored in a standard format like XML or JSON and transformed with a standard library, not regular expressions.

        In the unlikely case that this is a problem, it's fairly straightforward to switch ...
        I think we are not so far apart on this.

        Switching involves abandoning one approach to solving the problem for a significantly different one, perhaps one of those outlined in the balance of your reply (and I might more clearly have expressed the idea that 'this approach' meant 'read the entire file into a string, operate on the string, and write the string back to a file'). This is what I mean when I write that the approach does not scale well.

        I agree that, given what one can divine about the OPer's data from the OP, the problem is not likely to arise. In fact, I would go further and say that, in my experience, problems like this never happen – until they happen, and then you have to go back, change a bunch of stuff and re-test everything, which, while straightforward, still costs time and effort and adds risk; hence the caution about lack of scalability.