in reply to Search and replace on a 130000+ line file.

If you have the disk space, the simplest option is a stream edit, kinda like this (untested code):

    open (INFILE, "<infile.txt");
    open (OUTFILE, ">outfile.txt");
    while (<INFILE>) {
        # $_ will hold one line of your file, and the loop will
        # (slowly) go through your entire file
        # Mangle $_ here, e.g.
        s/^\+host-1-3-1-10:/$ipnum/;
        # print it into the other file
        print OUTFILE $_;
    }

Or variations on the above, to suit.

Update: I forgot about the ip bit. Depending on how many there are, you could push them into a hash or use some database solution, as tilly didn't quite mention in the chatbox.
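To make the hash idea concrete, here is a minimal sketch. It assumes the host-to-IP mapping lives in lines like "host-1-3-1-10 10.1.3.1" (that format, and the sub names, are made up for illustration); lines with no match pass through unchanged:

```perl
use strict;
use warnings;

# Build a host-name -> IP lookup from lines like "host-1-3-1-10 10.1.3.1"
# (the exact mapping format here is an assumption).
sub build_ip_map {
    my %ip_for;
    for my $line (@_) {
        my ($host, $ip) = split ' ', $line;
        $ip_for{$host} = $ip if defined $ip;
    }
    return \%ip_for;
}

# Rewrite one line, mirroring the s/^\+host-1-3-1-10:/$ipnum/ idea above,
# but looking the replacement up in the hash.
sub substitute_host {
    my ($line, $ip_for) = @_;
    $line =~ s/^\+(\S+?):/exists $ip_for->{$1} ? "+$ip_for->{$1}:" : "+$1:"/e;
    return $line;
}

my $map = build_ip_map("host-1-3-1-10 10.1.3.1");
print substitute_host("+host-1-3-1-10: up\n", $map);
# prints "+10.1.3.1: up"
```

You would call substitute_host on each line inside the while loop above, so the whole job stays a single pass over the big file.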

____________________
Jeremy
I didn't believe in evil until I dated it.

Replies are listed 'Best First'.
Re^2: Search and replace on a 130000+ line file.
by tadman (Prior) on May 11, 2001 at 22:25 UTC
    If the substitution table is too large to load into a hash in its entirety, you could load in a portion of it, process the second file, and load in additional portions until it was completed. This way, you would have to make n passes over the secondary file, where the substitution table is split into n pieces.
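The n-pass idea above could be sketched like this. To keep it self-contained the "files" are array refs; in practice each pass would stream the data file to a temp file and rename it back (the sub name and chunk size are illustrative, not from the thread):

```perl
use strict;
use warnings;

# Split the substitution table into chunks small enough for RAM,
# and rewrite the data once per chunk: n chunks => n passes.
sub multipass_substitute {
    my ($subs, $lines, $chunk_size) = @_;   # $subs: [ [old, new], ... ]
    my @pairs = @$subs;
    while (my @chunk = splice @pairs, 0, $chunk_size) {
        my %table = map { @$_ } @chunk;     # this chunk's lookup hash
        my $match = join '|', map { quotemeta } keys %table;
        # one full pass over the data for this chunk
        s/($match)/$table{$1}/g for @$lines;
    }
    return $lines;
}

my @lines = ("host-a up", "host-b down");
multipass_substitute(
    [ ['host-a', '10.0.0.1'], ['host-b', '10.0.0.2'] ],
    \@lines,
    1,                                      # chunk size 1 => two passes
);
# @lines is now ("10.0.0.1 up", "10.0.0.2 down")
```

With 130K substitutions you would pick the chunk size from available memory, trading RAM for extra passes over the big file.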

    This is, of course, assuming that you can't fit the substitution file entirely into RAM in a hash, and that using a tied hash is out of the question because of speed concerns. However, 130K lookups in a tied hash cannot take that long, so it might be a viable solution. If you were processing a file with 130M lines, then you would have to think of something else.
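For reference, tying the lookup hash to an on-disk file is only a couple of extra lines. This sketch uses SDBM_File because it ships with Perl; DB_File or GDBM_File work the same way (the filename is made up for the demo):

```perl
use strict;
use warnings;
use Fcntl;
use SDBM_File;

# A tied hash keeps the substitution table on disk instead of in RAM:
# every fetch/store goes to the dbm file, so lookups are slower but
# memory use stays flat no matter how big the table gets.
my %ip_for;
my $file = "subst_demo";                     # hypothetical demo filename
tie %ip_for, 'SDBM_File', $file, O_RDWR|O_CREAT, 0644
    or die "tie $file: $!";

$ip_for{'host-1-3-1-10'} = '10.1.3.1';       # stored on disk, not in memory
print $ip_for{'host-1-3-1-10'}, "\n";

untie %ip_for;
unlink "$file.pag", "$file.dir";             # clean up the demo files
```

Loading the table is a one-time cost; after that the main loop looks identical to the plain-hash version.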

    A curious observation, though, is that in your example the "output" file is the same as the input file, just reorganized. I'm sure this is because of simplification on your part.