in reply to replacing special characters in file

Here is a simpler variant of the approach suggested by anonymized user 468275 above. It will help you diagnose the non-ASCII content of a given file (byte values between 128 and 255), and give you an easy way to copy/paste the numeric references for (strings of) single-byte characters, so you can specify replacements for them. The following script is a simple stdin-to-stdout filter -- run it like this: "perl chr-filter.pl orig.file > viewable.file"
    #!/usr/bin/perl
    while (<>) {
        s/([\x80-\xff])/sprintf "\\x{%02x}", ord($1)/eg;
        print;
    }
So, in the output, you'll see things like "\x{e8}" if the input contained an iso-8859-1 encoded version of "è", and so on.
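For instance, here is the filter applied inline (as a one-liner rather than the script file) to a short sample containing the single byte 0xE8:

    # Feed one ISO-8859-1 byte (0xE8, "è") through the same substitution
    printf 'caf\xe8\n' | perl -pe 's/([\x80-\xff])/sprintf "\\x{%02x}", ord($1)/eg'
    # prints: caf\x{e8}
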

If the input data you're working with happens to be utf8-encoded, then it will be better to use "binmode( $fh, ':utf8' )" on the file handle before reading the data, and then you can treat the contents as Unicode characters (see perldoc perlunicode).
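As a quick sketch of the utf8 case: for a one-liner you can use perl's -CS switch, which is equivalent to setting the ':utf8' layer on the standard streams, so each multi-byte sequence is decoded to a single character before the substitution runs (the %04x width here is just a formatting choice, not required):

    # UTF-8 "è" is the two bytes 0xC3 0xA8; after decoding it is one
    # character, U+00E8, so it prints as a single escape
    printf 'caf\xc3\xa8\n' | perl -CS -pe 's/([^\x00-\x7f])/sprintf "\\x{%04x}", ord($1)/eg'
    # prints: caf\x{00e8}

Note that without -CS (or binmode) the same input would print as two separate byte escapes, \x{c3}\x{a8}.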

Ideally, you'll be able to tell from the context around a given (string of) "\x{HH}" symbol(s) what sort of thing you want to replace it with.

Re^2: replacing special characters in file
by Anonymous Monk on Jul 18, 2007 at 22:07 UTC
    Thank you! :-)