FreakyGreenLeaky has asked for the wisdom of the Perl Monks concerning the following question:
Greetings Monks,
I'm trying to figure out how to filter/blank out bad UTF8 chars.
This particular snippet works perfectly to prevent croakage on bad UTF8 (i.e., to identify bad UTF8 input prior to further processing):

```perl
use Encode qw(is_utf8);
# check if $str is UTF8 and contains bad UTF8
print "bad UTF8\n" if is_utf8($str) and not is_utf8($str, 1);
```

Then, I may want to salvage what I can from $str (i.e., filter/remove the bad UTF8 chars) by running it through iconv (which I read somewhere *may* remove bad UTF8 chars):

```
iconv -c --from UTF-8 --to UTF-8
```

However, that involves a slow shell call, so I tried Text::Iconv:

```perl
use Text::Iconv;
my $conv = Text::Iconv->new("utf8", "utf8");
$str = $conv->convert($str);
```

But that does not filter out bad UTF8 chars.
Is there a fast/efficient cpan module/way to strip out any bad UTF8 chars?
If not a selective filter, then as a last resort, is there a brute-force method of simply removing *all* UTF8 chars?
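For reference, one approach along these lines uses only the core Encode module: decode the raw bytes with a CHECK argument that controls what happens to malformed sequences, then re-encode. A minimal sketch, assuming the input is a raw (undecoded) byte string, here called `$bytes`:

```perl
use strict;
use warnings;
use Encode qw(decode encode FB_DEFAULT);

my $bytes = "valid \xFF\xFE input";   # hypothetical byte string containing bad UTF-8

# FB_DEFAULT replaces each malformed sequence with U+FFFD on decode;
# re-encoding gives clean UTF-8 bytes with replacement characters.
my $replaced = encode('UTF-8', decode('UTF-8', $bytes, FB_DEFAULT));

# A coderef CHECK handler is called for each malformed byte and its
# return value is substituted, so returning '' drops them entirely.
my $stripped = encode('UTF-8', decode('UTF-8', $bytes, sub { '' }));

# Brute force: strip every non-ASCII byte, keeping only 7-bit characters.
(my $ascii = $bytes) =~ s/[^\x00-\x7F]//g;
```

This avoids any shell call, though whether it is faster than Text::Iconv for a given workload would need benchmarking.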
Thanks
Replies are listed 'Best First'.
Re: Filtering out bad UTF8 chars
by ikegami (Patriarch) on Oct 12, 2011 at 18:41 UTC
by FreakyGreenLeaky (Sexton) on Oct 13, 2011 at 09:55 UTC
by ikegami (Patriarch) on Oct 13, 2011 at 14:49 UTC