in reply to ISO 8859-1 characters and \w \b etc.

If you have Perl 5.8, you could convert the input 8859-1 data to utf8, do the regex matching, and convert back to 8859-1 for output (assuming you don't want to just switch everything over to utf8 globally). Something like this would work:
#!/usr/bin/perl
use strict;
use Encode;

while (<>) {
    my $utf8 = decode( 'iso8859-1', $_ );
    my @words = ( $utf8 =~ /\b(\w+)\b/g );
    print join "\n", map { encode( 'iso8859-1', $_ ) } @words;
    print "\n";
}
The output is one "word" per line: accented letters are treated as "\w", while things such as currency symbols, quotes, the inverted question mark, the non-breaking space, etc., act as things that trigger "\b".
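To see why the decode step matters, here's a minimal sketch of my own (using a "\xEF" escape rather than a literal latin1 character, and variable names I just made up): against the raw byte string, "\w" only matches ASCII word characters, so an accented word gets split in two; after decode(), the accented letter counts as "\w" and the word stays whole.

use strict;
use Encode;

my $bytes = "na\xEFve";                    # "naive" with i-diaeresis, as iso8859-1 bytes
my @raw   = ( $bytes =~ /\b(\w+)\b/g );    # ('na', 've') -- the accented byte triggers \b
my $chars = decode( 'iso8859-1', $bytes );
my @wide  = ( $chars =~ /\b(\w+)\b/g );    # one element: the whole word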

Assuming the 8859-1 text is in a file, you would run the example above like this (let's call the script "latin1-tokenizer"):

latin1-tokenizer < latin1.txt > latin1.tkns
That example could also be written without the encode/decode calls, using PerlIO layers instead:
#!/usr/bin/perl
use strict;

open( IN, "<:encoding(iso8859-1)", $ARGV[0] )
    or die "couldn't read $ARGV[0]: $!";
binmode STDOUT, ":encoding(iso8859-1)";

while (<IN>) {
    my @words = ( /\b(\w+)\b/g );
    print join "\n", @words;
    print "\n";
}

# run it like this:  tokenizer latin1.txt > latin1.tkns
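And if you do want the "switch everything over" route mentioned at the top, the open pragma can set a default layer lexically, and its ":std" subpragma applies it to STDIN/STDOUT/STDERR as well. A rough sketch of mine (reading from STDIN; the script name is made up, and I haven't run this against real latin1 data):

#!/usr/bin/perl
use strict;
use open ':std', ':encoding(iso8859-1)';   # default layer, applied to the STD handles too

while (<STDIN>) {
    my @words = ( /\b(\w+)\b/g );
    print join "\n", @words;
    print "\n";
}

# run it like this:  tokenizer-std < latin1.txt > latin1.tkns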
(I'm unsure about posting test data with actual latin1 characters, so I leave it to you to try it on your own data.)