in reply to Regexp and newlines, what am I missing?

If you're having trouble with weird and uncontrolled whitespace, it might be a good idea to put together a proper diagnostic tool, so that you at least get a warning when the whitespace is likely to break the assumptions that are implicit in your script. Something like:
    #!/usr/bin/perl

    =head1 NAME

    space-hist -- tabulate histogram of whitespace patterns

    =head1 SYNOPSIS

    space-hist file.name

    =head1 DESCRIPTION

    This will read the full content of the given text file into memory,
    tokenize the data on whitespace, and then count how many times each
    distinct whitespace string occurs. The various whitespace string
    patterns are listed on STDOUT as strings of hex codes, with the
    frequency of occurrence for each, in descending order of frequency.

    =cut

    use strict;

    die "Usage: $0 file.txt\n" unless ( @ARGV == 1 and -f $ARGV[0] );

    $/ = undef;    # slurp mode
    open( IN, "<", $ARGV[0] ) or die "open failed for $ARGV[0]: $!";
    $_ = <IN>;
    @_ = split /(\s+)/;

    my %whitespace;
    for my $tkn ( @_ ) {
        next unless ( $tkn =~ /^\s+$/ );
        my $wshex = join " ", map { sprintf "%02x", ord($_) } split //, $tkn;
        $whitespace{$wshex}++;
    }
    print "Whitespace tokens found in $ARGV[0]:\n";
    for ( sort { $whitespace{$b} <=> $whitespace{$a} } keys %whitespace ) {
        printf "%4d %s\n", $whitespace{$_}, $_;
    }
(updated to print "Usage" when @ARGV is not as expected)

The point is: you either need to adapt your script to the range of whitespace patterns that actually occur in your data, or else you need to doctor your data so that its whitespace patterns conform to the limitations of your script.

Looking at what's actually in the data is only the first step, but it's totally worthwhile to make that step as thorough as possible. BTW, using my script as input to itself, I get the following (because I created it with emacs on Mac OS X):

    $ space-hist space-hist
    Whitespace tokens found in space-hist:
     125 20
      12 0a 0a
      11 0a
       4 0a 20 20 20 20
       2 20 20