in reply to alter $/ - but why?

As indicated previously, "chomp" is equivalent to "s{$/$}{}" on a string, so if you're going to use it on files of unknown origin (line-endings varying from file to file), it would be a good idea to make sure that $/ is set appropriately for each file.
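To make that concrete, here's a minimal sketch (the sample string is made up) showing that chomp only strips whatever $/ currently holds -- so a CRLF-terminated line keeps its "\r" unless $/ is set to match:

```perl
my $line = "some data\x0d\x0a";   # a hypothetical CRLF-terminated line

{
    local $/ = "\xa";             # the usual unix default
    my $copy = $line;
    chomp $copy;                  # strips the "\n" but leaves the "\r" behind
    print $copy =~ /\xd$/ ? "still has CR\n" : "clean\n";   # still has CR
}

{
    local $/ = "\x0d\x0a";        # set to match the file's actual terminator
    my $copy = $line;
    chomp $copy;                  # now strips the full CRLF
    print $copy =~ /\xd$/ ? "still has CR\n" : "clean\n";   # clean
}
```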

But that sub does have its drawbacks: apart from the fact that it will pull the full content of a "\r-only" type of text file into memory, there is also the possibility that a single file contains a mix of "\r" and "\n" patterns -- e.g. someone on a unix box quickly edits a CRLF-type file and adds a couple of "\n-only" lines at the top, or the file contains something other than text, and so on.

If the goal is simply to be able to handle all sorts of line-termination patterns (and you aren't worried about getting hit with a massive Mac "\r-only" file that'll chew up too much RAM), you could do without the sub and go right to a main processing loop like this:

    $/ = "\xa";
    while ( <FILE> ) {
        s/\xd?\xa$//;   # does what chomp would do; handles CRLF and LF-only
        for my $line ( split /\xd/, $_, -1 )   # handles CR-only cases
        {
            # now we're line-oriented no matter what the input style is...
        }
    }
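For anyone who wants to try that loop without hunting down suitably mangled files, here's a self-contained version; the in-memory filehandle and the mixed-style sample string are just assumptions for illustration:

```perl
# A hypothetical sample mixing CRLF, LF-only, and CR-only line endings:
my $data = "one\x0d\x0atwo\x0athree\x0dfour\x0a";
open my $fh, '<', \$data or die "open: $!";   # in-memory filehandle

$/ = "\xa";
my @lines;
while ( <$fh> ) {
    s/\xd?\xa$//;                              # chomp-alike: strips CRLF or LF
    push @lines, $_ for split /\xd/, $_, -1;   # splits out CR-only lines
}
print join( '|', @lines ), "\n";               # one|two|three|four
```

One quirk worth knowing: split returns an empty list when given an empty string, so a completely blank line yields no iterations of the inner loop.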
OTOH, if the goal is to be scrupulous and careful about knowing what sorts of line termination are showing up in your data files, write a separate diagnostic for that, have it produce a suitably detailed report for each file (e.g. number of "(\r\n)+", number of "(\n)+", number of "(\r)+"), and then configure your data-processing script(s) to work from that report.
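Such a diagnostic could start out as simple as this sketch (the sample string is made up, and it tallies individual terminators rather than the "(\r\n)+"-style runs mentioned above -- append a "+" to each alternative if you want runs instead):

```perl
# Tally each line-termination style seen in a chunk of text.
sub count_terminators {
    my ($text) = @_;
    my %count = ( "\x0d\x0a" => 0, "\x0a" => 0, "\x0d" => 0 );
    # Order matters: the alternation must try CRLF before lone CR or lone LF.
    $count{$1}++ while $text =~ /(\x0d\x0a|\x0d|\x0a)/g;
    return %count;
}

my %c = count_terminators("a\x0d\x0ab\x0ac\x0d");   # a made-up mixed sample
printf "CRLF: %d  LF-only: %d  CR-only: %d\n",
    $c{"\x0d\x0a"}, $c{"\x0a"}, $c{"\x0d"};
# prints:  CRLF: 1  LF-only: 1  CR-only: 1
```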