To provide a little more detail on what's happening here: your code, as written, reads the entire file into memory at once. If the file is several gigabytes, you'll need that many gigabytes of memory dedicated to that process just to hold the data, not counting the memory consumed by the OS, Perl itself, and so forth.
As others have suggested, you need to iterate over the filehandle so that you read one line at a time and operate on it. If the small output files need internal sorting, you may need an intermediate step: write them out unsorted, then reread each one and sort it, since you can't hold the entire dataset in memory.
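A minimal sketch of what that line-by-line loop looks like; the filename, the sample data, and the line-counting body are illustrative stand-ins for your own file and processing code:

```perl
use strict;
use warnings;

my $filename = 'lines.txt';

# Create a small sample file so the example is runnable on its own.
open my $out_fh, '>', $filename or die "Can't write $filename: $!";
print {$out_fh} "alpha\nbeta\ngamma\n";
close $out_fh;

# Read one line per iteration; only that line is held in memory.
my $count = 0;
open my $input_fh, '<', $filename or die "Can't open $filename: $!";
while ( my $line = <$input_fh> ) {
    chomp $line;    # strip the trailing newline
    $count++;       # ... operate on the single line here ...
}
close $input_fh;

print "$count\n";   # prints 3
```

The `while ( my $line = <$input_fh> )` idiom is the important part: it pulls one record at a time from the handle, so memory use stays flat no matter how large the file is.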
Additionally, you'd be better off using 3-argument open and a scalar filehandle:
open my $input_fh, '<', $filename or die "Can't open $filename: $!";
unless you're absolutely certain your filename string is free of characters that could be misinterpreted by 2-argument open (and frankly, it's a good habit to be in anyway). The scalar filehandle also gives you scoping, which bareword filehandles don't really have.
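To illustrate the scoping point, here's a small sketch; the `first_line` helper and the demo filename are my own inventions, not from your code:

```perl
use strict;
use warnings;

# A scalar (lexical) filehandle lives only in its enclosing scope, so
# Perl closes the file automatically when $fh goes out of scope. A
# bareword filehandle (e.g. open FH, ...) is effectively global.
sub first_line {
    my ($filename) = @_;
    open my $fh, '<', $filename or die "Can't open $filename: $!";
    my $line = <$fh>;
    chomp $line if defined $line;
    return $line;
}   # $fh falls out of scope here and the file is closed for us

# Demonstration with a throwaway file.
open my $out, '>', 'demo.txt' or die "Can't write demo.txt: $!";
print {$out} "first\nsecond\n";
close $out;

print first_line('demo.txt'), "\n";   # prints "first"
```

Because the handle is just a scalar, you can also pass it to subroutines or store it in a data structure, which you can't cleanly do with a bareword.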
(And, as others have also pointed out, you should follow the markup directions right below the box you write your post in, which say not to use <pre> tags.)