in reply to Re: How to split big files with Perl ?
in thread How to split big files with Perl ?


This works much better :)

This splits the file into 2 GB chunks. I have tested it on about 25-30 ISOs I have stored on my PC and it works great, though sometimes writing performance is a little slow. You can also change how many GB you want to split it into by changing the iterator's value.
use strict;
use warnings;

files();

sub files {
    foreach (@ARGV) {
        print "processing $_\n";
        open my $fh, '<', $_ || die "cannot open $_ $!";
        binmode($fh);
        my $num      = '000';
        my $iterator = 0;
        split_file( $fh, $num, $_, $iterator );
    }
}

sub split_file {
    my ( $fh, $num, $name, $iterator ) = @_;
    my $split_fh = "$name" . '.split';
    open( my $out_file, '>', $split_fh . $num )
        || die "cannot open $split_fh$num $!";
    binmode($out_file);
    while (1) {
        $iterator++;
        my $buf;
        read( $fh, $buf, 32 );
        print( $out_file $buf );
        my $len = length $buf;
        if ( $iterator == 67108864 ) {    # split into 2gb chunks
            $iterator = 0;
            $num++;
            split_file( $fh, $num, $name );
        }
        elsif ( $len !~ "32" ) {
            last;
        }
    }
}
Works pretty quickly! It split almost 5 GB in 4.4333 minutes. I do see a decrease in performance sometimes, though other times it writes very quickly. Go ahead and test it on one of your ISOs. What would be the most efficient read/write buffer size?

Replies are listed 'Best First'.
Re^3: How to split big files with Perl ?
by RichardK (Parson) on Dec 27, 2014 at 17:20 UTC

    The most efficient block size will depend on lots of things, but the memory page size of your OS will likely be the most significant. 32 bytes is way too small, I'd start with 4k or 8k and go up from there. Why not try several different multiples of 4K and see which one works best for you?

    Also, read returns the number of bytes actually read so there's really no need to use length.

    my $len = read($in,$buf,4*1024); ...

    And $len is an integer so it would be better to use the numeric not equal '!=' rather than the pattern match operator.
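
    For example, a rough sketch of that kind of read loop (untested; 4K is only a starting point to benchmark, error handling and the chunk bookkeeping are left out, and the filehandle names are borrowed from the original sub) might look like this:

    my $buf;
    while ( my $len = read( $fh, $buf, 4 * 1024 ) ) {   # $len = bytes actually read (0 at eof)
        print {$out_file} $buf;
        last if $len != 4 * 1024;    # numeric compare instead of the pattern match
    }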

      Well, I did turn up the speed some, but I would watch my memory fill up as it was running. It would punch out a 2 GB file in no time (less than 10 secs or so), but then I would see a dramatic slowdown, as in it would only be writing KB/s instead of MB/s. I will try your suggestion as well, thanks.

      Also, $len = length $buf; gets the length of $buf, and the check later on makes sure it is the same size as the read length. If it is not the same size, then that is more than likely the end of the file. I need to figure out a better way to check for end of file, actually.
Re^3: How to split big files with Perl ?
by Anonymous Monk on Dec 28, 2014 at 04:22 UTC

    Thanks for taking the time to update. Some points to review:

    • Calling split_file recursively means that your stack will fill up as the number of chunks goes up. You've got one buffer per sub call, so that's probably the source of the memory usage and slowdown you reported.
    • Your algorithm/logic, even though it works, is confusing, and can actually go wrong: Right after you read from the file, you use $iterator to determine whether to call split_file again - I think you need to look at $len first. Keeping a running count of the bytes written to the current chunk and comparing it to the desired chunk size might be better. Also, inside the while(1) loop, you don't seem to consider what happens after the call to split_file - the loop keeps going! In fact, if the file being split is exactly divisible by the chunk size, you create one final .splitNNN file that is empty.
    • This is not correct: open my $fh, '<', $_ || die "cannot open $_ $!";, since it gets parsed as open(my $fh, '<', ($_ || die("cannot open $_ $!"))); (you can see this by running perl -MO=Deparse,-p -e 'open my $fh, "<", $_ || die "cannot open $_ $!";'). Either write open my $fh, '<', $_ or die "cannot open $_ $!"; (or has lower precedence) or write open( my $fh, '<', $_ ) || die "cannot open $_ $!";
    • You're still not checking the return value of read, which is undef on error.
    • The code could also use a bit of cleanup. Just a couple of examples: The name $split_fh is a bit confusing, and you could append $num to it right away. In split_file you set $iterator = 0; but then don't use it in the recursive call to split_file.

    I think this might be one of those situations where it would make sense to take a step back and try to work the best approach out without a computer - how would you solve this problem on paper?

    But anyway, I am glad you took the time to work on and test your code! Tested code is important for a good post.

      Yeah, memory management is not something I am sure about. Perl is my first language, and so far it is the only language I use. The significant slowdown can be fixed by using a small value as the read length, but that does not output fast enough. There is still a lot I am not completely positive about, like when you say "your stack will fill up", do you mean the memory?

      As for the logic, it is pretty straightforward (or so I thought ;) ): the iterator is what actually sets the size you want to split the file into, so doubling it will actually make it split the file into 4 GB chunks, and once the iterator hits its mark, it calls the sub again, until $buf != read length (which was the only way I knew of to check for eof).

      If you set the iterator to a higher value, you of course need to adjust the read length of $buf. With that said, what would be a better way to check $buf for end of file? And thanks for pointing all this out to me :)

        Other people have explained the concepts elsewhere, for example one place to start is Wikipedia: see stack and recursion. But the (very) simplified idea in this case is this: When a sub foo calls a sub bar, the state of foo has to be saved somewhere (the stack) so that when bar returns, foo can continue where it left off. This is true for every sub call, even when a sub calls itself (that's recursion). So for every time split_file is called, a new $buf variable is kept on the stack, taking up the memory. The alternative approach is to not use recursion, and instead do everything in a single loop.

        See the documentation of read: it returns zero when the end-of-file is reached. There's also the eof function, but that's rarely needed since usually the return value of read is enough. There is also one more thing to know: In some cases, like when reading from a serial port or network connection, it's possible for read to return less than the requested number of bytes without it always meaning end-of-file or an error. But that case is extremely unlikely for reading files from a disk (maybe impossible, I'm not sure on the internals there).

        Anyway, the way I would think about the algorithm is this: The central thing in the program is the number of bytes written to each chunk. read returns the number of bytes read, and therefore the number of bytes to be written to the current file, so that's what we use to keep track of how far we are in the current chunk, and the decision of whether to start a new chunk is based on that. You would also need to cover the cases of read returning undef (read error) and read returning zero (end-of-file).
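
        A rough, untested sketch of that single-loop approach (the buffer size and 2 GB chunk size are just placeholders to tune, and the .splitNNN naming follows your original code) might look something like this:

        use strict;
        use warnings;

        my $chunk_size = 2 * 1024 * 1024 * 1024;   # 2 GB per chunk (adjust to taste)
        my $buf_size   = 64 * 1024;                # read/write buffer, tune as needed

        for my $name (@ARGV) {
            open my $in, '<', $name or die "cannot open $name: $!";
            binmode $in;

            my $num     = '000';   # suffix for the current chunk
            my $written = 0;       # bytes written to the current chunk so far
            my $out;

            while (1) {
                my $len = read( $in, my $buf, $buf_size );
                die "read error on $name: $!" if !defined $len;   # undef = error
                last if $len == 0;                                # 0 = end-of-file

                if ( !defined $out or $written >= $chunk_size ) { # time for a new chunk
                    open $out, '>', "$name.split$num" or die "cannot open $name.split$num: $!";
                    binmode $out;
                    $num++;        # '000' -> '001' -> ... (string increment)
                    $written = 0;
                }
                print {$out} $buf or die "write error: $!";
                $written += $len;
            }
        }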