in reply to Re^2: How to split big files with Perl ?
in thread How to split big files with Perl ?

Thanks for taking the time to update. Some points to review:

I think this might be one of those situations where it would make sense to take a step back and work out the best approach without a computer - how would you solve this problem on paper?

But anyway, I am glad you took the time to work on and test your code! Tested code is important for a good post.

Re^4: How to split big files with Perl ?
by james28909 (Deacon) on Dec 28, 2014 at 06:49 UTC
    Yeah, memory management is not something I am sure about. Perl is my first language and so far it is the only language I use. The significant slowdown can be fixed by using a small value as the read length, but then it does not output fast enough. There is still a lot I am not completely positive about; for example, when you say "your stack will fill up", do you mean the memory?

    As for the logic, it is pretty straightforward (or so I thought ;) ). The iterator is what actually sets the size at which you want to split the file, so doubling it will actually make it split the file into 4 GB chunks, and once the iterator hits its mark, it calls the sub again, until $buf != read length (which was the only way I knew of to check for eof).

    If you set the iterator to a higher value, you of course need to adjust the read length of $buf. With that said, what would be a better way to check $buf for end of file? And thanks for pointing all this out to me :)

      Other people have explained the concepts elsewhere; for example, one place to start is Wikipedia: see stack and recursion. But the (very) simplified idea in this case is this: When a sub foo calls a sub bar, the state of foo has to be saved somewhere (the stack) so that when bar returns, foo can continue where it left off. This is true for every sub call, even when a sub calls itself (that's recursion). So every time split_file is called, a new $buf variable is kept on the stack, taking up memory. The alternative approach is to not use recursion, and instead do everything in a single loop.
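
      To make the stack growth concrete, here is a tiny, self-contained sketch (the sub name and sizes are made up for illustration): each recursive call allocates its own lexical $buf, and none of those buffers can be freed until the deepest call returns.

          # Illustrative only: every level of the recursion holds its own
          # copy of $buf, so all of them stay allocated at the same time.
          sub gobble {
              my ($depth) = @_;
              my $buf = "x" x (1024 * 1024);       # 1 MB held at this level
              gobble($depth + 1) if $depth < 1000; # ~1 GB live at the deepest call
          }
          gobble(0);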

      See the documentation of read: it returns zero when the end-of-file is reached. There's also the eof function, but that's rarely needed since usually the return value of read is enough. There is also one more thing to know: In some cases, like when reading from a serial port or network connection, it's possible for read to return less than the requested number of bytes without it always meaning end-of-file or an error. But that case is extremely unlikely for reading files from a disk (maybe impossible, I'm not sure on the internals there).
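
      For example, a typical read loop that distinguishes the three outcomes might look like this (the filename and read size are just placeholders):

          open my $in, '<:raw', 'big.dat' or die "open: $!";
          while (1) {
              my $n = read $in, my $buf, 1024 * 1024;  # request up to 1 MB
              die "read error: $!" unless defined $n;  # undef means a read error
              last if $n == 0;                         # zero means end-of-file
              # ... use the $n bytes now in $buf ...
          }
          close $in;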

      Anyway, the way I would think about the algorithm is this: The central thing in the program is the number of bytes written to each chunk. read returns the number of bytes read, and therefore the number of bytes to be written to the current file, so that's what we use to keep track of how far into the current chunk we are, and to decide whether it's time to start a new chunk. You would also need to cover the cases of read returning undef (a read error) and read returning zero (end-of-file).
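
      Putting those pieces together, a loop-based version of the splitter might look roughly like this. This is only a sketch of the approach described above, not your code: the filenames, chunk size, and read size are placeholders, and a chunk can run over the target size by at most one read's worth of bytes.

          use strict;
          use warnings;

          my $chunk_size = 2 * 1024 * 1024 * 1024;  # target bytes per chunk (2 GB)
          my $read_size  = 1024 * 1024;             # bytes per read (1 MB)

          open my $in, '<:raw', 'big.dat' or die "open input: $!";

          my ($part, $written, $out) = (0, 0, undef);
          while (1) {
              my $n = read $in, my $buf, $read_size;
              die "read error: $!" unless defined $n;  # undef => read error
              last if $n == 0;                         # zero  => end-of-file

              # Open a new chunk when none is open yet or the current one is full.
              if (!defined $out or $written >= $chunk_size) {
                  close $out if defined $out;
                  open $out, '>:raw', sprintf('big.dat.%03d', $part++)
                      or die "open output: $!";
                  $written = 0;
              }
              print {$out} $buf or die "write error: $!";
              $written += $n;  # bytes written so far to the current chunk
          }
          close $out if defined $out;
          close $in;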