You can set the input record separator ($/) to a reference to an integer (a byte count) rather than an EOL string, and read the file in fixed-size chunks that way. Then write each chunk to a different file. Refer to perlvar.
You can use File::Find to gather all your filenames, or just readdir. It would be easy to generate chunk names along the lines of original_name.1, original_name.2, .3, .4, and so on.
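Putting those two ideas together, here's a minimal sketch of the splitting step. The chunk size (1 MB) and the subroutine name are just illustrative choices, not anything prescribed:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Split a file into fixed-size chunks named original_name.1, .2, ...
# Returns the number of chunks written.
sub split_file {
    my ($name, $chunk_size) = @_;
    open my $in, '<', $name or die "Can't read $name: $!";
    binmode $in;
    local $/ = \$chunk_size;    # ref to an integer: read $chunk_size-byte records (perlvar)
    my $n = 0;
    while ( my $chunk = <$in> ) {
        my $part = $name . '.' . ++$n;
        open my $out, '>', $part or die "Can't write $part: $!";
        binmode $out;
        print {$out} $chunk;
        close $out or die "Can't close $part: $!";
    }
    close $in;
    return $n;
}
```

Called as, say, `split_file('big.log', 1_048_576)`, it would write big.log.1, big.log.2, etc., with the last chunk holding whatever bytes remain.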
I'm wondering, though, whether there isn't a better solution. Rather than splitting up hundreds of log files into thousands of chunks, why not devise a Perlish solution that scans through the files for the specific things you're looking for? It shouldn't matter how big the data set is, as long as you come up with an efficient way of finding what you want within it. Are you looking for a particular event? Use Perl to scan your hundreds of files and index where the events are recorded.
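That indexing approach might look something like this sketch. The pattern, the filenames, and the shape of the index records are all hypothetical, just to show the idea:

```perl
use strict;
use warnings;

# Instead of splitting files, scan them and record where each
# interesting event occurs: filename, line number, and the line itself.
sub index_events {
    my ($pattern, @files) = @_;
    my @index;
    for my $file (@files) {
        open my $fh, '<', $file or die "Can't read $file: $!";
        while ( my $line = <$fh> ) {
            push @index, { file => $file, line => $., text => $line }
                if $line =~ /$pattern/;
        }
        close $fh;    # explicit close also resets $.
    }
    return \@index;
}
```

Once you have the index, you can jump straight to the events of interest without ever materializing thousands of chunk files on disk.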
Dave
In reply to Re: File splitting help
by davido
in thread File splitting help
by Anonymous Monk