As long as you don't try to read the whole file into memory at once, you will be fine.
Just read a line at a time and open a new output file whenever you see that the size so far is too big and the current line starts with "100". Something like this:

    while (defined(my $line = <$inputFileHandle>)) {
        if ($sizeSoFar > 1e9 and $line =~ /^100/) {
            $outputFileName++;   # magic string increment, e.g. part000 -> part001
            open $outputFileHandle, '>', $outputFileName
                or die "Cannot open $outputFileName for writing: $!\n";
            $sizeSoFar = 0;
        }
        print $outputFileHandle $line;
        $sizeSoFar += length $line;
    }

(This assumes $outputFileName, $outputFileHandle and $sizeSoFar are initialized before the loop, with the first output file already open for writing.)
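For anyone who wants to prototype the logic outside Perl first, here is the same streaming-split idea sketched in Python. The function name, the size limit default, and the "chunk"-prefixed file names are all illustrative choices, not anything from the original post:

```python
def split_on_pattern(in_path, limit=1_000_000_000, prefix="chunk"):
    """Stream in_path line by line; start a new output file once the
    running byte count passes `limit` AND the line starts with "100",
    so each record group stays intact."""
    part = 0
    size = 0
    out = open(f"{prefix}{part:03d}", "w")
    with open(in_path) as fh:
        for line in fh:
            if size > limit and line.startswith("100"):
                out.close()
                part += 1
                out = open(f"{prefix}{part:03d}", "w")
                size = 0
            out.write(line)
            size += len(line)
    out.close()
```

Like the Perl version, this never holds more than one line in memory, so a 10 GB input is no problem.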
In reply to Re: Can I split a 10GB file into 1 GB sizes using my repeating data pattern
by SuicideJunkie
in thread Can I split a 10GB file into 1 GB sizes using my repeating data pattern
by Anonymous Monk