If you make $linesize * $n == $blocksize, then you can read a random block of $n lines from each of the 50 file handles, randomise those lines in memory, and write them out. You then only have to mark entire blocks as used, and the disk reads will be $n times more efficient. Of course, lines that started within $n of each other stand a good chance of ending up within $n * 25 of each other after the randomisation. Perhaps it will be fast enough that you can run two passes, and that will be good enough for engineering purposes.
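A minimal sketch of one pass of the block-wise scheme, assuming fixed-width records so that $line_size * $n divides the file exactly. For brevity it pools blocks from a single demo file rather than 50 open handles, but the mechanics (sysseek to a block boundary, sysread a whole block, shuffle the pooled lines in memory) are the same; all names here are illustrative:

```perl
use strict;
use warnings;
use List::Util qw(shuffle);

my $line_size  = 16;                    # fixed record length, newline included
my $n          = 4;                     # lines per block
my $block_size = $line_size * $n;       # every seek lands on a block boundary

# Build a small demo file of fixed-width records.
my $file = "demo_shuffle.tmp";
open my $out, '>', $file or die "open: $!";
printf {$out} "record-%08d\n", $_ for 1 .. 32;   # 16 bytes per record
close $out;

open my $fh, '<', $file or die "open: $!";
my $blocks = ( -s $file ) / $block_size;

# Visit each block exactly once, in random order, so whole blocks
# can be marked as used rather than individual lines.
my @pool;
for my $b ( shuffle 0 .. $blocks - 1 ) {
    sysseek $fh, $b * $block_size, 0 or die "sysseek: $!";
    sysread $fh, my $buf, $block_size or die "sysread: $!";
    push @pool, unpack "(a$line_size)*", $buf;   # split block into lines
}
close $fh;

# Randomise the pooled lines in memory and write them out.
@pool = shuffle @pool;
open my $res, '>', "$file.shuffled" or die "open: $!";
print {$res} @pool;
close $res;
```

A second pass over "$file.shuffled" with the same code would further break up lines that stayed near their original neighbours.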
Cheers,

In reply to Re^3: Strategy for randomizing large files via sysseek by Random_Walk
in thread Strategy for randomizing large files via sysseek by Anonymous Monk