I did mention that I trade space (memory) for time, didn't I?
Also, my method doesn't involve random seeks; it only rewinds the file to the start, which is a seek to a fixed position in the file. I think you could roughly halve the number of times the file gets rewound by checking whether the next line number is greater than the current line number and, if so, just winding forward to it, as there should be about a 50% chance that the next line number to be written is higher than the current one.
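Roughly like this, as a minimal sketch of the rewind-and-scan idea (the 0-based line counting and the variable names are just for illustration, not my exact code):

#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw(shuffle);

# Sketch: keep only the shuffled line numbers in memory and fetch each
# line by scanning the file, rewinding only when the wanted line lies
# behind the current read position.
my $file = shift or die "usage: $0 FILE\n";
open my $fh, '<', $file or die "Can't open $file: $!";

my $count = 0;
$count++ while <$fh>;    # one counting pass over the file
seek $fh, 0, 0;          # rewind after counting

my $pos = 0;             # index of the next line the filehandle will return

for my $want ( shuffle 0 .. $count - 1 ) {
    if ( $want < $pos ) {    # wanted line is behind us: rewind
        seek $fh, 0, 0;
        $pos = 0;
    }
    my $line;
    while ( $pos <= $want ) {    # otherwise just wind forward
        $line = <$fh>;
        $pos++;
    }
    print $line;
}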
That point is moot though, as Tie::File implements a caching scheme and is a much cleaner solution than my naive but memory-conserving approach.
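For reference, the Tie::File variant might look something like the sketch below; this assumes the shuffled indices still fit in memory and is untested against the original data:

use strict;
use warnings;
use Fcntl 'O_RDONLY';
use Tie::File;
use List::Util qw(shuffle);

# Sketch: only the shuffled indices live in memory, while Tie::File
# keeps a bounded cache of recently used records behind the array.
my $file = shift or die "usage: $0 FILE\n";
tie my @lines, 'Tie::File', $file, mode => O_RDONLY
    or die "Can't tie $file: $!";

for my $i ( shuffle 0 .. $#lines ) {
    print $lines[$i], "\n";    # records come back without the newline
}

untie @lines;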
perl -MHTTP::Daemon -MHTTP::Response -MLWP::Simple -e ' ; # The
$d = new HTTP::Daemon and fork and getprint $d->url and exit;#spider
($c = $d->accept())->get_request(); $c->send_response( new #in the
HTTP::Response(200,$_,$_,qq(Just another Perl hacker\n))); ' # web
In reply to Re: Re: Re: Randomize lines with limited memory by Corion
in thread Randomize lines with limited memory by natch