in reply to Building a new file by filtering a randomized old file on two fields
Thanks so much for your quick input. kcott, it seems that your script is working quite well, and I greatly appreciate your work, especially going the extra distance to reformat your own data and test out your script(!). You are correct that operating time is not a huge issue here - if I can process a file in an hour or less I'll be plenty happy. I'm still digesting the script you wrote, but I do wonder if you or others have any thoughts on changing the read cache size. As you mentioned, and from this link:
http://perldoc.perl.org/Tie/File.html
(section 'memory')
it looks like the cache size is easy to adjust, and that decreasing the memory limit might be appropriate since I am dealing with files of many short records. Perhaps the best thing for me to do will be to benchmark with a few different memory settings.
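For anyone who finds this later, here is a minimal sketch of how I understand the memory option from that documentation page - the filename is a placeholder, and 500_000 is just an arbitrary smaller-than-default value to benchmark with (the docs say the default limit is 2,000,000 bytes):

```perl
use strict;
use warnings;
use Tie::File;

# Tie the file with a smaller read-cache limit, in bytes.
# 'data.txt' is a placeholder for the actual input file.
tie my @lines, 'Tie::File', 'data.txt', memory => 500_000
    or die "Cannot tie data.txt: $!";

# @lines now behaves like an array of the file's lines,
# with at most ~500 KB of them cached in memory at once.
print scalar(@lines), " lines\n";

untie @lines;
```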
Thank you also to sundialsvc4, Anonymous Monk, BrowserUK and RonW for your input. Anonymous Monk and BrowserUK, thank you for the pseudocode, the blog post and the wiki page - reservoir sampling definitely seems a good way of approaching the problem of sampling from a large file, although it is still a fair bit above my head. sundialsvc4, that is also another interesting way to look at it. I am used to thinking in fields/columns rather than bytes. Unfortunately (sorry, I should have made this clearer in the original post), while the number of columns is consistent between lines, the number of characters/bytes is not. I really appreciate all of your help!
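For the record, my understanding of the reservoir sampling idea (Algorithm R from the pages linked above) is something like this sketch - the filename and sample size are made up, and this ignores the two-field filtering part of my actual problem:

```perl
use strict;
use warnings;

# Keep a uniform random sample of $k lines from a file
# of unknown length, in a single pass.
my $k = 1000;                                 # sample size (made-up value)
my @reservoir;
my $n = 0;

open my $fh, '<', 'big_file.txt'              # placeholder filename
    or die "Cannot open big_file.txt: $!";

while ( my $line = <$fh> ) {
    $n++;
    if ( @reservoir < $k ) {
        push @reservoir, $line;               # fill the reservoir first
    }
    else {
        my $j = int rand $n;                  # random index in 0 .. $n-1
        $reservoir[$j] = $line if $j < $k;    # keep line with probability k/n
    }
}
close $fh;

# @reservoir now holds the sampled lines.
```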