http://qs1969.pair.com?node_id=440845


in reply to Re^3: Muy Large File
in thread Muy Large File

Okay, here's where I got with the suggestions above:

1) Tried increasing the buffer to $BUFSIZE ||= 2**31; but got this error: Negative length at ./test.pl line 9. Tried subtracting 1, as in $BUFSIZE ||= 2**31-1, and got this error: Out of memory during "large" request for 2147487744 bytes, total sbrk() is 108672 bytes at ./test.pl line 9. I then ran ulimit, which came back as 'unlimited'. I'm betting the SAs will not change the server config or recompile the kernel on my behalf, so is this a dead end? (A rough sketch of the read loop I'm testing follows after point 2.)

2) BrowserUK, I would like to test threads. If your system was not Solaris 8, please let me know how I can /msg you to get your test code, or post it with a caveat.
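
Back on point 1: here's a rough sketch of the kind of read loop I'm testing. This is not the exact test.pl from the thread; the file-name handling and the per-chunk processing are placeholders, and the buffer size is just a guess at something that will actually allocate.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch of the buffered-read approach under discussion. The key point
    # is that $BUFSIZE has to stay well under 2**31 on a 32-bit perl, or
    # sysread complains ("Negative length") or the allocation itself fails.
    our $BUFSIZE ||= 2**24;    # 16 MB: big, but comfortably allocatable

    my $file = shift @ARGV or die "usage: $0 <file>\n";
    open my $fh, '<', $file or die "open $file: $!";

    my $buffer = '';
    while ( my $read = sysread $fh, $buffer, $BUFSIZE, length $buffer ) {
        # Peel off complete lines; keep any trailing partial line in $buffer.
        my $last_nl = rindex $buffer, "\n";
        next if $last_nl < 0;
        my $chunk = substr $buffer, 0, $last_nl + 1, '';
        # ... process $chunk here ...
    }
    close $fh;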

My last question is about leveraging additional CPUs. Can we coax Perl into coaxing the OS to throw some additional CPUs onto the fire, and would that make any difference? Based on the time output above, is it fair to say that this process is completely I/O bound, meaning that adding CPUs would only increase I/O wait?
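
For what it's worth, here's the sort of split I was imagining for the multi-CPU case. This is purely a hypothetical sketch on my part (the worker count, chunk size, and use of Thread::Queue are my own guesses), not code anyone has posted in this thread:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    # Hypothetical sketch: one reader feeds chunks of whole lines to a small
    # pool of worker threads. This only pays off if the per-line work is
    # CPU-heavy; if the job really is I/O bound, the workers mostly idle.
    my $WORKERS = 4;
    my $queue   = Thread::Queue->new;

    my @workers = map {
        threads->create( sub {
            while ( defined( my $chunk = $queue->dequeue ) ) {
                for my $line ( split /\n/, $chunk ) {
                    # ... per-line work goes here ...
                }
            }
        } );
    } 1 .. $WORKERS;

    open my $fh, '<', $ARGV[0] or die "open: $!";
    my $buffer;
    while ( read $fh, $buffer, 2**20 ) {    # 1 MB chunks
        my $rest = <$fh>;                   # finish the partial last line
        $buffer .= $rest if defined $rest;
        $queue->enqueue($buffer);
    }
    close $fh;

    $queue->enqueue(undef) for @workers;    # one undef per worker = shut down
    $_->join for @workers;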

While searching PerlMonks for other large-file challenges, I've seen references to Sys::Mmap and Parallel::ForkManager. If anyone has used either of these (or others) and feels strongly about one, please let me know.
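
And in case it helps frame the CPU question, here's a hypothetical Parallel::ForkManager sketch of what I mean by splitting the file across processes. The byte-range slicing and the process count are my own guesses, and as noted above it only helps if the per-line work is CPU-bound rather than the disk:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Parallel::ForkManager;

    # Hypothetical sketch: carve the file into byte ranges and let each
    # forked child scan its own slice. Each child skips the partial line at
    # the start of its slice and finishes the line that crosses the end, so
    # every line is handled exactly once.
    my ( $file, $nprocs ) = ( $ARGV[0], 4 );
    my $size  = -s $file or die "can't stat $file";
    my $slice = int( $size / $nprocs ) + 1;

    my $pm = Parallel::ForkManager->new($nprocs);
    for my $i ( 0 .. $nprocs - 1 ) {
        $pm->start and next;    # parent keeps looping; child does the work

        open my $fh, '<', $file or die "open: $!";
        seek $fh, $i * $slice, 0;
        <$fh> if $i;            # discard the partial line at the slice start
        while (<$fh>) {
            # ... per-line work goes here ...
            last if tell($fh) > ( $i + 1 ) * $slice;
        }
        close $fh;
        $pm->finish;
    }
    $pm->wait_all_children;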