I'm running into a bizarre rewinding-file-handle problem on Solaris 8, running a 32-bit perl 5.8.4 with PerlIO against large files.
It's totally run-of-the-mill stuff, nothing special at all. I started noticing that I was processing more records than wc -l reports are in the file. None of my test cases can reproduce it, and it doesn't happen while stepping through the debugger. To make matters worse, the number of records the file "grows" by varies, but tends to be around 500 per 100k. Notes:
If I call tell after each pause and before each continuation, I get:
20040928 13:51:27 startProcessInputFile(PHILLY): pausing; current file offset is 11530000
20040928 13:51:40 continueInputLoad(PHILLY): continuing; current file offset is 11477968
20040928 13:52:04 continueInputLoad(PHILLY): pausing; current file offset is 23064612
20040928 13:52:19 continueInputLoad(PHILLY): continuing; current file offset is 22977360
20040928 13:52:53 continueInputLoad(PHILLY): pausing; current file offset is 34602683
20040928 13:53:07 continueInputLoad(PHILLY): continuing; current file offset is 34593093
20040928 13:53:33 continueInputLoad(PHILLY): pausing; current file offset is 46134989
20040928 13:53:47 continueInputLoad(PHILLY): continuing; current file offset is 46040223
20040928 13:54:11 continueInputLoad(PHILLY): pausing; current file offset is 57668448
20040928 13:54:24 continueInputLoad(PHILLY): continuing; current file offset is 57664020
20040928 13:54:50 continueInputLoad(PHILLY): pausing; current file offset is 69205366
20040928 13:55:06 continueInputLoad(PHILLY): continuing; current file offset is 69145548
20040928 13:55:38 continueInputLoad(PHILLY): pausing; current file offset is 80739978
20040928 13:55:52 continueInputLoad(PHILLY): continuing; current file offset is 80667412
20040928 13:56:18 continueInputLoad(PHILLY): pausing; current file offset is 92271131
20040928 13:56:35 continueInputLoad(PHILLY): continuing; current file offset is 92205201
20040928 13:57:00 continueInputLoad(PHILLY): pausing; current file offset is 103808049
20040928 13:57:16 continueInputLoad(PHILLY): continuing; current file offset is 103807699
20040928 13:57:42 continueInputLoad(PHILLY): pausing; current file offset is 115343814
20040928 13:57:55 continueInputLoad(PHILLY): continuing; current file offset is 115293110
20040928 13:58:30 continueInputLoad(PHILLY): pausing; current file offset is 126879579
20040928 13:58:45 continueInputLoad(PHILLY): continuing; current file offset is 126807910
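For reference, the instrumentation boils down to something like the sketch below. It's a hypothetical self-contained rig (the scratch file and record counts are made up; the real input was a large flat file): on a healthy build the two tell() calls come back equal and the drift is 0, whereas my handle was coming back tens of kilobytes earlier.

```perl
use strict;
use warnings;
use IO::File;
use File::Temp qw(tempfile);

# Stand-in for the real input file: a small scratch file of records.
my ($tmp, $path) = tempfile(UNLINK => 1);
print $tmp "record $_\n" for 1 .. 100;
close $tmp or die "close: $!";

my $fh = IO::File->new($path, 'r') or die "open: $!";
readline($fh) for 1 .. 50;         # consume half the records

my $before = $fh->tell;            # offset logged at "pausing"
# ... pause here: hand control back to the scheduler, fork, etc. ...
my $after  = $fh->tell;            # offset logged at "continuing"

printf "pausing at %d, continuing at %d, drift %d\n",
    $before, $after, $before - $after;
```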
The obvious workaround is to stash the tell offset and just seek back to it when I resume, but the client is a large bank, and I'm not up for kludging around this issue. Any ideas or insight would be greatly appreciated. Thanks.
UPDATE
Fixed by exporting PERLIO=perlio, which allows more than 255 open file descriptors; a tangential question spawned out of this. I need to do some more forensics to determine what sort of errors to expect when a forked process can't grab STDERR to complain, and why this affected my filehandle rather than just failing downstream. For now, It Works, so I'm relatively happy. Thanks for the feedback.
Ezra
In reply to drifting IO::File offset by ezra