in reply to Largefile failure

I work with files that large too, but I use data processing software (SAS and other proprietary tools).

Anyway, here are some wacky thoughts!

0. Turn on autoflush, either with one of the Perl IO modules or plain $|++?
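Something like this, untested, and 'results.txt' is just a stand-in for whatever handle you're writing to:

    use IO::Handle;

    open my $out, '>', 'results.txt' or die "open: $!";
    $out->autoflush(1);   # flush after every write on this handle
    $|++;                 # same idea for the currently selected handle (usually STDOUT)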

1. Compress the file and pipe it into perl with zcat?
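Roughly like this, assuming you gzip the file first (the filename is made up):

    # stream the compressed copy through zcat so perl reads it line by line
    open my $fh, '-|', 'zcat', 'huge_file.gz'
        or die "can't start zcat: $!";

    while (my $line = <$fh>) {
        # ... process $line ...
    }
    close $fh or warn "zcat exited with status $?";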

2. You can read a certain number of lines at a time in perl. Keep track of the byte offset and then seek back to it on the next run after a failure? Or split the program into several pieces that each seek to their own chunk of the file?
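An untested sketch of that bookmark idea -- the filenames are invented, and for offsets past 2GB your perl has to be built with largefile support:

    my $resume = 0;
    if (open my $bm, '<', 'bookmark.txt') {
        chomp($resume = <$bm> || 0);   # byte offset saved by the last run
        close $bm;
    }

    open my $fh, '<', 'huge_file.dat' or die "open: $!";
    seek $fh, $resume, 0 or die "seek: $!";   # 0 = absolute offset (SEEK_SET)

    my $count = 0;
    while (my $line = <$fh>) {
        # ... process $line here ...

        # every so often, remember how far we got
        unless (++$count % 100_000) {
            open my $bm, '>', 'bookmark.txt' or die "bookmark: $!";
            print {$bm} tell($fh), "\n";
            close $bm;
        }
    }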

3. Load it into MySQL and then query it from there?

(I've never tried a file that large in MySQL though.)
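If you do try it, I imagine the bulk-load route would look something like this -- the connection details and table are all invented, and again I haven't done this at that size:

    use DBI;

    my $dbh = DBI->connect(
        'dbi:mysql:database=testdb;mysql_local_infile=1',
        'user', 'password',
        { RaiseError => 1 },
    );

    # LOAD DATA is MySQL's bulk loader, much faster than row-by-row INSERTs
    $dbh->do(q{
        LOAD DATA LOCAL INFILE 'huge_file.dat'
        INTO TABLE big_table
        FIELDS TERMINATED BY '\t'
    });

    $dbh->disconnect;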

Good luck, and let us know if you find any nifty workarounds!!

-SK

PS: On Unix, can you run limit (or ulimit) and make sure you don't have a per-session limit on file size?