mikeraz has asked for the wisdom of the Perl Monks concerning the following question:
Update:
It's one of the filesystem, the OS, or the C library on the host platform. Perl is able to handle everything the host is able to throw at it, but the host system is spraining itself on the dataset. I've re-created the problem environment on other systems (different host OS) and have had no problems working with files up to (at this time) 18G.
I've just returned from perusing the archives here without finding an appropriate response to my problem. I have the debug output from a process, and it's some 11G in size. This is on a Solaris system with largefile support:
split failed after some 44,000,000 lines; Perl is getting to ~43,835,940 lines.

    ettest:/opt/qipgen_log $ cat /etc/mnttab
    ...
    /dev/dsk/c0t9d0s0 /a ufs rw,intr,largefiles,xattr,onerror=panic,suid,dev=800040 1123102217

    ettest:/opt/qipgen_log $ perl -V | less
    Summary of my perl5 (revision 5.0 version 6 subversion 1) configuration:
      Platform:
        osname=solaris, osvers=2.9, archname=sun4-solaris-64int
        uname='sunos localhost 5.9 sun4u sparc sunw,ultra-1'
        ...
        useperlio=undef d_sfio=undef
        uselargefiles=define usesocks=undef
        ...
      Compiler:
        cc='cc', ccflags ='-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
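For what it's worth, a quick way to confirm whether this particular perl binary and filesystem honour 64-bit offsets end to end is to write a byte past the 2 GB boundary and read it back. A minimal sketch as a shell one-liner; the /tmp/bigtest path is illustrative:

    # Sketch: write one byte at a 3 GB offset in a (sparse) test file, then
    # read it back. If either seek fails, 64-bit offsets are not working.
    perl -e '
        open F, ">", "/tmp/bigtest"  or die "open: $!";
        seek F, 3 * 2**30, 0         or die "seek (write): $!";
        print F "x";
        close F                      or die "close: $!";
        open F, "<", "/tmp/bigtest"  or die "open: $!";
        seek F, 3 * 2**30, 0         or die "seek (read): $!";
        read(F, my $b, 1) == 1       or die "read: $!";
        print "ok, read back: $b\n";
        unlink "/tmp/bigtest";
    '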
I've also tried this Perl construct, which should print everything from the first line matching the target timestamp onward:

    perl -ne '$prn++ if /18:04:54.631:/; print if $prn' FILENAME
Because I already have the first 44,000,000 lines extracted through split, the objective has been to pull off the last X lines (tail -1000000 FILENAME is failing too).
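One way around both split and tail is to avoid scanning the first 44,000,000 lines at all: seek backwards from the end of the file in fixed-size chunks and collect only the tail. A minimal sketch, assuming a perl built with uselargefiles as above; FILENAME and the 1,000,000-line count are illustrative:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch: print the last $want lines of a huge file without reading it
    # from the front, by seeking backwards in 1 MB chunks from end-of-file.
    my ($file, $want) = ('FILENAME', 1_000_000);   # illustrative values
    my $chunk = 1 << 20;                           # backward step size

    open my $fh, '<', $file or die "open $file: $!";
    binmode $fh;
    my $pos = -s $fh;                              # start at end of file

    my ($buf, $nl) = ('', 0);
    while ($pos > 0 && $nl <= $want) {
        my $len = $pos >= $chunk ? $chunk : $pos;
        $pos -= $len;
        seek $fh, $pos, 0 or die "seek: $!";
        read($fh, my $block, $len) == $len or die "read: $!";
        $buf = $block . $buf;                      # prepend the new chunk
        $nl  = ($buf =~ tr/\n//);                  # newlines seen so far
    }
    close $fh;

    # Trim to exactly the last $want lines and print them.
    my @lines = split /^/m, $buf;
    splice @lines, 0, @lines - $want if @lines > $want;
    print @lines;

This keeps only the collected tail in memory (roughly the size of those last N lines), and seek/read on a largefiles build use 64-bit offsets, so the 11G file size itself shouldn't be an obstacle.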
Suggestions?
Michael 'yes, next time I'll use the option for the source program to make lots of smaller files' R
Replies are listed 'Best First'.

Re: Largefile failure by sk (Curate) on Aug 04, 2005 at 22:39 UTC
Re: Largefile failure by BrowserUk (Patriarch) on Aug 05, 2005 at 03:13 UTC
Re: Largefile failure by borisz (Canon) on Aug 04, 2005 at 22:05 UTC
    by mikeraz (Friar) on Aug 04, 2005 at 22:06 UTC
Re: Largefile failure by mikeraz (Friar) on Aug 05, 2005 at 15:07 UTC