Using 32-bit Perl 5.8.9, loading a 37 MB file consisting of 37*1024 chunks of 1,024 characters separated by '~' uses 120 MB total and produces no traps:
C:\test>perl -e"print 'x' x 1024 . '~' for 1..37*1024; print 'x'" > hu +ge.file C:\test>dir huge.file 18/03/2010 23:58 38,835,201 huge.file C:\test>\perl32\bin\perl -e" @{ $h{lines } } = split'~', <>" huge.file
Or with 37*32*1024 chunks of 32 characters each, it took 343 MB, again with no traps:
C:\test>perl -e"print 'x' x 32 . '~' for 1..37*32*1024; print 'x'" > h +uge.file C:\test>dir huge.file 19/03/2010 00:06 40,009,729 huge.file C:\test>\perl32\bin\perl -e" @{ $h{lines } } = split'~', <>" huge.file
Given that this is a different OS (Vista) and Perl version, it probably tells you very little, except that unless your machine has a tiny amount of RAM, this probably isn't memory-limit related.
What may be of more interest is that if you set $/ = '~'; you can read the file in '~'-terminated chunks and push them onto the array as you go, so the whole file never has to be held in one scalar and split in place. On my machine the latter test from above required only 105 MB total and ran much faster:
perl -e"local $/ = '~'; push @{ $h{lines } }, $_ while <>; <STDIN>" hu +ge.file
In reply to Re: 32 Bit Perl causing segmentation fault if data is big by BrowserUk
in thread 32 Bit Perl causing segmentation fault if data is big by peacelover1976