Re: Loading Large files eats away Memory
by dws (Chancellor) on May 26, 2005 at 06:51 UTC
Well, yes. Loading a big file into memory takes memory. Note that the way you're loading in the file,
@l_Data = <FILE>;
is going to incur overhead for each line and for the array that holds references to all of the lines. Depending on your line lengths, that overhead could be substantial.
Compare that to
local $/;
$data = <FILE>;
which slurps the entire file into a single string. It'll still take a fair chunk of memory, with (possibly significantly) less overhead.
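If you want to see the difference for yourself, here's a rough sketch, assuming the CPAN module Devel::Size is available (its total_size() reports a structure's footprint, including per-element overhead; the filename is hypothetical):
use strict;
use warnings;
use Devel::Size qw(total_size);

open my $fh, '<', 'bigfile.txt' or die $!;
my @lines = <$fh>;                        # array of lines
seek $fh, 0, 0 or die $!;
my $slurp = do { local $/; <$fh> };       # single string
close $fh;

printf "array of lines: %d bytes\n", total_size( \@lines );
printf "single string:  %d bytes\n", total_size( \$slurp );
The array version will generally report noticeably more, since every line carries its own scalar overhead.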
However, if your process involves morphing the text of the existing file, you might be better off holding an array of lines, since the incremental cost of copying a single line is considerably less than that of copying a 25MB string.
What can you say about your processing needs? Perhaps there's a better way yet.
Re: Loading Large files eats away Memory
by BrowserUk (Patriarch) on May 26, 2005 at 06:51 UTC
Take a look at Tie::File. It allows you to treat a file as an array, without the overhead of having it all loaded at once.
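A minimal sketch of the idea, assuming a file named bigfile.txt (Tie::File keeps only a bounded cache of lines in memory):
use strict;
use warnings;
use Tie::File;

tie my @lines, 'Tie::File', 'bigfile.txt'
    or die "Cannot tie bigfile.txt: $!";

print "last line: $lines[-1]\n";    # random access without slurping
$lines[ 0 ] = 'replacement text';   # assignments write through to the file

untie @lines;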
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
In that case I would load the file into memory as a string (25MB plus a little bit) and then open that string as a file using perl's "memory file" facility.
I'd then pass the memory filehandle to Tie::File and have it take care of performing the indexing and seeking required to treat the string as an array of lines.
If the file needs to be modified, it just requires rewinding the file and a single write to update it when processing is finished.
Using Tie::File's 'memory' option you can decide how much memory you want to trade for speed:
#! perl -slw
use strict;
use Tie::File;

open IN, '<:raw', $ARGV[ 0 ] or die $!;
## Setting $/ to the file's size slurps the whole file in a single read
my $data = do{ local $/ = -s( $ARGV[ 0 ] ); <IN> };
close IN;

## Open the string as a "memory file" and hand that filehandle to Tie::File
open my $fh, '+<', \$data or die $!;
tie my @lines, 'Tie::File', $fh, memory => 20_000_000;

print for @lines[ 100_000, 200_000, 300_000, 400_000 ];
@lines[ 100_000, 200_000, 300_000, 400_000 ] = ( 'modified' ) x 4;
print for @lines[ 100_000, 200_000, 300_000, 400_000 ];

<STDIN>; ## Approx 60 MB here. 25MB file + 20 MB I configured for Tie::File + overhead.
__END__
P:\test>460532 bigfile.txt
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
modified
modified
modified
modified
Re: Loading Large files eats away Memory
by ikegami (Patriarch) on May 26, 2005 at 07:20 UTC
Just two quick comments before heading to bed...
@l_Data = <FILE>; uses up at least twice 25MB (plus overhead), since the entire file is placed on the stack before being put into @l_Data. push(@l_Data, $_) while <FILE>; is probably much better, since only one line is on the stack at any given time.
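A minimal, self-contained version of that pattern (the filename is assumed for illustration):
open my $fh, '<', 'bigfile.txt' or die $!;
my @l_Data;
push @l_Data, $_ while <$fh>;   # only one line on the stack at a time
close $fh;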
perl generally doesn't release memory back to the operating system, only to itself. perl can reuse freed memory (such as the memory used by @l_Data), but the process size doesn't shrink. That's why the second call to Test barely uses any additional memory from the OS's perspective.
Re: Loading Large files eats away Memory
by sk (Curate) on May 26, 2005 at 07:10 UTC
This is not related to the question you had, but just FYI: it is better to check for failure on open:
open(FILE, 'myfile') or die $!;
I haven't checked the module suggested by BrowserUk, but I tried this:
my $i = 0;
$l_Data[$i++] = $_ while (<FILE>);
Reading the file line by line and populating the array this way uses only 45MB.
If it is not too complicated, can you give us some idea of why you need to load the file into memory? For example, most statistics-based algorithms can be implemented so that you don't have to keep the entire file in memory; one streaming sketch follows below.
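As a hedged illustration of that point: a running mean and variance (Welford's method) needs only the current line in memory, never the whole file. The input format (one number per line) and filename are assumed:
use strict;
use warnings;

open my $fh, '<', 'numbers.txt' or die $!;   # hypothetical input: one number per line
my ( $n, $mean, $m2 ) = ( 0, 0, 0 );
while ( my $x = <$fh> ) {
    chomp $x;
    $n++;
    my $delta = $x - $mean;
    $mean += $delta / $n;                    # update the running mean
    $m2   += $delta * ( $x - $mean );        # accumulate squared deviations
}
close $fh;

my $variance = $n > 1 ? $m2 / ( $n - 1 ) : 0;
print "n=$n mean=$mean variance=$variance\n";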
cheers
SK
Re: Loading Large files eats away Memory
by Hena (Friar) on May 26, 2005 at 07:07 UTC
If you are handling big files on a per-line basis, then use a while loop instead of slurping:
while (<INPUT>) {
    my $line = $_;
    # handle the line here
}
Of course, by setting '$/' you can read chunks other than lines.
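For example, a minimal sketch of reading fixed-size chunks by setting $/ to a reference to a record size (the filename and 1MB size are illustrative):
open my $in, '<', 'myfile' or die $!;
{
    local $/ = \( 1024 * 1024 );    # fixed-length record mode: 1MB per read
    while ( my $chunk = <$in> ) {
        # process $chunk here
    }
}
close $in;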
Re: Loading Large files eats away Memory
by Fletch (Bishop) on May 26, 2005 at 12:10 UTC
You say you need the complete file in an array, but you may be able to rework what you're doing to avoid this. See if you can figure out the minimal set of items from the file that you need to work on (e.g. the first two columns are the username and the frobnitz; the rest can be read on demand). Then either:
- throw the data into a DB or DBM file keyed by username/frobnitz
- preprocess the file once line by line building a hash of username/frobnitz to the offset returned by tell
Once you've done one of those things you process the list of keys, retrieving the other data as needed (by reading from a tied hash or using seek to move back and forth in the file).
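A minimal sketch of the second approach, with a hypothetical layout (whitespace-separated lines whose first column is the key):
use strict;
use warnings;

open my $fh, '<', 'bigfile.txt' or die $!;

# First pass: remember the byte offset at which each record starts.
my %offset;
while ( 1 ) {
    my $pos  = tell $fh;
    my $line = <$fh>;
    last unless defined $line;
    my ( $key ) = split ' ', $line;
    $offset{$key} = $pos;
}

# Later: fetch any record on demand without holding the file in memory.
for my $key ( sort keys %offset ) {
    seek $fh, $offset{$key}, 0 or die $!;   # 0 = SEEK_SET
    my $record = <$fh>;
    # process $record here
}
close $fh;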
Re: Loading Large files eats away Memory
by Anonymous Monk on May 27, 2005 at 15:01 UTC
I have found that the following works well for me.
LOOP: foreach $key (@keys) {
    $UseThisFile = $LogFileHash{$key};
    push(@FileList, $UseThisFile);
    open(FILE, "<$UseThisFile") || die "Could not open $UseThisFile because $!\n";
    # count the lines first; backticks capture the output, whereas
    # system() would only return the exit status
    ($count) = `wc -l $UseThisFile` =~ /(\d+)/;
    # number the lines from the end of the file while reading forward,
    # matching against $_ (the line just read)
    for ($i = $count; <FILE>; $i--) {
        $FoundLine = $i, last LOOP if (m/$Value/i);
        # $FoundLine = $i, last LOOP if (index($_, $Value) >= 0);
    }
    close FILE;
}