By default Perl reads a file one line at a time; you can then process each line and write it back out to a new file.
You can tell Perl to use any string of characters as the input record separator (the $/ variable) and then read your file one record at a time. It defaults to the end-of-line character for your OS, so Perl reads the file a line at a time:

    open FILE_HANDLE, "<", $File_Name
        or die "probs opening $File_Name: $!\n";
    while (<FILE_HANDLE>) {     # defaults to one line at a time
        # do something to this line that is now
        # contained in the $_ variable
    }
    # imagine each record starts with "New Record"
    local $/ = "New Record";
    open FILE_HANDLE, "<", $File_Name
        or die "probs opening $File_Name: $!\n";
    while (<FILE_HANDLE>) {
        # do something to this block now
        # contained in the $_ variable
    }
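To make the idea above concrete, here is a self-contained sketch that reads from an in-memory filehandle instead of a file on disk, so it runs as-is; the sample data and variable names are made up for illustration. Note that readline returns the separator as part of each record, so chomp strips it off:

```perl
use strict;
use warnings;

# made-up sample data: two records, each introduced by "New Record"
my $data = "New Record one\nmore of one\nNew Record two\n";

open my $fh, '<', \$data or die "probs opening in-memory handle: $!\n";

my @records;
{
    local $/ = "New Record";        # read up to and including this string
    while (my $record = <$fh>) {
        chomp $record;              # chomp removes $/, i.e. "New Record"
        next unless $record =~ /\S/;    # skip the empty chunk before the first marker
        push @records, $record;
    }
}
close $fh;

print "got record: $_\n" for @records;
```

Using local inside a block restores the default $/ afterwards, so the rest of the program still reads line by line.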
There are also various modules that tie a file on disk to a data structure without reading it all in, which may also be helpful. Have a look at Tie::File, for instance.
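A minimal sketch of Tie::File, which presents a file on disk as a Perl array without slurping the whole thing into memory; the temp-file setup is only here so the example is self-contained:

```perl
use strict;
use warnings;
use Tie::File;
use File::Temp qw(tempfile);

# create a small throwaway file to tie to (illustration only)
my ($tmp_fh, $tmp_name) = tempfile();
print $tmp_fh "first line\nsecond line\nthird line\n";
close $tmp_fh;

# each element of @lines is one line of the file, fetched on demand
tie my @lines, 'Tie::File', $tmp_name
    or die "probs tying $tmp_name: $!\n";

print "file has ", scalar @lines, " lines\n";
print "line 2 is: $lines[1]\n";

$lines[1] = "SECOND LINE";      # assignment writes straight back to the file
untie @lines;
```

The win for huge files is that only the lines you touch are read, and edits are written back in place rather than via a full rewrite.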
If you give some more specific details of what you are trying to do, we can give more help.
Cheers,
R.
In reply to Re: how can i work with huge files?
by Random_Walk
in thread how can i work with huge files?
by morfeas