in reply to FIle Seeking
I wrote an HL7 Browser using Perl/Tk that is regularly asked to slurp in 100+ MB files. Not only that, but portions of that data are parsed and loaded into HList widgets. Obviously it ends up consuming 200, 300 or more MB of system memory, but the point is that if your loop is not surviving past line 4300, then there is another issue. The 100 MB files I deal with have hundreds of thousands of lines in them.
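Just to illustrate what I mean, here's a bare-bones line-by-line read loop (the filename is made up, not your code): memory stays flat no matter how big the file gets, which is why dying at line 4300 smells like something other than file size.

```perl
# Minimal sketch: read a large file one line at a time.
# Nothing is accumulated, so memory use does not grow with file size.
use strict;
use warnings;

my $file = 'huge_input.hl7';    # hypothetical filename
open my $fh, '<', $file or die "Can't open $file: $!";
while ( my $line = <$fh> ) {
    chomp $line;
    # ... process $line here ...
}
close $fh;
```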
I'd definitely agree that if you can think of a better way to handle your situation then you should do so, but I would hate to see you go through a bunch of conversion work only to find that it wasn't the real problem. It might be helpful if we could take a gander at more of your code here. Another thought is that perhaps you have a file with an untimely EOF marker in it; a quick way to check for one is sketched below.
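If by EOF marker we're talking about a stray Ctrl-Z (0x1A) left over from a DOS-style transfer, something along these lines would flag it. This is just a rough sketch with a made-up filename; reading in raw mode keeps the marker from cutting the read short on Windows.

```perl
# Scan for a stray ^Z (0x1A) that can make a text-mode read stop early.
use strict;
use warnings;

open my $fh, '<:raw', 'huge_input.hl7' or die "Can't open: $!";
my $line_no = 0;
while ( my $line = <$fh> ) {
    $line_no++;
    print "Found a ^Z (EOF marker) on line $line_no\n" if $line =~ /\x1A/;
}
close $fh;
```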
As a more permanent solution I like maverick's idea of using MySQL or PostgreSQL, but another method that might work and is far simpler to implement is a GDBM database. This is another type of file that I have seen work well even as it grows to tens of MB in size. (It works fine at the 150+ MB size, but can take hours to do a reorganization.) Do a search for GDBM_File for more information; a minimal example follows. Yet another way to do it might be to use the filesystem to break your information up into directories, to make it a little faster to parse.
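The gist with GDBM_File is that you tie a hash to an on-disk database, so lookups don't require holding the whole data set in memory. Something like this (database name and keys are just for illustration):

```perl
# Tie a hash to an on-disk GDBM database; entries are read and written
# through the hash but live on disk, not in memory.
use strict;
use warnings;
use GDBM_File;

my %db;
tie %db, 'GDBM_File', 'messages.gdbm', &GDBM_WRCREAT, 0640
    or die "Can't tie messages.gdbm: $!";

$db{'record_4300'} = 'some parsed data';    # write a record
print $db{'record_4300'}, "\n";             # read it back

untie %db;
```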
Good luck, however you decide to proceed,
{NULE}
--
http://www.nule.org