in reply to processing large files

Your problem appears to be that the data file is too big. That would indicate you are reading the whole file into memory and then processing it. You need to switch to an algorithm that processes each line as it is read (if possible). If this is the case, post the code you have so far.
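To illustrate what I mean, here is a rough sketch (using a placeholder $fileName and no particular processing step):

    # read one line at a time -- memory use stays small
    # no matter how big the file is
    open(IN, "<$fileName") or die "Can't open $fileName: $!\n";
    while (my $line = <IN>) {
        chomp $line;
        # ...whatever per-line processing you need...
    }
    close(IN);

    # by contrast, something like this slurps the entire file into
    # memory and is what usually causes trouble with very large files:
    #   my @lines = <IN>;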

I don't know the details of the -Duselargefiles switch, but as far as I can tell it is a build-time Configure option for large file support, not something about the Perl script itself. Can anyone clarify?
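One way to check what a given perl was built with is to ask the Config module (this only reports the build settings, nothing more):

    use Config;
    print "uselargefiles = $Config{uselargefiles}\n";
    print "lseeksize     = $Config{lseeksize}\n";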

Replies are listed 'Best First'.
Re: Re: processing large files
by filmo (Scribe) on Jul 04, 2001 at 22:44 UTC
    I agree. Since you indicated that it is a file of records (plural), it seems like there would be a way to read each record individually and accomplish your goals.
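    For instance, if the records happen to be separated by blank lines, a sketch like this (assuming a blank-line separator and a placeholder $fileName) would read them one at a time instead of slurping the file:

        # paragraph mode: $/ = "" makes <> return one
        # blank-line-separated record per read
        local $/ = "";
        open(IN, "<$fileName") or die "Can't open $fileName: $!\n";
        while (my $record = <IN>) {
            # ...process one record, then move on to the next...
        }
        close(IN);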

    In fact, can someone give an example of the need to read a >2gig file and process it as a chunk -- binary files notwithstanding?
    --
    Filmo the Klown

Re: Re: processing large files
by Anonymous Monk on Jul 07, 2001 at 01:08 UTC
    I think I am reading the file line by line. Here is some code from a test
    script I am trying to get to work:

        open(IN_FILE, "cat $fileName |") or die "Can't cat inputfile: $!\n";
        while (<IN_FILE>) {
            chop;
            # ...some processing code...
        }


    any ideas?

    -E