in reply to Re: Reading HUGE file multiple times
in thread Reading HUGE file multiple times
Hi there,
Thanks for the tips. My data looks something like
>ID
Data (a verrry long string of varying length in a single line)
>ID again
Data again
Indexing might be a good idea. Maybe I could index only the ID lines (skipping the data line that follows) and then, when accessing an entry, just add 1 to the index? I need to extract the entries twice in the code, in different subroutines, and each time the subroutine specifies what to do with them. I don't know whether it is a good idea to store it all in a hash: I only need a fragment of the data on the first pass, but the whole data entry on the second. I don't have the IDs in advance; the subroutine specifies which one I need and what to do with it. I've tried
    $Library_Index{<$Library>} = tell( $Library ), scalar <$Library> until eof( $Library );

but it takes a very long time to run. I wonder if there is a better way to do it, since this would be a bottleneck.
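For what it's worth, here is a minimal sketch of the kind of offset index I have in mind (the file name data.fa, the hash %index, the subroutines fetch_fragment/fetch_entry and the ID 'SomeID' are just placeholder names, not anything from the real code): read the file once, chomp each >ID header so the newline doesn't end up in the hash key, remember tell() immediately after the header (i.e. the start of the data line), and later seek back to that offset to read either a fragment or the whole line.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Build an index of byte offsets: ID => start of its data line.
    # 'data.fa' is a placeholder for the real library file name.
    open my $lib, '<', 'data.fa' or die "Cannot open data.fa: $!";

    my %index;
    while ( my $header = <$lib> ) {
        chomp $header;
        next unless $header =~ /^>(\S+)/;   # header lines start with '>'
        $index{$1} = tell $lib;             # position right after the header line
        scalar <$lib>;                      # skip the (long) data line
    }

    # First pass: pull only the first $len bytes of an entry's data.
    # Assumes the data line is at least $len bytes long.
    sub fetch_fragment {
        my ( $id, $len ) = @_;
        return unless exists $index{$id};
        seek $lib, $index{$id}, 0 or die "seek failed: $!";
        read $lib, my $frag, $len;
        return $frag;
    }

    # Second pass: pull the whole data line for an entry.
    sub fetch_entry {
        my ($id) = @_;
        return unless exists $index{$id};
        seek $lib, $index{$id}, 0 or die "seek failed: $!";
        my $data = <$lib>;
        chomp $data;
        return $data;
    }

    # 'SomeID' is a made-up example; the real subroutines would supply the ID.
    print fetch_fragment( 'SomeID', 50 ), "\n";
    print fetch_entry( 'SomeID' ), "\n";

Storing only offsets keeps the memory footprint tiny no matter how long the data lines are; the actual data is only read when a subroutine asks for it, whether it wants a fragment or the whole entry.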
Replies are listed 'Best First'.

Re^3: Reading HUGE file multiple times
by BrowserUk (Patriarch) on Apr 28, 2013 at 13:30 UTC
by Anonymous Monk on Apr 28, 2013 at 13:36 UTC
by BrowserUk (Patriarch) on Apr 28, 2013 at 13:45 UTC
by Anonymous Monk on Apr 28, 2013 at 14:16 UTC
by BrowserUk (Patriarch) on Apr 28, 2013 at 14:42 UTC
by Anonymous Monk on Apr 28, 2013 at 14:06 UTC