in reply to how to split huge file reading into multiple threads
> I have a huge file of millions of record.... But it takes huge time around 2+ hours
How many millions?
Perl can process 4 million records in about 2.5 seconds:

```
perl -MTime::HiRes=time -E"BEGIN{$t=time()}" -nle"++$n }{ printf qq[$n records in %f seconds\n], time-$t" 250MB.CSV
4194304 records in 2.518000 seconds
```
So, how about you post your code and let us help you fix it?
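For reference, the timing one-liner above can be unpacked into a plain script. This is a minimal sketch, not the original poster's code; it generates its own small sample file so it runs standalone, whereas in real use you would open the actual CSV instead:

```perl
use strict;
use warnings;
use Time::HiRes qw(time);
use File::Temp qw(tempfile);

# Build a small sample file so the sketch is self-contained;
# in real use, set $file to the path of the actual CSV.
my ( $fh_out, $file ) = tempfile( UNLINK => 1 );
print $fh_out "record,$_\n" for 1 .. 100_000;
close $fh_out;

# Time a single-threaded, line-by-line read: this is the baseline
# to beat before reaching for multiple threads.
my $t0 = time();
open my $fh, '<', $file or die "Cannot open $file: $!";
my $n = 0;
++$n while <$fh>;    # count records sequentially
close $fh;
printf "%d records in %f seconds\n", $n, time() - $t0;
```

The point of the benchmark is that sequential reading is rarely the bottleneck; if a single pass over millions of records takes hours, the cost is almost certainly in the per-record processing, not the I/O.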
Re^2: how to split huge file reading into multiple threads

- by sagarika (Novice) on Aug 30, 2011 at 09:39 UTC
- by BrowserUk (Patriarch) on Aug 30, 2011 at 09:57 UTC
- by sagarika (Novice) on Sep 02, 2011 at 08:34 UTC
- by BrowserUk (Patriarch) on Sep 02, 2011 at 09:06 UTC
- by sagarika (Novice) on Sep 07, 2011 at 06:06 UTC