in reply to Slurping BIG files into Hashes
A quick back-of-the-envelope calculation: 30 minutes to load ~160,000 records is roughly 90 records/second, which seems pretty slow. Have you tried instrumenting the code to take some timings? If you dumped a timestamp (or a delta) every 1K records, you might see an interesting slowdown pattern. Correlating this with a trace of your system's memory availability might show whether memory is the issue, particularly if the system starts swapping at some point during the load.
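Something along these lines, say (a minimal sketch only; the file name, the tab-separated format, and the %data hash are assumptions standing in for your actual load loop):

use strict;
use warnings;
use Time::HiRes qw(time);

my %data;
my $count = 0;
my $last  = time;

open my $fh, '<', 'bigfile.dat' or die "open: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($key, $value) = split /\t/, $line, 2;
    $data{$key} = $value;

    # Every 1K records, report how long that batch took.
    # If the deltas grow steadily (or jump suddenly), that's
    # your slowdown pattern to correlate with memory usage.
    if (++$count % 1_000 == 0) {
        my $now = time;
        printf "%8d records, last 1K took %.3fs\n", $count, $now - $last;
        $last = $now;
    }
}
close $fh;

If the per-batch times stay flat until some point and then blow up, that point is probably where the system starts swapping.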
Can you say more about the form of the keys and values? There might be something about their nature that you could exploit to find a different data structure.
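Purely as a hypothetical illustration, since we don't yet know what your data looks like: if the keys turned out to be dense, non-negative integers, a plain array would carry none of a hash's per-key overhead:

use strict;
use warnings;

# Hypothetical: keys are dense integer IDs, so an array index
# replaces the hash key -- no hashing, no bucket storage per entry.
# File name and tab-separated format are assumptions.
my @data;
open my $fh, '<', 'bigfile.dat' or die "open: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($key, $value) = split /\t/, $line, 2;
    $data[$key] = $value;
}
close $fh;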
Replies are listed 'Best First'.
Re: Re: Slurping BIG files into Hashes
by waswas-fng (Curate) on Jun 18, 2003 at 18:56 UTC
by Elgon (Curate) on Jun 18, 2003 at 19:33 UTC
by jsprat (Curate) on Jun 19, 2003 at 02:26 UTC
Re: Re: Slurping BIG files into Hashes
by waswas-fng (Curate) on Jun 18, 2003 at 18:46 UTC