in reply to Table Generation vs. Flat File vs. DBM
Optimizing here largely depends on whether you need to access all of these values during a single run of your program. If you don't need all the values in one run, it may save you some time to move the hash data into an external indexed file, so Perl won't have to rebuild the hash from a list every time the process starts. If you do need all the data all the time, you could try Storable to make Perl load the data structures faster than it can build them from source code.
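A minimal sketch of both approaches: a tied DBM file for the sparse-access case and a Storable cache for the load-everything case. The file names (`table.db`, `table.storable`), the key `some_key`, and the choice of DB_File (which needs Berkeley DB installed; SDBM_File is a core fallback) are all my assumptions, not anything from the original question.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl;

# Sparse access: tie the hash to an indexed DBM file so only the
# records you actually touch are read from disk.
use DB_File;
tie my %lookup, 'DB_File', 'table.db', O_RDWR|O_CREAT, 0644
    or die "Cannot tie table.db: $!";
print "$lookup{some_key}\n" if exists $lookup{some_key};
untie %lookup;

# Full access: serialize the hash once, then deserialize at startup,
# which is much faster than rebuilding it from Perl source.
use Storable qw(nstore retrieve);
# nstore(\%big_hash, 'table.storable');  # done once, when the data changes
my $table = retrieve('table.storable');
print "$table->{some_key}\n";
```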
A real database would be ideal, as it could cache the data in RAM between runs of your programs, but you already said that was out of the question. The "next best thing" would be to start your program through PPerl, so the bulk of it stays resident, or to write a custom "data daemon" that holds only that data in RAM and answers queries against it.
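For the "data daemon" idea, here is a minimal sketch: a process that loads the data once (here from a hypothetical Storable cache) and answers single-key lookups over a Unix-domain socket. The socket path and the one-key-per-line protocol are made up for illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Socket;
use IO::Socket::UNIX;
use Storable qw(retrieve);

# Load the data once; it stays resident for the life of the daemon.
my $table = retrieve('table.storable');   # hypothetical cache file

my $path = '/tmp/data_daemon.sock';
unlink $path;
my $server = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $path,
    Listen => 5,
) or die "Cannot listen on $path: $!";

# Protocol: client sends one key per line, daemon replies with the value.
while (my $client = $server->accept) {
    chomp(my $key = <$client>);
    my $value = defined $table->{$key} ? $table->{$key} : '';
    print {$client} "$value\n";
    close $client;
}
```

A real daemon would of course need error handling, concurrent clients, and a clean shutdown path; this only shows the core idea of keeping the hash in one resident process.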
All these solutions sound interesting, but first of all you need to benchmark, benchmark, Benchmark, and keep track of your changes together with the benchmarks, for example in an Excel sheet, so you can track your progress and, much more importantly, figure out whether the increased risk to maintainability and program operation is worth the increase in throughput. If you have one long-running process, the initialization cost is most likely amortized over time anyway, and changes to your processing algorithm will yield much better results than changes to the process startup.
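The core Benchmark module makes that kind of comparison easy. A sketch, assuming a cache file named `table.storable` already exists and using a made-up `map` expression as a stand-in for "rebuild the hash from source":

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use Storable qw(retrieve);

# Compare rebuilding the hash in Perl against deserializing a cache.
cmpthese(100, {
    from_source   => sub { my %h = map { $_ => $_ * 2 } 1 .. 50_000 },
    from_storable => sub { my $h = retrieve('table.storable') },
});
```

`cmpthese` prints a table of runs per second plus the relative speedup, which is exactly the number you want to record next to each change.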
Re: Re: Table Generation vs. Flat File vs. DBM
by mhearse (Chaplain) on May 05, 2004 at 06:39 UTC