roju has asked for the wisdom of the Perl Monks concerning the following question:
I have a large dataset that needs conversion from one format to another. As there are several minor input formats and output formats, I've separated the input filter into input modules, and the output filter into output modules.
So right now my program looks (simplified) something like

    use Input::X;
    use Output::Y;

    my @data = Input::X::read();
    print Output::Y::get_output(@data);
Now, the datasets are large, and the last test run used something like 700 MB of RAM. While that's not so terrible, it's not great either.
Is there a way to preserve modularity and speed things up? It would be possible to run it like a typical Unix filter, emitting a line of output for every line of input, but I want to avoid dependencies between the input and output module code. Is there a 'standard' way to do this?
Again, I'd love to have it run as a traditional filter, but I'd also like to do so with minimal coupling between the input and output sides.
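One common low-coupling pattern is to have the input module hand back an iterator (a closure) instead of a full array, and have the output module format a single record at a time; only a small driver loop knows about both sides. Below is a minimal sketch of that idea, assuming hypothetical make_reader() and format_record() interfaces (these names are illustrative, not from the original post):

    #!/usr/bin/perl
    use strict;
    use warnings;

    package Input::X;

    # make_reader() returns a closure that yields one record per call;
    # here each line of the filehandle is treated as one record.
    sub make_reader {
        my ($fh) = @_;
        return sub {
            my $line = <$fh>;
            return unless defined $line;
            chomp $line;
            return $line;    # parse into a record structure as needed
        };
    }

    package Output::Y;

    # format_record() turns a single record into a line of output.
    sub format_record {
        my ($record) = @_;
        return "$record\n";  # real formatting would go here
    }

    package main;

    # The driver: pull one record, print one record, never hold the
    # whole dataset in memory.
    my $next = Input::X::make_reader(\*STDIN);
    while ( defined( my $record = $next->() ) ) {
        print Output::Y::format_record($record);
    }

The point of the sketch is that neither module knows the other exists; the driver is the only place the two sides meet, so memory use stays bounded by one record rather than the whole dataset.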
janitored by ybiC: Corrected "effiecient" mis-spelling in node title
Replies are listed 'Best First'.
Re: Efficient and modular filtering
by eric256 (Parson) on May 27, 2004 at 15:47 UTC
by roju (Friar) on May 27, 2004 at 16:37 UTC
Re: Efficient and modular filtering
by graff (Chancellor) on May 28, 2004 at 04:07 UTC