I have a large dataset that needs conversion from one format to another. Since there are several minor input formats and output formats, I've separated the input filter into input modules and the output filter into output modules.
So right now my program looks (simplified) something like

    use Input::X;
    use Output::Y;

    my @data = Input::X::read();
    print Output::Y::get_output(@data);
Now, the datasets are large, and the last test run used something like 700 MB of RAM. While that's not terrible, it's not great either.
Is there a way to preserve modularity and speed things up? It would be possible to run it like a typical Unix filter, emitting a line of output for every line of input, but I want to avoid dependencies between the input and output module code. Is there a 'standard' way to do this?
Again, I'd love to have it run as a traditional filter, but I'd also like to do so with minimal coupling between the input and output sides.
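One common pattern that keeps the sides decoupled is to have each input module return an iterator (a closure that yields one record per call, undef at end of input) and each output module expose a per-record formatter. The main program then owns the loop, so neither side knows about the other and only one record is in memory at a time. The sketch below is illustrative only: the module names mirror the question, but the subs (`reader`, `format_record`) and the stand-in data are assumptions, not the poster's actual API.

```perl
use strict;
use warnings;

package Input::X;
# Hypothetical streaming interface: returns a closure that yields
# one record per call and undef when the input is exhausted.
sub reader {
    my @fake_records = ("a 1", "b 2", "c 3");   # stand-in for real parsing
    return sub { shift @fake_records };
}

package Output::Y;
# Hypothetical per-record formatter: takes one record, returns one line.
sub format_record {
    my ($rec) = @_;
    return uc($rec) . "\n";                     # stand-in for real formatting
}

package main;
# The main loop is the only place that couples input to output,
# and it holds just one record at a time.
my $next = Input::X::reader();
while ( defined( my $rec = $next->() ) ) {
    print Output::Y::format_record($rec);
}
```

With this shape, swapping `Input::X` for another input module (or `Output::Y` for another output module) only requires that it honor the same two-sub contract; the modules never call each other directly.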
janitored by ybiC: Corrected "effiecient" mis-spelling in node title
In reply to Efficient and modular filtering by roju